Oct 07 12:16:57 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 07 12:16:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 07 12:16:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 07 12:16:57 localhost kernel: BIOS-provided physical RAM map:
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 07 12:16:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 07 12:16:57 localhost kernel: NX (Execute Disable) protection: active
Oct 07 12:16:57 localhost kernel: APIC: Static calls initialized
Oct 07 12:16:57 localhost kernel: SMBIOS 2.8 present.
Oct 07 12:16:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 07 12:16:57 localhost kernel: Hypervisor detected: KVM
Oct 07 12:16:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 07 12:16:57 localhost kernel: kvm-clock: using sched offset of 8909329230 cycles
Oct 07 12:16:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 07 12:16:57 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 07 12:16:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 07 12:16:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 07 12:16:57 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 07 12:16:57 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 07 12:16:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 07 12:16:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 07 12:16:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 07 12:16:57 localhost kernel: Using GB pages for direct mapping
Oct 07 12:16:57 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 07 12:16:57 localhost kernel: ACPI: Early table checksum verification disabled
Oct 07 12:16:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 07 12:16:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 07 12:16:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 07 12:16:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 07 12:16:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 07 12:16:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 07 12:16:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 07 12:16:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 07 12:16:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 07 12:16:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 07 12:16:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 07 12:16:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 07 12:16:57 localhost kernel: No NUMA configuration found
Oct 07 12:16:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 07 12:16:57 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 07 12:16:57 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 07 12:16:57 localhost kernel: Zone ranges:
Oct 07 12:16:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 07 12:16:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 07 12:16:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 07 12:16:57 localhost kernel:   Device   empty
Oct 07 12:16:57 localhost kernel: Movable zone start for each node
Oct 07 12:16:57 localhost kernel: Early memory node ranges
Oct 07 12:16:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 07 12:16:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 07 12:16:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 07 12:16:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 07 12:16:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 07 12:16:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 07 12:16:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 07 12:16:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 07 12:16:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 07 12:16:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 07 12:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 07 12:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 07 12:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 07 12:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 07 12:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 07 12:16:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 07 12:16:57 localhost kernel: TSC deadline timer available
Oct 07 12:16:57 localhost kernel: CPU topo: Max. logical packages:   8
Oct 07 12:16:57 localhost kernel: CPU topo: Max. logical dies:       8
Oct 07 12:16:57 localhost kernel: CPU topo: Max. dies per package:   1
Oct 07 12:16:57 localhost kernel: CPU topo: Max. threads per core:   1
Oct 07 12:16:57 localhost kernel: CPU topo: Num. cores per package:     1
Oct 07 12:16:57 localhost kernel: CPU topo: Num. threads per package:   1
Oct 07 12:16:57 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 07 12:16:57 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 07 12:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 07 12:16:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 07 12:16:57 localhost kernel: Booting paravirtualized kernel on KVM
Oct 07 12:16:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 07 12:16:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 07 12:16:57 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 07 12:16:57 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 07 12:16:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 07 12:16:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 07 12:16:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 07 12:16:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 07 12:16:57 localhost kernel: random: crng init done
Oct 07 12:16:57 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 07 12:16:57 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 07 12:16:57 localhost kernel: Fallback order for Node 0: 0 
Oct 07 12:16:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 07 12:16:57 localhost kernel: Policy zone: Normal
Oct 07 12:16:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 07 12:16:57 localhost kernel: software IO TLB: area num 8.
Oct 07 12:16:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 07 12:16:57 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 07 12:16:57 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 07 12:16:57 localhost kernel: Dynamic Preempt: voluntary
Oct 07 12:16:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 07 12:16:57 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 07 12:16:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 07 12:16:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 07 12:16:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 07 12:16:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 07 12:16:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 07 12:16:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 07 12:16:57 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 07 12:16:57 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 07 12:16:57 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 07 12:16:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 07 12:16:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 07 12:16:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 07 12:16:57 localhost kernel: Console: colour VGA+ 80x25
Oct 07 12:16:57 localhost kernel: printk: console [ttyS0] enabled
Oct 07 12:16:57 localhost kernel: ACPI: Core revision 20230331
Oct 07 12:16:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 07 12:16:57 localhost kernel: x2apic enabled
Oct 07 12:16:57 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 07 12:16:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 07 12:16:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 07 12:16:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 07 12:16:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 07 12:16:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 07 12:16:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 07 12:16:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 07 12:16:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 07 12:16:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 07 12:16:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 07 12:16:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 07 12:16:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 07 12:16:57 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 07 12:16:57 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 07 12:16:57 localhost kernel: x86/bugs: return thunk changed
Oct 07 12:16:57 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 07 12:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 07 12:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 07 12:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 07 12:16:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 07 12:16:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 07 12:16:57 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 07 12:16:57 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 07 12:16:57 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 07 12:16:57 localhost kernel: landlock: Up and running.
Oct 07 12:16:57 localhost kernel: Yama: becoming mindful.
Oct 07 12:16:57 localhost kernel: SELinux:  Initializing.
Oct 07 12:16:57 localhost kernel: LSM support for eBPF active
Oct 07 12:16:57 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 07 12:16:57 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 07 12:16:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 07 12:16:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 07 12:16:57 localhost kernel: ... version:                0
Oct 07 12:16:57 localhost kernel: ... bit width:              48
Oct 07 12:16:57 localhost kernel: ... generic registers:      6
Oct 07 12:16:57 localhost kernel: ... value mask:             0000ffffffffffff
Oct 07 12:16:57 localhost kernel: ... max period:             00007fffffffffff
Oct 07 12:16:57 localhost kernel: ... fixed-purpose events:   0
Oct 07 12:16:57 localhost kernel: ... event mask:             000000000000003f
Oct 07 12:16:57 localhost kernel: signal: max sigframe size: 1776
Oct 07 12:16:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 07 12:16:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 07 12:16:57 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 07 12:16:57 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 07 12:16:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 07 12:16:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 07 12:16:57 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 07 12:16:57 localhost kernel: node 0 deferred pages initialised in 18ms
Oct 07 12:16:57 localhost kernel: Memory: 7765356K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616500K reserved, 0K cma-reserved)
Oct 07 12:16:57 localhost kernel: devtmpfs: initialized
Oct 07 12:16:57 localhost kernel: x86/mm: Memory block size: 128MB
Oct 07 12:16:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 07 12:16:57 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 07 12:16:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 07 12:16:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 07 12:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 07 12:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 07 12:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 07 12:16:57 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 07 12:16:57 localhost kernel: audit: type=2000 audit(1759839415.993:1): state=initialized audit_enabled=0 res=1
Oct 07 12:16:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 07 12:16:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 07 12:16:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 07 12:16:57 localhost kernel: cpuidle: using governor menu
Oct 07 12:16:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 07 12:16:57 localhost kernel: PCI: Using configuration type 1 for base access
Oct 07 12:16:57 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 07 12:16:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 07 12:16:57 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 07 12:16:57 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 07 12:16:57 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 07 12:16:57 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 07 12:16:57 localhost kernel: Demotion targets for Node 0: null
Oct 07 12:16:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 07 12:16:57 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 07 12:16:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 07 12:16:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 07 12:16:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 07 12:16:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 07 12:16:57 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 07 12:16:57 localhost kernel: ACPI: Interpreter enabled
Oct 07 12:16:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 07 12:16:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 07 12:16:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 07 12:16:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 07 12:16:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 07 12:16:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 07 12:16:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [3] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [4] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [5] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [6] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [7] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [8] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [9] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [10] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [11] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [12] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [13] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [14] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [15] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [16] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [17] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [18] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [19] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [20] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [21] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [22] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [23] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [24] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [25] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [26] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [27] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [28] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [29] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [30] registered
Oct 07 12:16:57 localhost kernel: acpiphp: Slot [31] registered
Oct 07 12:16:57 localhost kernel: PCI host bridge to bus 0000:00
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 07 12:16:57 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 07 12:16:57 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 07 12:16:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 07 12:16:57 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 07 12:16:57 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 07 12:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 07 12:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 07 12:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 07 12:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 07 12:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 07 12:16:57 localhost kernel: iommu: Default domain type: Translated
Oct 07 12:16:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 07 12:16:57 localhost kernel: SCSI subsystem initialized
Oct 07 12:16:57 localhost kernel: ACPI: bus type USB registered
Oct 07 12:16:57 localhost kernel: usbcore: registered new interface driver usbfs
Oct 07 12:16:57 localhost kernel: usbcore: registered new interface driver hub
Oct 07 12:16:57 localhost kernel: usbcore: registered new device driver usb
Oct 07 12:16:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 07 12:16:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 07 12:16:57 localhost kernel: PTP clock support registered
Oct 07 12:16:57 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 07 12:16:57 localhost kernel: NetLabel: Initializing
Oct 07 12:16:57 localhost kernel: NetLabel:  domain hash size = 128
Oct 07 12:16:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 07 12:16:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 07 12:16:57 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 07 12:16:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 07 12:16:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 07 12:16:57 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 07 12:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 07 12:16:57 localhost kernel: vgaarb: loaded
Oct 07 12:16:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 07 12:16:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 07 12:16:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 07 12:16:57 localhost kernel: pnp: PnP ACPI init
Oct 07 12:16:57 localhost kernel: pnp 00:03: [dma 2]
Oct 07 12:16:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 07 12:16:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 07 12:16:57 localhost kernel: NET: Registered PF_INET protocol family
Oct 07 12:16:57 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 07 12:16:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 07 12:16:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 07 12:16:57 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 07 12:16:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 07 12:16:57 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 07 12:16:57 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 07 12:16:57 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 07 12:16:57 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 07 12:16:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 07 12:16:57 localhost kernel: NET: Registered PF_XDP protocol family
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 07 12:16:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 07 12:16:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 07 12:16:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 07 12:16:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72002 usecs
Oct 07 12:16:57 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 07 12:16:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 07 12:16:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 07 12:16:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 07 12:16:57 localhost kernel: ACPI: bus type thunderbolt registered
Oct 07 12:16:57 localhost kernel: Initialise system trusted keyrings
Oct 07 12:16:57 localhost kernel: Key type blacklist registered
Oct 07 12:16:57 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 07 12:16:57 localhost kernel: zbud: loaded
Oct 07 12:16:57 localhost kernel: integrity: Platform Keyring initialized
Oct 07 12:16:57 localhost kernel: integrity: Machine keyring initialized
Oct 07 12:16:57 localhost kernel: Freeing initrd memory: 86104K
Oct 07 12:16:57 localhost kernel: NET: Registered PF_ALG protocol family
Oct 07 12:16:57 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 07 12:16:57 localhost kernel: Key type asymmetric registered
Oct 07 12:16:57 localhost kernel: Asymmetric key parser 'x509' registered
Oct 07 12:16:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 07 12:16:57 localhost kernel: io scheduler mq-deadline registered
Oct 07 12:16:57 localhost kernel: io scheduler kyber registered
Oct 07 12:16:57 localhost kernel: io scheduler bfq registered
Oct 07 12:16:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 07 12:16:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 07 12:16:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 07 12:16:57 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 07 12:16:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 07 12:16:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 07 12:16:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 07 12:16:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 07 12:16:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 07 12:16:57 localhost kernel: Non-volatile memory driver v1.3
Oct 07 12:16:57 localhost kernel: rdac: device handler registered
Oct 07 12:16:57 localhost kernel: hp_sw: device handler registered
Oct 07 12:16:57 localhost kernel: emc: device handler registered
Oct 07 12:16:57 localhost kernel: alua: device handler registered
Oct 07 12:16:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 07 12:16:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 07 12:16:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 07 12:16:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 07 12:16:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 07 12:16:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 07 12:16:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 07 12:16:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 07 12:16:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 07 12:16:57 localhost kernel: hub 1-0:1.0: USB hub found
Oct 07 12:16:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 07 12:16:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 07 12:16:57 localhost kernel: usbserial: USB Serial support registered for generic
Oct 07 12:16:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 07 12:16:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 07 12:16:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 07 12:16:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 07 12:16:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 07 12:16:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 07 12:16:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-07T12:16:56 UTC (1759839416)
Oct 07 12:16:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 07 12:16:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 07 12:16:57 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 07 12:16:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 07 12:16:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 07 12:16:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 07 12:16:57 localhost kernel: usbcore: registered new interface driver usbhid
Oct 07 12:16:57 localhost kernel: usbhid: USB HID core driver
Oct 07 12:16:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 07 12:16:57 localhost kernel: Initializing XFRM netlink socket
Oct 07 12:16:57 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 07 12:16:57 localhost kernel: Segment Routing with IPv6
Oct 07 12:16:57 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 07 12:16:57 localhost kernel: mpls_gso: MPLS GSO support
Oct 07 12:16:57 localhost kernel: IPI shorthand broadcast: enabled
Oct 07 12:16:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 07 12:16:57 localhost kernel: AES CTR mode by8 optimization enabled
Oct 07 12:16:57 localhost kernel: sched_clock: Marking stable (1234001987, 149914077)->(1568214780, -184298716)
Oct 07 12:16:57 localhost kernel: registered taskstats version 1
Oct 07 12:16:57 localhost kernel: Loading compiled-in X.509 certificates
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 07 12:16:57 localhost kernel: Demotion targets for Node 0: null
Oct 07 12:16:57 localhost kernel: page_owner is disabled
Oct 07 12:16:57 localhost kernel: Key type .fscrypt registered
Oct 07 12:16:57 localhost kernel: Key type fscrypt-provisioning registered
Oct 07 12:16:57 localhost kernel: Key type big_key registered
Oct 07 12:16:57 localhost kernel: Key type encrypted registered
Oct 07 12:16:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 07 12:16:57 localhost kernel: Loading compiled-in module X.509 certificates
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 07 12:16:57 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 07 12:16:57 localhost kernel: ima: No architecture policies found
Oct 07 12:16:57 localhost kernel: evm: Initialising EVM extended attributes:
Oct 07 12:16:57 localhost kernel: evm: security.selinux
Oct 07 12:16:57 localhost kernel: evm: security.SMACK64 (disabled)
Oct 07 12:16:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 07 12:16:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 07 12:16:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 07 12:16:57 localhost kernel: evm: security.apparmor (disabled)
Oct 07 12:16:57 localhost kernel: evm: security.ima
Oct 07 12:16:57 localhost kernel: evm: security.capability
Oct 07 12:16:57 localhost kernel: evm: HMAC attrs: 0x1
Oct 07 12:16:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 07 12:16:57 localhost kernel: Running certificate verification RSA selftest
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 07 12:16:57 localhost kernel: Running certificate verification ECDSA selftest
Oct 07 12:16:57 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 07 12:16:57 localhost kernel: clk: Disabling unused clocks
Oct 07 12:16:57 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 07 12:16:57 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 07 12:16:57 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 07 12:16:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 07 12:16:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 07 12:16:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 07 12:16:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 07 12:16:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 07 12:16:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 07 12:16:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 07 12:16:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 07 12:16:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 07 12:16:57 localhost kernel: Run /init as init process
Oct 07 12:16:57 localhost kernel:   with arguments:
Oct 07 12:16:57 localhost kernel:     /init
Oct 07 12:16:57 localhost kernel:   with environment:
Oct 07 12:16:57 localhost kernel:     HOME=/
Oct 07 12:16:57 localhost kernel:     TERM=linux
Oct 07 12:16:57 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 07 12:16:57 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 07 12:16:57 localhost systemd[1]: Detected virtualization kvm.
Oct 07 12:16:57 localhost systemd[1]: Detected architecture x86-64.
Oct 07 12:16:57 localhost systemd[1]: Running in initrd.
Oct 07 12:16:57 localhost systemd[1]: No hostname configured, using default hostname.
Oct 07 12:16:57 localhost systemd[1]: Hostname set to <localhost>.
Oct 07 12:16:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 07 12:16:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 07 12:16:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 07 12:16:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 07 12:16:57 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 07 12:16:57 localhost systemd[1]: Reached target Local File Systems.
Oct 07 12:16:57 localhost systemd[1]: Reached target Path Units.
Oct 07 12:16:57 localhost systemd[1]: Reached target Slice Units.
Oct 07 12:16:57 localhost systemd[1]: Reached target Swaps.
Oct 07 12:16:57 localhost systemd[1]: Reached target Timer Units.
Oct 07 12:16:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 07 12:16:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 07 12:16:57 localhost systemd[1]: Listening on Journal Socket.
Oct 07 12:16:57 localhost systemd[1]: Listening on udev Control Socket.
Oct 07 12:16:57 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 07 12:16:57 localhost systemd[1]: Reached target Socket Units.
Oct 07 12:16:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 07 12:16:57 localhost systemd[1]: Starting Journal Service...
Oct 07 12:16:57 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 07 12:16:57 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 07 12:16:57 localhost systemd[1]: Starting Create System Users...
Oct 07 12:16:57 localhost systemd[1]: Starting Setup Virtual Console...
Oct 07 12:16:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 07 12:16:57 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 07 12:16:57 localhost systemd[1]: Finished Create System Users.
Oct 07 12:16:57 localhost systemd-journald[310]: Journal started
Oct 07 12:16:57 localhost systemd-journald[310]: Runtime Journal (/run/log/journal/955d3c0d1dc54415a876b62089e34180) is 8.0M, max 153.5M, 145.5M free.
Oct 07 12:16:57 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct 07 12:16:57 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct 07 12:16:57 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 07 12:16:57 localhost systemd[1]: Started Journal Service.
Oct 07 12:16:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 07 12:16:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 07 12:16:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 07 12:16:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 07 12:16:58 localhost systemd[1]: Finished Setup Virtual Console.
Oct 07 12:16:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 07 12:16:58 localhost systemd[1]: Starting dracut cmdline hook...
Oct 07 12:16:58 localhost dracut-cmdline[330]: dracut-9 dracut-057-102.git20250818.el9
Oct 07 12:16:58 localhost dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 07 12:16:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 07 12:16:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 07 12:16:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 07 12:16:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 07 12:16:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 07 12:16:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 07 12:16:58 localhost kernel: RPC: Registered udp transport module.
Oct 07 12:16:58 localhost kernel: RPC: Registered tcp transport module.
Oct 07 12:16:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 07 12:16:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 07 12:16:58 localhost rpc.statd[448]: Version 2.5.4 starting
Oct 07 12:16:58 localhost rpc.statd[448]: Initializing NSM state
Oct 07 12:16:58 localhost rpc.idmapd[453]: Setting log level to 0
Oct 07 12:16:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 07 12:16:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 07 12:16:58 localhost systemd-udevd[466]: Using default interface naming scheme 'rhel-9.0'.
Oct 07 12:16:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 07 12:16:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 07 12:16:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 07 12:16:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 07 12:16:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 07 12:16:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 07 12:16:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 07 12:16:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 07 12:16:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 07 12:16:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 07 12:16:58 localhost systemd[1]: Reached target Network.
Oct 07 12:16:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 07 12:16:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 07 12:16:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 07 12:16:58 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 07 12:16:58 localhost systemd[1]: Reached target System Initialization.
Oct 07 12:16:58 localhost systemd[1]: Reached target Basic System.
Oct 07 12:16:58 localhost kernel: libata version 3.00 loaded.
Oct 07 12:16:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 07 12:16:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 07 12:16:58 localhost kernel: scsi host0: ata_piix
Oct 07 12:16:58 localhost kernel: scsi host1: ata_piix
Oct 07 12:16:58 localhost systemd-udevd[470]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 12:16:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 07 12:16:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 07 12:16:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 07 12:16:58 localhost kernel:  vda: vda1
Oct 07 12:16:59 localhost kernel: ata1: found unknown device (class 0)
Oct 07 12:16:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 07 12:16:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 07 12:16:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 07 12:16:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 07 12:16:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 07 12:16:59 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 07 12:16:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 07 12:16:59 localhost systemd[1]: Reached target Initrd Root Device.
Oct 07 12:16:59 localhost systemd[1]: Finished dracut initqueue hook.
Oct 07 12:16:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 07 12:16:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 07 12:16:59 localhost systemd[1]: Reached target Remote File Systems.
Oct 07 12:16:59 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 07 12:16:59 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 07 12:16:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 07 12:16:59 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 07 12:16:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 07 12:16:59 localhost systemd[1]: Mounting /sysroot...
Oct 07 12:16:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 07 12:16:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 07 12:16:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 07 12:17:00 localhost systemd[1]: Mounted /sysroot.
Oct 07 12:17:00 localhost systemd[1]: Reached target Initrd Root File System.
Oct 07 12:17:00 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 07 12:17:00 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 07 12:17:00 localhost systemd[1]: Reached target Initrd File Systems.
Oct 07 12:17:00 localhost systemd[1]: Reached target Initrd Default Target.
Oct 07 12:17:00 localhost systemd[1]: Starting dracut mount hook...
Oct 07 12:17:00 localhost systemd[1]: Finished dracut mount hook.
Oct 07 12:17:00 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 07 12:17:00 localhost rpc.idmapd[453]: exiting on signal 15
Oct 07 12:17:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 07 12:17:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 07 12:17:00 localhost systemd[1]: Stopped target Network.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Timer Units.
Oct 07 12:17:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 07 12:17:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Basic System.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Path Units.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Remote File Systems.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Slice Units.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Socket Units.
Oct 07 12:17:00 localhost systemd[1]: Stopped target System Initialization.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Local File Systems.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Swaps.
Oct 07 12:17:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut mount hook.
Oct 07 12:17:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 07 12:17:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 07 12:17:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 07 12:17:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 07 12:17:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 07 12:17:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 07 12:17:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 07 12:17:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 07 12:17:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 07 12:17:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 07 12:17:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 07 12:17:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 07 12:17:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Closed udev Control Socket.
Oct 07 12:17:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Closed udev Kernel Socket.
Oct 07 12:17:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 07 12:17:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 07 12:17:00 localhost systemd[1]: Starting Cleanup udev Database...
Oct 07 12:17:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 07 12:17:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 07 12:17:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Stopped Create System Users.
Oct 07 12:17:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 07 12:17:00 localhost systemd[1]: Finished Cleanup udev Database.
Oct 07 12:17:00 localhost systemd[1]: Reached target Switch Root.
Oct 07 12:17:00 localhost systemd[1]: Starting Switch Root...
Oct 07 12:17:00 localhost systemd[1]: Switching root.
Oct 07 12:17:00 localhost systemd-journald[310]: Journal stopped
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.159 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.160 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:30 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.161 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.162 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.170 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845810.0830505, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.170 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Resumed (Lifecycle Event)
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.224 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.228 2 INFO nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 13.23 seconds to spawn the instance on the hypervisor.
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.229 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.246 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.260 161536 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.261 161536 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl807twbq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.072 275502 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.077 275502 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.080 275502 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.080 275502 INFO oslo.privsep.daemon [-] privsep daemon running as pid 275502
Oct 07 14:03:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bb2f14-9f7d-447e-a1cf-ae3b2fa31b64]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:03:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955050984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.305 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.327 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.379 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.402 2 INFO nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 14.44 seconds to build instance.
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.428 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:03:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394659432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.882 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.884 2 DEBUG nova.objects.instance [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:03:30 compute-0 nova_compute[259550]: 2025-10-07 14:03:30.970 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <uuid>39a26aa2-978f-4d33-bc4e-fd4bfc81d380</uuid>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <name>instance-00000004</name>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-782175462</nova:name>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:03:29</nova:creationTime>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:user uuid="29b159ffa5754a4ea36ea97967fc907f">tempest-DeleteServersAdminTestJSON-2055501431-project-member</nova:user>
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <nova:project uuid="99c1b7cefd964764a69e1e53219287d2">tempest-DeleteServersAdminTestJSON-2055501431</nova:project>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <system>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="serial">39a26aa2-978f-4d33-bc4e-fd4bfc81d380</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="uuid">39a26aa2-978f-4d33-bc4e-fd4bfc81d380</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </system>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <os>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </os>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <features>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </features>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk">
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </source>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config">
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </source>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:03:30 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/console.log" append="off"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <video>
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </video>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:03:30 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:03:30 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:03:30 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:03:30 compute-0 nova_compute[259550]: </domain>
Oct 07 14:03:30 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.146 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.147 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.147 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Using config drive
Oct 07 14:03:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1955050984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1394659432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.168 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.359 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Creating config drive at /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.364 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv8qdrg66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 789 KiB/s rd, 943 KiB/s wr, 80 op/s
Oct 07 14:03:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.493 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv8qdrg66" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.524 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.529 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.743 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:31 compute-0 nova_compute[259550]: 2025-10-07 14:03:31.744 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deleting local config drive /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config because it was imported into RBD.
Oct 07 14:03:31 compute-0 systemd-machined[214580]: New machine qemu-4-instance-00000004.
Oct 07 14:03:31 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.005 2 DEBUG nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.005 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.006 2 WARNING nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received unexpected event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with vm_state active and task_state None.
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:32 compute-0 ceph-mon[74295]: pgmap v1086: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 789 KiB/s rd, 943 KiB/s wr, 80 op/s
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00046802760955915867 of space, bias 1.0, pg target 0.1404082828677476 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6615] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6622] device (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6632] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6634] device (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6644] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6650] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6654] device (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 07 14:03:32 compute-0 NetworkManager[44949]: <info>  [1759845812.6657] device (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 07 14:03:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:03:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:03:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:03:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4a5736-3c68-4957-91a4-69db6ce26f6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.715 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap384938fa-41 in ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.718 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap384938fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c614dae5-dba9-474c-89be-210816963151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9cee2c-ae28-4d95-934b-242ee1da200b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.759 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[24c73748-3093-42d9-8690-d0365242526d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.803 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845812.8012054, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.803 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Resumed (Lifecycle Event)
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.803 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c6f194-2567-472f-be3c-d2c5c94a7b4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.805 161536 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpy8uxhs20/privsep.sock']
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.806 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.806 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.810 2 INFO nova.virt.libvirt.driver [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance spawned successfully.
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.810 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.891 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.892 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.894 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.996 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.997 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845812.8025327, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:32 compute-0 nova_compute[259550]: 2025-10-07 14:03:32.997 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Started (Lifecycle Event)
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.026 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.031 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.037 2 INFO nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 4.42 seconds to spawn the instance on the hypervisor.
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.038 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.054 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.106 2 INFO nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 5.50 seconds to build instance.
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.131 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG nova.compute.manager [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG nova.compute.manager [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing instance network info cache due to event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.149 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:03:33 compute-0 nova_compute[259550]: 2025-10-07 14:03:33.149 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:03:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:03:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:03:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 640 KiB/s wr, 46 op/s
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.616 161536 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.617 161536 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy8uxhs20/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.410 275676 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.415 275676 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.418 275676 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.418 275676 INFO oslo.privsep.daemon [-] privsep daemon running as pid 275676
Oct 07 14:03:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.620 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eea3c6ba-4503-4e08-877e-f6a6eb985057]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:34 compute-0 ceph-mon[74295]: pgmap v1087: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 640 KiB/s wr, 46 op/s
Oct 07 14:03:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.621 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updated VIF entry in instance network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.622 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.640 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.891 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845799.8912003, b3438e68-f883-4d76-86d9-7396298ab465 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.892 2 INFO nova.compute.manager [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] VM Stopped (Lifecycle Event)
Oct 07 14:03:34 compute-0 nova_compute[259550]: 2025-10-07 14:03:34.923 2 DEBUG nova.compute.manager [None req-cef3d681-e454-485b-bdb7-0ef6404bf70f - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.096 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ce49a762-46c3-471b-9647-c030abcea08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 systemd-udevd[275664]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:03:35 compute-0 NetworkManager[44949]: <info>  [1759845815.1163] manager: (tap384938fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b47e1338-ff96-4bb0-960f-1fe325c616ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.159 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a44dd0-7d62-4aea-9cb2-2183de396418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.163 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e82301fb-bddd-40f4-8dd3-677be626782a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.178 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.179 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.181 2 INFO nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Terminating instance
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.182 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.182 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquired lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.183 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:03:35 compute-0 NetworkManager[44949]: <info>  [1759845815.1942] device (tap384938fa-40): carrier: link connected
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ec53f4-c395-4405-93bb-72cd7cb5f9e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.220 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[069f5ba4-81da-43e2-a34b-8b9ff733e104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639875, 'reachable_time': 21429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275704, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aac93308-8f70-4985-8f9f-5f292ab5b9ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:dc9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639875, 'tstamp': 639875}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275705, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb735e5b-ef02-47a6-8a53-21f9fc2d768a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639875, 'reachable_time': 21429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275706, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.323 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[099d417a-4abb-4277-baa2-d8edb4ec78cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44318c19-5dff-48ad-a7ae-a9938f7c2a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.418 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.418 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.419 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384938fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:35 compute-0 NetworkManager[44949]: <info>  [1759845815.4226] manager: (tap384938fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 07 14:03:35 compute-0 kernel: tap384938fa-40: entered promiscuous mode
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.438 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap384938fa-40, col_values=(('external_ids', {'iface-id': '86c408a4-938c-4caa-9ec3-5622a47990e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:35 compute-0 ovn_controller[151684]: 2025-10-07T14:03:35Z|00031|binding|INFO|Releasing lport 86c408a4-938c-4caa-9ec3-5622a47990e3 from this chassis (sb_readonly=0)
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.458 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:03:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.461 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00c6b464-9e84-419f-9a49-55409caa29b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.462 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:03:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.464 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'env', 'PROCESS_TAG=haproxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.531 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.776 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.812 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Releasing lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:03:35 compute-0 nova_compute[259550]: 2025-10-07 14:03:35.813 2 DEBUG nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:03:35 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 07 14:03:35 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4.068s CPU time.
Oct 07 14:03:35 compute-0 systemd-machined[214580]: Machine qemu-4-instance-00000004 terminated.
Oct 07 14:03:35 compute-0 podman[275739]: 2025-10-07 14:03:35.859563441 +0000 UTC m=+0.028843003 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:03:36 compute-0 podman[275739]: 2025-10-07 14:03:36.02557362 +0000 UTC m=+0.194853182 container create ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.047 2 INFO nova.virt.libvirt.driver [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance destroyed successfully.
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.048 2 DEBUG nova.objects.instance [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'resources' on Instance uuid 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:03:36 compute-0 systemd[1]: Started libpod-conmon-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope.
Oct 07 14:03:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7010ba2fc6f6ddfa22c1cc2034b39e4a61b0f4287273b69c1e4981fb9a1cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:36 compute-0 podman[275739]: 2025-10-07 14:03:36.129305086 +0000 UTC m=+0.298584668 container init ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:03:36 compute-0 podman[275739]: 2025-10-07 14:03:36.135265075 +0000 UTC m=+0.304544637 container start ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:03:36 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : New worker (275781) forked
Oct 07 14:03:36 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : Loading success.
Oct 07 14:03:36 compute-0 ceph-mon[74295]: pgmap v1088: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.537 2 INFO nova.virt.libvirt.driver [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deleting instance files /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_del
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.538 2 INFO nova.virt.libvirt.driver [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deletion of /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_del complete
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.589 2 INFO nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.590 2 DEBUG oslo.service.loopingcall [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.592 2 DEBUG nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.593 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.711 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.760 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.774 2 INFO nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 0.18 seconds to deallocate network for instance.
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.826 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.832 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:36 compute-0 nova_compute[259550]: 2025-10-07 14:03:36.902 2 DEBUG oslo_concurrency.processutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:03:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685437039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.331 2 DEBUG oslo_concurrency.processutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.342 2 DEBUG nova.compute.provider_tree [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.364 2 DEBUG nova.scheduler.client.report [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.387 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.442 2 INFO nova.scheduler.client.report [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Deleted allocations for instance 39a26aa2-978f-4d33-bc4e-fd4bfc81d380
Oct 07 14:03:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/685437039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:37 compute-0 nova_compute[259550]: 2025-10-07 14:03:37.540 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:38 compute-0 ceph-mon[74295]: pgmap v1089: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 07 14:03:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 107 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Oct 07 14:03:40 compute-0 nova_compute[259550]: 2025-10-07 14:03:40.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:40 compute-0 ceph-mon[74295]: pgmap v1090: 305 pgs: 305 active+clean; 107 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Oct 07 14:03:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.442 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.443 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:42 compute-0 ceph-mon[74295]: pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.622 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.822 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.823 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.831 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:03:42 compute-0 nova_compute[259550]: 2025-10-07 14:03:42.832 2 INFO nova.compute.claims [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:03:43 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 07 14:03:43 compute-0 podman[275815]: 2025-10-07 14:03:43.093354255 +0000 UTC m=+0.071340179 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:03:43 compute-0 podman[275814]: 2025-10-07 14:03:43.100014653 +0000 UTC m=+0.079918858 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.133 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 237 op/s
Oct 07 14:03:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:03:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703088313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.567 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.575 2 DEBUG nova.compute.provider_tree [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.600 2 DEBUG nova.scheduler.client.report [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.661 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.663 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.972 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:03:43 compute-0 nova_compute[259550]: 2025-10-07 14:03:43.972 2 DEBUG nova.network.neutron [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.025 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:03:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1703088313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.048 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.221 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.223 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.224 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating image(s)
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.254 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:44 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.289 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.319 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.324 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.395 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.396 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.397 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.397 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.426 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.431 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74c8f655-7045-4ad9-8246-3d2504315607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.728 2 DEBUG nova.network.neutron [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.730 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.766 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74c8f655-7045-4ad9-8246-3d2504315607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.842 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] resizing rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.938 2 DEBUG nova.objects.instance [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'migration_context' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.959 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.960 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Ensure instance console log exists: /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.962 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.963 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.969 2 WARNING nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.974 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.975 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.978 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.979 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.983 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.983 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:03:44 compute-0 nova_compute[259550]: 2025-10-07 14:03:44.986 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:45 compute-0 ceph-mon[74295]: pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 237 op/s
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 91 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 249 op/s
Oct 07 14:03:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:03:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136911333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.503 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.530 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.534 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:45 compute-0 ovn_controller[151684]: 2025-10-07T14:03:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:39:1a 10.100.0.5
Oct 07 14:03:45 compute-0 ovn_controller[151684]: 2025-10-07T14:03:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:39:1a 10.100.0.5
Oct 07 14:03:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:03:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321439574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.965 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.968 2 DEBUG nova.objects.instance [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:03:45 compute-0 nova_compute[259550]: 2025-10-07 14:03:45.984 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <uuid>74c8f655-7045-4ad9-8246-3d2504315607</uuid>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <name>instance-00000005</name>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-644875358</nova:name>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:03:44</nova:creationTime>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:user uuid="74c6d01f508d42c7840dbfdf786af788">tempest-ServerDiagnosticsNegativeTest-1419378582-project-member</nova:user>
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <nova:project uuid="4bb899690ee54889beb522359cc49e4f">tempest-ServerDiagnosticsNegativeTest-1419378582</nova:project>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <system>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="serial">74c8f655-7045-4ad9-8246-3d2504315607</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="uuid">74c8f655-7045-4ad9-8246-3d2504315607</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </system>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <os>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </os>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <features>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </features>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/74c8f655-7045-4ad9-8246-3d2504315607_disk">
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </source>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/74c8f655-7045-4ad9-8246-3d2504315607_disk.config">
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </source>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:03:45 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/console.log" append="off"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <video>
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </video>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:03:45 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:03:45 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:03:45 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:03:45 compute-0 nova_compute[259550]: </domain>
Oct 07 14:03:45 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.044 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.044 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.045 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Using config drive
Oct 07 14:03:46 compute-0 ceph-mon[74295]: pgmap v1093: 305 pgs: 305 active+clean; 91 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 249 op/s
Oct 07 14:03:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3136911333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3321439574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:03:46 compute-0 sudo[276104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:46 compute-0 sudo[276104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276104]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.132 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:46 compute-0 sudo[276137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:03:46 compute-0 sudo[276137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276137]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 sudo[276173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:46 compute-0 sudo[276173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276173]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 sudo[276198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:03:46 compute-0 sudo[276198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.350 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating config drive at /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.356 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rjdh_fy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.492 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rjdh_fy" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.520 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.525 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config 74c8f655-7045-4ad9-8246-3d2504315607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:46 compute-0 sudo[276198]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:03:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:03:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:46 compute-0 sudo[276272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:46 compute-0 sudo[276272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276272]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 sudo[276308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:03:46 compute-0 sudo[276308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276308]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.712 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config 74c8f655-7045-4ad9-8246-3d2504315607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:46 compute-0 nova_compute[259550]: 2025-10-07 14:03:46.713 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deleting local config drive /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config because it was imported into RBD.
Oct 07 14:03:46 compute-0 sudo[276333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:46 compute-0 sudo[276333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:46 compute-0 sudo[276333]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:46 compute-0 systemd-machined[214580]: New machine qemu-5-instance-00000005.
Oct 07 14:03:46 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 07 14:03:46 compute-0 sudo[276367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:03:46 compute-0 sudo[276367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:47 compute-0 nova_compute[259550]: 2025-10-07 14:03:47.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:47 compute-0 sudo[276367]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 116 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 07 14:03:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 42a327ff-dea2-407e-925c-ff8553fb368d does not exist
Oct 07 14:03:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 15ffd35c-e826-4d4d-b1ec-bfc9cfcae014 does not exist
Oct 07 14:03:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev eb93310d-6c93-4906-9e39-97f20d5561da does not exist
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:03:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:03:47 compute-0 sudo[276431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:47 compute-0 sudo[276431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:47 compute-0 sudo[276431]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:47 compute-0 sudo[276471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:03:47 compute-0 sudo[276471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:47 compute-0 sudo[276471]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:03:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:03:47 compute-0 sudo[276518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:47 compute-0 sudo[276518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:47 compute-0 sudo[276518]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:47 compute-0 sudo[276548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:03:47 compute-0 sudo[276548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.073 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845828.073248, 74c8f655-7045-4ad9-8246-3d2504315607 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.075 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Resumed (Lifecycle Event)
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.079 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.079 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.084 2 INFO nova.virt.libvirt.driver [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance spawned successfully.
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.085 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.087184722 +0000 UTC m=+0.071084932 container create 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.095 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.102 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.108 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.108 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.109 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.109 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.110 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.110 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:03:48 compute-0 systemd[1]: Started libpod-conmon-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope.
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.133 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.134 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845828.07479, 74c8f655-7045-4ad9-8246-3d2504315607 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.134 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Started (Lifecycle Event)
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.045750834 +0000 UTC m=+0.029651074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.160 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.164 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.170 2 INFO nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 3.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.170 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.208126487 +0000 UTC m=+0.192026717 container init 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.218248118 +0000 UTC m=+0.202148328 container start 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.222 2 INFO nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 5.43 seconds to build instance.
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.226794216 +0000 UTC m=+0.210694436 container attach 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:03:48 compute-0 tender_lehmann[276627]: 167 167
Oct 07 14:03:48 compute-0 systemd[1]: libpod-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope: Deactivated successfully.
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.230715592 +0000 UTC m=+0.214615812 container died 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:03:48 compute-0 nova_compute[259550]: 2025-10-07 14:03:48.237 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab34d759db6fc508577f821b629ce0833e4f53f6c8d9fb4df47bcedc551cbc7d-merged.mount: Deactivated successfully.
Oct 07 14:03:48 compute-0 podman[276611]: 2025-10-07 14:03:48.326069152 +0000 UTC m=+0.309969392 container remove 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:03:48 compute-0 systemd[1]: libpod-conmon-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope: Deactivated successfully.
Oct 07 14:03:48 compute-0 podman[276649]: 2025-10-07 14:03:48.566572863 +0000 UTC m=+0.066133089 container create b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:03:48 compute-0 ceph-mon[74295]: pgmap v1094: 305 pgs: 305 active+clean; 116 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 07 14:03:48 compute-0 podman[276649]: 2025-10-07 14:03:48.529624975 +0000 UTC m=+0.029185221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:48 compute-0 systemd[1]: Started libpod-conmon-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope.
Oct 07 14:03:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:48 compute-0 podman[276649]: 2025-10-07 14:03:48.731658918 +0000 UTC m=+0.231219154 container init b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:03:48 compute-0 podman[276649]: 2025-10-07 14:03:48.740593988 +0000 UTC m=+0.240154224 container start b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:03:48 compute-0 podman[276649]: 2025-10-07 14:03:48.766441569 +0000 UTC m=+0.266001805 container attach b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.152 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.153 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.156 2 INFO nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Terminating instance
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.158 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.159 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquired lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.159 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.327 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:03:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 164 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.617 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.666 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Releasing lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:03:49 compute-0 nova_compute[259550]: 2025-10-07 14:03:49.668 2 DEBUG nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:03:49 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 07 14:03:49 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 2.870s CPU time.
Oct 07 14:03:49 compute-0 systemd-machined[214580]: Machine qemu-5-instance-00000005 terminated.
Oct 07 14:03:50 compute-0 nifty_spence[276666]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:03:50 compute-0 nifty_spence[276666]: --> relative data size: 1.0
Oct 07 14:03:50 compute-0 nifty_spence[276666]: --> All data devices are unavailable
Oct 07 14:03:50 compute-0 systemd[1]: libpod-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Deactivated successfully.
Oct 07 14:03:50 compute-0 podman[276649]: 2025-10-07 14:03:50.073801904 +0000 UTC m=+1.573362120 container died b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:03:50 compute-0 systemd[1]: libpod-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Consumed 1.119s CPU time.
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.099 2 INFO nova.virt.libvirt.driver [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance destroyed successfully.
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.101 2 DEBUG nova.objects.instance [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'resources' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72-merged.mount: Deactivated successfully.
Oct 07 14:03:50 compute-0 podman[276649]: 2025-10-07 14:03:50.144973297 +0000 UTC m=+1.644533513 container remove b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:03:50 compute-0 systemd[1]: libpod-conmon-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Deactivated successfully.
Oct 07 14:03:50 compute-0 sudo[276548]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:50 compute-0 sudo[276726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:50 compute-0 sudo[276726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:50 compute-0 sudo[276726]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:50 compute-0 sudo[276751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:03:50 compute-0 sudo[276751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:50 compute-0 sudo[276751]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:50 compute-0 sudo[276777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:50 compute-0 sudo[276777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:50 compute-0 sudo[276777]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:50 compute-0 sudo[276802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:03:50 compute-0 sudo[276802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.530 2 INFO nova.virt.libvirt.driver [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deleting instance files /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607_del
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.531 2 INFO nova.virt.libvirt.driver [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deletion of /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607_del complete
Oct 07 14:03:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:50.578 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:03:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:03:50.580 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.593 2 INFO nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.593 2 DEBUG oslo.service.loopingcall [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.594 2 DEBUG nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:03:50 compute-0 nova_compute[259550]: 2025-10-07 14:03:50.594 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:03:50 compute-0 ceph-mon[74295]: pgmap v1095: 305 pgs: 305 active+clean; 164 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.868453646 +0000 UTC m=+0.048965751 container create 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:03:50 compute-0 systemd[1]: Started libpod-conmon-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope.
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.849090929 +0000 UTC m=+0.029603054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.982182278 +0000 UTC m=+0.162694393 container init 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.990636584 +0000 UTC m=+0.171148679 container start 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.994291551 +0000 UTC m=+0.174803666 container attach 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:03:50 compute-0 ecstatic_benz[276883]: 167 167
Oct 07 14:03:50 compute-0 systemd[1]: libpod-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope: Deactivated successfully.
Oct 07 14:03:50 compute-0 conmon[276883]: conmon 9840fdff892a035e69a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope/container/memory.events
Oct 07 14:03:50 compute-0 podman[276867]: 2025-10-07 14:03:50.998389941 +0000 UTC m=+0.178902036 container died 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2b2f568b97a3c1b20fdf15e6f7ebcbdeb69663c55986884036f4fd56023041-merged.mount: Deactivated successfully.
Oct 07 14:03:51 compute-0 podman[276867]: 2025-10-07 14:03:51.042558083 +0000 UTC m=+0.223070178 container remove 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.048 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845816.044832, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.049 2 INFO nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Stopped (Lifecycle Event)
Oct 07 14:03:51 compute-0 systemd[1]: libpod-conmon-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope: Deactivated successfully.
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.072 2 DEBUG nova.compute.manager [None req-4776ba38-85c4-4125-ab49-b819c6dfd9e5 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.172 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.202 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:03:51 compute-0 podman[276908]: 2025-10-07 14:03:51.245257454 +0000 UTC m=+0.047476821 container create 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:03:51 compute-0 systemd[1]: Started libpod-conmon-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope.
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.295 2 INFO nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 0.70 seconds to deallocate network for instance.
Oct 07 14:03:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:51 compute-0 podman[276908]: 2025-10-07 14:03:51.225789303 +0000 UTC m=+0.028008690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:51 compute-0 podman[276908]: 2025-10-07 14:03:51.325990912 +0000 UTC m=+0.128210319 container init 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:03:51 compute-0 podman[276908]: 2025-10-07 14:03:51.333347879 +0000 UTC m=+0.135567286 container start 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:03:51 compute-0 podman[276908]: 2025-10-07 14:03:51.337728967 +0000 UTC m=+0.139948374 container attach 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.371 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.371 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:03:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.475 2 DEBUG oslo_concurrency.processutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:03:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:03:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221377279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.933 2 DEBUG oslo_concurrency.processutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.940 2 DEBUG nova.compute.provider_tree [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.962 2 DEBUG nova.scheduler.client.report [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:03:51 compute-0 nova_compute[259550]: 2025-10-07 14:03:51.991 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:52 compute-0 nova_compute[259550]: 2025-10-07 14:03:52.018 2 INFO nova.scheduler.client.report [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Deleted allocations for instance 74c8f655-7045-4ad9-8246-3d2504315607
Oct 07 14:03:52 compute-0 nova_compute[259550]: 2025-10-07 14:03:52.073 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:03:52 compute-0 nova_compute[259550]: 2025-10-07 14:03:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:52 compute-0 vigilant_saha[276923]: {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     "0": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "devices": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "/dev/loop3"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             ],
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_name": "ceph_lv0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_size": "21470642176",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "name": "ceph_lv0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "tags": {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_name": "ceph",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.crush_device_class": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.encrypted": "0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_id": "0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.vdo": "0"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             },
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "vg_name": "ceph_vg0"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         }
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     ],
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     "1": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "devices": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "/dev/loop4"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             ],
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_name": "ceph_lv1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_size": "21470642176",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "name": "ceph_lv1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "tags": {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_name": "ceph",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.crush_device_class": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.encrypted": "0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_id": "1",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.vdo": "0"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             },
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "vg_name": "ceph_vg1"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         }
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     ],
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     "2": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "devices": [
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "/dev/loop5"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             ],
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_name": "ceph_lv2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_size": "21470642176",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "name": "ceph_lv2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "tags": {
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.cluster_name": "ceph",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.crush_device_class": "",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.encrypted": "0",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osd_id": "2",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:                 "ceph.vdo": "0"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             },
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "type": "block",
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:             "vg_name": "ceph_vg2"
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:         }
Oct 07 14:03:52 compute-0 vigilant_saha[276923]:     ]
Oct 07 14:03:52 compute-0 vigilant_saha[276923]: }
Oct 07 14:03:52 compute-0 systemd[1]: libpod-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope: Deactivated successfully.
Oct 07 14:03:52 compute-0 podman[276908]: 2025-10-07 14:03:52.245670159 +0000 UTC m=+1.047889526 container died 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2-merged.mount: Deactivated successfully.
Oct 07 14:03:52 compute-0 podman[276908]: 2025-10-07 14:03:52.383336661 +0000 UTC m=+1.185556028 container remove 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:03:52 compute-0 systemd[1]: libpod-conmon-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope: Deactivated successfully.
Oct 07 14:03:52 compute-0 sudo[276802]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:52 compute-0 sudo[276967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:52 compute-0 sudo[276967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:52 compute-0 sudo[276967]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:52 compute-0 sudo[276992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:03:52 compute-0 sudo[276992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:52 compute-0 sudo[276992]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:52 compute-0 sudo[277017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:52 compute-0 sudo[277017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:52 compute-0 sudo[277017]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:03:52 compute-0 sudo[277042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:03:52 compute-0 sudo[277042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:52 compute-0 ceph-mon[74295]: pgmap v1096: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct 07 14:03:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/221377279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.059952757 +0000 UTC m=+0.040599457 container create 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:03:53 compute-0 systemd[1]: Started libpod-conmon-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope.
Oct 07 14:03:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.136575226 +0000 UTC m=+0.117221936 container init 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.043961699 +0000 UTC m=+0.024608429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.14496968 +0000 UTC m=+0.125616380 container start 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:03:53 compute-0 trusting_mendel[277125]: 167 167
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.148946917 +0000 UTC m=+0.129593647 container attach 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:03:53 compute-0 systemd[1]: libpod-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope: Deactivated successfully.
Oct 07 14:03:53 compute-0 conmon[277125]: conmon 6eafb6c9fbf0f4e789c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope/container/memory.events
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.150986252 +0000 UTC m=+0.131632952 container died 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f04616ca4b75a9b59f2ff06b1693d2aaa51e799a9a720e84aa856db98c902656-merged.mount: Deactivated successfully.
Oct 07 14:03:53 compute-0 podman[277108]: 2025-10-07 14:03:53.188241747 +0000 UTC m=+0.168888447 container remove 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 07 14:03:53 compute-0 systemd[1]: libpod-conmon-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope: Deactivated successfully.
Oct 07 14:03:53 compute-0 podman[277150]: 2025-10-07 14:03:53.367963805 +0000 UTC m=+0.043009562 container create 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:03:53 compute-0 systemd[1]: Started libpod-conmon-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope.
Oct 07 14:03:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 07 14:03:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:03:53 compute-0 podman[277150]: 2025-10-07 14:03:53.350190989 +0000 UTC m=+0.025236766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:03:53 compute-0 podman[277150]: 2025-10-07 14:03:53.453764819 +0000 UTC m=+0.128810586 container init 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:03:53 compute-0 podman[277150]: 2025-10-07 14:03:53.459653526 +0000 UTC m=+0.134699283 container start 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:03:53 compute-0 podman[277150]: 2025-10-07 14:03:53.465113503 +0000 UTC m=+0.140159260 container attach 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 07 14:03:54 compute-0 nifty_spence[277166]: {
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_id": 2,
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "type": "bluestore"
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     },
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_id": 1,
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "type": "bluestore"
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     },
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_id": 0,
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:03:54 compute-0 nifty_spence[277166]:         "type": "bluestore"
Oct 07 14:03:54 compute-0 nifty_spence[277166]:     }
Oct 07 14:03:54 compute-0 nifty_spence[277166]: }
Oct 07 14:03:54 compute-0 systemd[1]: libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Deactivated successfully.
Oct 07 14:03:54 compute-0 systemd[1]: libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Consumed 1.108s CPU time.
Oct 07 14:03:54 compute-0 conmon[277166]: conmon 26270d07ea0ed924d106 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope/container/memory.events
Oct 07 14:03:54 compute-0 podman[277150]: 2025-10-07 14:03:54.570575798 +0000 UTC m=+1.245621585 container died 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 14:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6-merged.mount: Deactivated successfully.
Oct 07 14:03:54 compute-0 podman[277150]: 2025-10-07 14:03:54.633157491 +0000 UTC m=+1.308203248 container remove 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:03:54 compute-0 systemd[1]: libpod-conmon-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Deactivated successfully.
Oct 07 14:03:54 compute-0 sudo[277042]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:03:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:03:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:54 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev dc0622c1-e24e-41d4-b0e2-03c7b9e7b3a3 does not exist
Oct 07 14:03:54 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8547c35f-5820-4e13-bcd9-47ca0311d7b9 does not exist
Oct 07 14:03:54 compute-0 podman[277199]: 2025-10-07 14:03:54.750033038 +0000 UTC m=+0.143264084 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:03:54 compute-0 sudo[277237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:03:54 compute-0 sudo[277237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:54 compute-0 sudo[277237]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:54 compute-0 ceph-mon[74295]: pgmap v1097: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 07 14:03:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:03:54 compute-0 sudo[277264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:03:54 compute-0 sudo[277264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:03:54 compute-0 sudo[277264]: pam_unix(sudo:session): session closed for user root
Oct 07 14:03:55 compute-0 nova_compute[259550]: 2025-10-07 14:03:55.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 121 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Oct 07 14:03:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:03:56 compute-0 ceph-mon[74295]: pgmap v1098: 305 pgs: 305 active+clean; 121 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Oct 07 14:03:57 compute-0 nova_compute[259550]: 2025-10-07 14:03:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:03:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 177 op/s
Oct 07 14:03:57 compute-0 ceph-mon[74295]: pgmap v1099: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 177 op/s
Oct 07 14:03:58 compute-0 podman[277289]: 2025-10-07 14:03:58.078752561 +0000 UTC m=+0.061320361 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:03:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Oct 07 14:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.037 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:00 compute-0 nova_compute[259550]: 2025-10-07 14:04:00.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:00 compute-0 ceph-mon[74295]: pgmap v1100: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Oct 07 14:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.581 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 91 op/s
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.832 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.833 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.851 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.920 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.921 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.931 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:04:01 compute-0 nova_compute[259550]: 2025-10-07 14:04:01.932 2 INFO nova.compute.claims [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.038 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:02 compute-0 ceph-mon[74295]: pgmap v1101: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 91 op/s
Oct 07 14:04:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384435753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.519 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.527 2 DEBUG nova.compute.provider_tree [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.544 2 DEBUG nova.scheduler.client.report [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.565 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.566 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.604 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.617 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.633 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.709 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.711 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.711 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.738 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.771 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.796 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.801 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.864 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.867 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.868 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.869 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.899 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:02 compute-0 nova_compute[259550]: 2025-10-07 14:04:02.903 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.271 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.340 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:04:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 754 KiB/s rd, 14 KiB/s wr, 48 op/s
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.443 2 DEBUG nova.objects.instance [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.460 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.462 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.464 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.469 2 WARNING nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.473 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.474 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.477 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.477 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.478 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.478 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.483 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3384435753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310574256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.941 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.971 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:03 compute-0 nova_compute[259550]: 2025-10-07 14:04:03.976 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888465638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.478 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.480 2 DEBUG nova.objects.instance [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.497 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <name>instance-00000006</name>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:04:03</nova:creationTime>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:04:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:04:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:04:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:04:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:04:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:04:04 compute-0 ceph-mon[74295]: pgmap v1102: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 754 KiB/s rd, 14 KiB/s wr, 48 op/s
Oct 07 14:04:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1310574256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1888465638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.621 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.621 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.622 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.648 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.938 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config
Oct 07 14:04:04 compute-0 nova_compute[259550]: 2025-10-07 14:04:04.943 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww6enjpi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.071 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww6enjpi" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.096 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.100 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.122 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845830.0956862, 74c8f655-7045-4ad9-8246-3d2504315607 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.123 2 INFO nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Stopped (Lifecycle Event)
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.200 2 DEBUG nova.compute.manager [None req-4c082ff7-5885-4222-8579-cf149a82d079 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.280 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.281 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.
Oct 07 14:04:05 compute-0 systemd-machined[214580]: New machine qemu-6-instance-00000006.
Oct 07 14:04:05 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 07 14:04:05 compute-0 nova_compute[259550]: 2025-10-07 14:04:05.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 157 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Oct 07 14:04:05 compute-0 ovn_controller[151684]: 2025-10-07T14:04:05Z|00032|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 07 14:04:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.055 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.058 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.058 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.059 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.059 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.061 2 INFO nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Terminating instance
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.063 2 DEBUG nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:04:06 compute-0 kernel: tap8bbf9c96-17 (unregistering): left promiscuous mode
Oct 07 14:04:06 compute-0 NetworkManager[44949]: <info>  [1759845846.1678] device (tap8bbf9c96-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:04:06 compute-0 ovn_controller[151684]: 2025-10-07T14:04:06Z|00033|binding|INFO|Releasing lport 8bbf9c96-17e6-49df-8a58-e3557085f576 from this chassis (sb_readonly=0)
Oct 07 14:04:06 compute-0 ovn_controller[151684]: 2025-10-07T14:04:06Z|00034|binding|INFO|Setting lport 8bbf9c96-17e6-49df-8a58-e3557085f576 down in Southbound
Oct 07 14:04:06 compute-0 ovn_controller[151684]: 2025-10-07T14:04:06Z|00035|binding|INFO|Removing iface tap8bbf9c96-17 ovn-installed in OVS
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.217 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:39:1a 10.100.0.5'], port_security=['fa:16:3e:09:39:1a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5aa06cd5-91e7-4797-83c0-ddd3966533ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8bbf9c96-17e6-49df-8a58-e3557085f576) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.218 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8bbf9c96-17e6-49df-8a58-e3557085f576 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a unbound from our chassis
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.220 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.222 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51cc40b8-e93c-4cb4-ab34-96c9687d8c83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.222 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace which is not needed anymore
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.260 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845846.2593713, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.260 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)
Oct 07 14:04:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 07 14:04:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 16.702s CPU time.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.263 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.263 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:04:06 compute-0 systemd-machined[214580]: Machine qemu-3-instance-00000003 terminated.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.271 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.271 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:04:06 compute-0 NetworkManager[44949]: <info>  [1759845846.2845] manager: (tap8bbf9c96-17): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.303 2 INFO nova.virt.libvirt.driver [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance destroyed successfully.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.303 2 DEBUG nova.objects.instance [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'resources' on Instance uuid 5aa06cd5-91e7-4797-83c0-ddd3966533ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.325 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.335 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.335 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.336 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.336 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.337 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.337 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.347 2 DEBUG nova.virt.libvirt.vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:03:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-741383512',id=3,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:03:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-9ayb8exy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:03:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=5aa06cd5-91e7-4797-83c0-ddd3966533ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.347 2 DEBUG nova.network.os_vif_util [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.348 2 DEBUG nova.network.os_vif_util [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.349 2 DEBUG os_vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bbf9c96-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.372 2 INFO os_vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17')
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.394 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.395 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845846.2622466, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.395 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)
Oct 07 14:04:06 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : haproxy version is 2.8.14-c23fe91
Oct 07 14:04:06 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : path to executable is /usr/sbin/haproxy
Oct 07 14:04:06 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [WARNING]  (275779) : Exiting Master process...
Oct 07 14:04:06 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [ALERT]    (275779) : Current worker (275781) exited with code 143 (Terminated)
Oct 07 14:04:06 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [WARNING]  (275779) : All workers exited. Exiting... (0)
Oct 07 14:04:06 compute-0 systemd[1]: libpod-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope: Deactivated successfully.
Oct 07 14:04:06 compute-0 podman[277705]: 2025-10-07 14:04:06.441154048 +0000 UTC m=+0.062446052 container died ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.456 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.461 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c7010ba2fc6f6ddfa22c1cc2034b39e4a61b0f4287273b69c1e4981fb9a1cb-merged.mount: Deactivated successfully.
Oct 07 14:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:04:06 compute-0 podman[277705]: 2025-10-07 14:04:06.50291993 +0000 UTC m=+0.124211934 container cleanup ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.510 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:06 compute-0 ceph-mon[74295]: pgmap v1103: 305 pgs: 305 active+clean; 157 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Oct 07 14:04:06 compute-0 systemd[1]: libpod-conmon-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope: Deactivated successfully.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.556 2 INFO nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 3.85 seconds to spawn the instance on the hypervisor.
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.557 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:06 compute-0 podman[277750]: 2025-10-07 14:04:06.590119551 +0000 UTC m=+0.060465878 container remove ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.599 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[223e4954-7273-4f0d-baf3-6486674a245b]: (4, ('Tue Oct  7 02:04:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b)\nee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b\nTue Oct  7 02:04:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b)\nee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a9274e-ec83-405e-a276-c116eb44e76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.602 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 kernel: tap384938fa-40: left promiscuous mode
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c078deeb-3311-40a0-9ba7-23c7f9d7324f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.645 2 INFO nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 4.76 seconds to build instance.
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0394262-6759-499e-ace4-7f99841d6cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.664 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e1996e-a0a2-4894-89f9-efb5719c4b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52d96de8-6495-4fb3-83a4-3febf9b35eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639865, 'reachable_time': 20041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277763, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d384938fa\x2d4eb0\x2d4ec5\x2da6a4\x2dbc65721ba22a.mount: Deactivated successfully.
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.699 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:04:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.699 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f544657e-fdb8-4f10-8899-089a80c3ee95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.770 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.869 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.869 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.988 2 INFO nova.virt.libvirt.driver [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deleting instance files /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce_del
Oct 07 14:04:06 compute-0 nova_compute[259550]: 2025-10-07 14:04:06.989 2 INFO nova.virt.libvirt.driver [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deletion of /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce_del complete
Oct 07 14:04:07 compute-0 nova_compute[259550]: 2025-10-07 14:04:07.318 2 INFO nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 1.25 seconds to destroy the instance on the hypervisor.
Oct 07 14:04:07 compute-0 nova_compute[259550]: 2025-10-07 14:04:07.318 2 DEBUG oslo.service.loopingcall [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:04:07 compute-0 nova_compute[259550]: 2025-10-07 14:04:07.318 2 DEBUG nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:04:07 compute-0 nova_compute[259550]: 2025-10-07 14:04:07.319 2 DEBUG nova.network.neutron [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:04:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 167 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.534358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847534405, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2091, "num_deletes": 252, "total_data_size": 3402313, "memory_usage": 3457216, "flush_reason": "Manual Compaction"}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847561618, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3313483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20964, "largest_seqno": 23054, "table_properties": {"data_size": 3304072, "index_size": 5905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19380, "raw_average_key_size": 20, "raw_value_size": 3285069, "raw_average_value_size": 3425, "num_data_blocks": 266, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845639, "oldest_key_time": 1759845639, "file_creation_time": 1759845847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 27320 microseconds, and 9017 cpu microseconds.
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.561674) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3313483 bytes OK
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.561699) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568415) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568438) EVENT_LOG_v1 {"time_micros": 1759845847568430, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568459) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3393532, prev total WAL file size 3393532, number of live WAL files 2.
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.569501) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3235KB)], [50(7424KB)]
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847569537, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10916657, "oldest_snapshot_seqno": -1}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4770 keys, 9171902 bytes, temperature: kUnknown
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847646350, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9171902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9137441, "index_size": 21423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 116896, "raw_average_key_size": 24, "raw_value_size": 9048679, "raw_average_value_size": 1896, "num_data_blocks": 901, "num_entries": 4770, "num_filter_entries": 4770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.646749) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9171902 bytes
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.649543) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 119.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5291, records dropped: 521 output_compression: NoCompression
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.649566) EVENT_LOG_v1 {"time_micros": 1759845847649555, "job": 26, "event": "compaction_finished", "compaction_time_micros": 76928, "compaction_time_cpu_micros": 25415, "output_level": 6, "num_output_files": 1, "total_output_size": 9171902, "num_input_records": 5291, "num_output_records": 4770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847650328, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847651962, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.569401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:04:08 compute-0 ceph-mon[74295]: pgmap v1104: 305 pgs: 305 active+clean; 167 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:04:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 107 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.749 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.749 2 DEBUG nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.749 2 WARNING nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received unexpected event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with vm_state active and task_state deleting.
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.895 2 DEBUG nova.network.neutron [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:09 compute-0 nova_compute[259550]: 2025-10-07 14:04:09.920 2 INFO nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 2.60 seconds to deallocate network for instance.
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.099 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.099 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.172 2 DEBUG oslo_concurrency.processutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:10 compute-0 ceph-mon[74295]: pgmap v1105: 305 pgs: 305 active+clean; 107 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:04:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002777242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.672 2 DEBUG oslo_concurrency.processutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.678 2 DEBUG nova.compute.provider_tree [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.702 2 DEBUG nova.scheduler.client.report [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.726 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.748 2 INFO nova.scheduler.client.report [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Deleted allocations for instance 5aa06cd5-91e7-4797-83c0-ddd3966533ce
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.821 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.865 2 INFO nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Rebuilding instance
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.878 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.879 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.897 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.997 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:10 compute-0 nova_compute[259550]: 2025-10-07 14:04:10.998 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.004 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.005 2 INFO nova.compute.claims [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.140 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.188 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'trusted_certs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.207 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.248 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_requests' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.285 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.341 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.353 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.364 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.370 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:04:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:04:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2002777242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066137532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.656 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.662 2 DEBUG nova.compute.provider_tree [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.678 2 DEBUG nova.scheduler.client.report [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.700 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.702 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.746 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.747 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.774 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.807 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.870 2 DEBUG nova.compute.manager [req-bdca8af6-7780-48d0-8b79-3915dcbdd0b9 req-ee6fb98b-507d-47e1-8fa6-19244796c538 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-deleted-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.931 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.932 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.933 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating image(s)
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.957 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:11 compute-0 nova_compute[259550]: 2025-10-07 14:04:11.981 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.007 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.014 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.086 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.088 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.114 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.121 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 01404604-70e9-49ec-9047-ec42ba41afee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.209 2 DEBUG nova.policy [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d78803d42674b89b4ea28d9a2442357', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '380b73085cef431383bee110ceaefb15', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.426 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 01404604-70e9-49ec-9047-ec42ba41afee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.488 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] resizing rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:04:12 compute-0 ceph-mon[74295]: pgmap v1106: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:04:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2066137532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.598 2 DEBUG nova.objects.instance [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'migration_context' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.635 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.661 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.665 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.666 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.666 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.698 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.699 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.743 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.745 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.769 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:12 compute-0 nova_compute[259550]: 2025-10-07 14:04:12.774 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.059 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Successfully created port: 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:04:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.832 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Successfully updated port: 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.836 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.867 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.868 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.869 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.944 2 DEBUG nova.compute.manager [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.945 2 DEBUG nova.compute.manager [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing instance network info cache due to event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.945 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.952 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.952 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Ensure instance console log exists: /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.953 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.953 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:13 compute-0 nova_compute[259550]: 2025-10-07 14:04:13.954 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:14 compute-0 podman[278107]: 2025-10-07 14:04:14.079803069 +0000 UTC m=+0.063188230 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:04:14 compute-0 podman[278106]: 2025-10-07 14:04:14.109083452 +0000 UTC m=+0.091972701 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:04:14 compute-0 nova_compute[259550]: 2025-10-07 14:04:14.447 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:14 compute-0 ceph-mon[74295]: pgmap v1107: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 117 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 145 op/s
Oct 07 14:04:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.591 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.747 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.749 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance network_info: |[{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.750 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.751 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.754 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start _get_guest_xml network_info=[{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [{'guest_format': None, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.760 2 WARNING nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.770 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.771 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.775 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:03:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='21014677',id=19,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-576165011',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.777 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.777 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.779 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.779 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:04:15 compute-0 nova_compute[259550]: 2025-10-07 14:04:15.784 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255063289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.239 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.241 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:16 compute-0 ceph-mon[74295]: pgmap v1108: 305 pgs: 305 active+clean; 117 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 145 op/s
Oct 07 14:04:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/255063289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291129585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.718 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.740 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.744 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.917 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.919 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.965 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:04:16 compute-0 nova_compute[259550]: 2025-10-07 14:04:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.029 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.030 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.037 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.038 2 INFO nova.compute.claims [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.060 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updated VIF entry in instance network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.061 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.142 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723830996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.191 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.193 2 DEBUG nova.virt.libvirt.vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.193 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.194 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.195 2 DEBUG nova.objects.instance [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.210 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <uuid>01404604-70e9-49ec-9047-ec42ba41afee</uuid>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <name>instance-00000007</name>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1945845741</nova:name>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:04:15</nova:creationTime>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-576165011">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:ephemeral>1</nova:ephemeral>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:user uuid="2d78803d42674b89b4ea28d9a2442357">tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member</nova:user>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:project uuid="380b73085cef431383bee110ceaefb15">tempest-ServersWithSpecificFlavorTestJSON-1902159686</nova:project>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <nova:port uuid="60081c2f-b84d-467f-85f2-c2dd4b4fb4c5">
Oct 07 14:04:17 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="serial">01404604-70e9-49ec-9047-ec42ba41afee</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="uuid">01404604-70e9-49ec-9047-ec42ba41afee</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <target dev="vdb" bus="virtio"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk.config">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:1e:ad:77"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <target dev="tap60081c2f-b8"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/console.log" append="off"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:04:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:04:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:04:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:04:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:04:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Preparing to wait for external event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.217 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.218 2 DEBUG nova.virt.libvirt.vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.218 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.219 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.220 2 DEBUG os_vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.224 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60081c2f-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60081c2f-b8, col_values=(('external_ids', {'iface-id': '60081c2f-b84d-467f-85f2-c2dd4b4fb4c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ad:77', 'vm-uuid': '01404604-70e9-49ec-9047-ec42ba41afee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:17 compute-0 NetworkManager[44949]: <info>  [1759845857.2322] manager: (tap60081c2f-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.239 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.266 2 INFO os_vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8')
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.357 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No VIF found with MAC fa:16:3e:1e:ad:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.358 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Using config drive
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.384 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 136 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Oct 07 14:04:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359812699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/291129585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2723830996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.696 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.703 2 DEBUG nova.compute.provider_tree [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.708 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating config drive at /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.714 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk4vtyjg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.754 2 DEBUG nova.scheduler.client.report [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.787 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.788 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.846 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk4vtyjg" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.846 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.873 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.884 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config 01404604-70e9-49ec-9047-ec42ba41afee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:17 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.921 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:17 compute-0 nova_compute[259550]: 2025-10-07 14:04:17.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.050 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.142 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config 01404604-70e9-49ec-9047-ec42ba41afee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.143 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deleting local config drive /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config because it was imported into RBD.
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.171 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.173 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.173 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating image(s)
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.200 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:18 compute-0 kernel: tap60081c2f-b8: entered promiscuous mode
Oct 07 14:04:18 compute-0 ovn_controller[151684]: 2025-10-07T14:04:18Z|00036|binding|INFO|Claiming lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for this chassis.
Oct 07 14:04:18 compute-0 ovn_controller[151684]: 2025-10-07T14:04:18Z|00037|binding|INFO|60081c2f-b84d-467f-85f2-c2dd4b4fb4c5: Claiming fa:16:3e:1e:ad:77 10.100.0.9
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.2107] manager: (tap60081c2f-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.218 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ad:77 10.100.0.9'], port_security=['fa:16:3e:1e:ad:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01404604-70e9-49ec-9047-ec42ba41afee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.220 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a bound to our chassis
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.221 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct 07 14:04:18 compute-0 ovn_controller[151684]: 2025-10-07T14:04:18Z|00038|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 ovn-installed in OVS
Oct 07 14:04:18 compute-0 ovn_controller[151684]: 2025-10-07T14:04:18Z|00039|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 up in Southbound
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.236 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7575b14-1d70-4b29-8cc1-bf6f76324415]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap384938fa-41 in ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap384938fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3effa3b-c9f5-4add-bfc6-f43cb7820e89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[305e15c7-f4c7-4932-b053-e11f210227c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.244 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.253 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b37842-42e4-4379-b356-231aa2dbcf25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 systemd-machined[214580]: New machine qemu-7-instance-00000007.
Oct 07 14:04:18 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 07 14:04:18 compute-0 systemd-udevd[278372]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78a38714-cb21-4d91-bc28-fed7051d5213]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.2949] device (tap60081c2f-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.2973] device (tap60081c2f-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.314 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.327 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[29fe25af-6d8f-41b3-9887-fc0a16fa3a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.332 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab52c19-bef5-42cc-b401-dba64ed5eecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.3345] manager: (tap384938fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.377 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf347c4-3e0c-48e5-b67c-3bcfc154e876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[012a5885-6d3c-4ce5-94c3-da97befb4c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.4095] device (tap384938fa-40): carrier: link connected
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.410 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.411 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.412 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.412 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe14be9-85c8-405a-8cb7-fc540efa8a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.443 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.445 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe64a74-bcf5-4d95-a592-693004d41cc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644197, 'reachable_time': 16446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278430, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.451 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.468 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcecc35-1656-41cd-9e72-b2f3bfbd64ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:dc9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644197, 'tstamp': 644197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278435, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.493 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d051d6b-c39a-4fe5-af55-06a324c06604]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644197, 'reachable_time': 16446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278437, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG nova.compute.manager [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.537 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.537 2 DEBUG nova.compute.manager [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Processing event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.537 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79b7eca0-8874-45e9-8cc6-24de7a9cd177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dba943e2-7241-4ede-8474-fd039ff3ff97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.623 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.623 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.624 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384938fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:18 compute-0 NetworkManager[44949]: <info>  [1759845858.6270] manager: (tap384938fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:18 compute-0 kernel: tap384938fa-40: entered promiscuous mode
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.630 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap384938fa-40, col_values=(('external_ids', {'iface-id': '86c408a4-938c-4caa-9ec3-5622a47990e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:18 compute-0 ovn_controller[151684]: 2025-10-07T14:04:18Z|00040|binding|INFO|Releasing lport 86c408a4-938c-4caa-9ec3-5622a47990e3 from this chassis (sb_readonly=0)
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.648 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f43eaab-57e5-453d-82f3-1185139fe0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.653 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:04:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.654 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'env', 'PROCESS_TAG=haproxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:04:18 compute-0 ceph-mon[74295]: pgmap v1109: 305 pgs: 305 active+clean; 136 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Oct 07 14:04:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/359812699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.792 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:18 compute-0 nova_compute[259550]: 2025-10-07 14:04:18.868 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] resizing rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.020 2 DEBUG nova.objects.instance [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.035 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.036 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Ensure instance console log exists: /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.036 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.037 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.037 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.039 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.043 2 WARNING nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.052 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.053 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.058 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.059 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.059 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.060 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.060 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.066 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:19 compute-0 podman[278618]: 2025-10-07 14:04:19.129947263 +0000 UTC m=+0.089981707 container create 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:04:19 compute-0 podman[278618]: 2025-10-07 14:04:19.072147198 +0000 UTC m=+0.032181672 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:04:19 compute-0 systemd[1]: Started libpod-conmon-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope.
Oct 07 14:04:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba8972aef5751b49df55a9acd3f9f51d77eb178dc3254adc380d65c0d9ec3cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:19 compute-0 podman[278618]: 2025-10-07 14:04:19.248140665 +0000 UTC m=+0.208175129 container init 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:04:19 compute-0 podman[278618]: 2025-10-07 14:04:19.258016179 +0000 UTC m=+0.218050623 container start 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:04:19 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : New worker (278660) forked
Oct 07 14:04:19 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : Loading success.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.422 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.4216967, 01404604-70e9-49ec-9047-ec42ba41afee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.426 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Started (Lifecycle Event)
Oct 07 14:04:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 162 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 166 op/s
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.437 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.445 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.452 2 INFO nova.virt.libvirt.driver [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance spawned successfully.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.453 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.480 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.488 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.498 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.499 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.501 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.502 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.504 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.505 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531227528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.553 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.554 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.4259088, 01404604-70e9-49ec-9047-ec42ba41afee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Paused (Lifecycle Event)
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.572 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.618 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.627 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.661 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.44624, 01404604-70e9-49ec-9047-ec42ba41afee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Resumed (Lifecycle Event)
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.740 2 INFO nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 7.81 seconds to spawn the instance on the hypervisor.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.740 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/531227528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.762 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.766 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.848 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:19 compute-0 nova_compute[259550]: 2025-10-07 14:04:19.943 2 INFO nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 8.99 seconds to build instance.
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.061 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886957082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.089 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.091 2 DEBUG nova.objects.instance [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.129 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <uuid>9b81bc46-3d46-47e9-85da-b3f62c6db7b2</uuid>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <name>instance-00000008</name>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-448006730</nova:name>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:04:19</nova:creationTime>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:user uuid="affb512bf16041e687eef2ef7709dca5">tempest-ServerDiagnosticsV248Test-1776882737-project-member</nova:user>
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <nova:project uuid="4ba32c1825e14b5c9c382bea6c3047c5">tempest-ServerDiagnosticsV248Test-1776882737</nova:project>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <system>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="serial">9b81bc46-3d46-47e9-85da-b3f62c6db7b2</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="uuid">9b81bc46-3d46-47e9-85da-b3f62c6db7b2</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </system>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <os>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </os>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <features>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </features>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk">
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config">
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/console.log" append="off"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <video>
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </video>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:04:20 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:04:20 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:04:20 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:04:20 compute-0 nova_compute[259550]: </domain>
Oct 07 14:04:20 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.183 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.184 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.184 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Using config drive
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.215 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.558 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating config drive at /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.565 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2quij2t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.667 2 DEBUG nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.668 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.668 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.669 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.669 2 DEBUG nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.669 2 WARNING nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received unexpected event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with vm_state active and task_state None.
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.696 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2quij2t" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.720 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:20 compute-0 nova_compute[259550]: 2025-10-07 14:04:20.724 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:20 compute-0 ceph-mon[74295]: pgmap v1110: 305 pgs: 305 active+clean; 162 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 166 op/s
Oct 07 14:04:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1886957082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.310 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.311 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deleting local config drive /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config because it was imported into RBD.
Oct 07 14:04:21 compute-0 systemd-machined[214580]: New machine qemu-8-instance-00000008.
Oct 07 14:04:21 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 07 14:04:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 769 KiB/s rd, 5.5 MiB/s wr, 153 op/s
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.512 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845846.299679, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.513 2 INFO nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Stopped (Lifecycle Event)
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.529 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.531 2 DEBUG nova.compute.manager [None req-4c3a0000-e7bf-4812-8ba6-3939a5330641 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:21 compute-0 nova_compute[259550]: 2025-10-07 14:04:21.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.193 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845862.1926048, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.193 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Resumed (Lifecycle Event)
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.196 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.196 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.201 2 INFO nova.virt.libvirt.driver [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance spawned successfully.
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.201 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.213 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.219 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.231 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.231 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.233 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.240 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.240 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845862.1955829, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.241 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Started (Lifecycle Event)
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.270 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.274 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.297 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.307 2 INFO nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 4.14 seconds to spawn the instance on the hypervisor.
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.307 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.360 2 INFO nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 5.35 seconds to build instance.
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.382 2 DEBUG nova.compute.manager [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.382 2 DEBUG nova.compute.manager [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing instance network info cache due to event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.385 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:04:22
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups']
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:22 compute-0 ceph-mon[74295]: pgmap v1111: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 769 KiB/s rd, 5.5 MiB/s wr, 153 op/s
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:04:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:22 compute-0 nova_compute[259550]: 2025-10-07 14:04:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 5.5 MiB/s wr, 126 op/s
Oct 07 14:04:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116738628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.556 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2116738628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.857 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.859 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.859 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.864 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.864 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:23 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 07 14:04:23 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.563s CPU time.
Oct 07 14:04:23 compute-0 systemd-machined[214580]: Machine qemu-6-instance-00000006 terminated.
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.881 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updated VIF entry in instance network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:04:23 compute-0 nova_compute[259550]: 2025-10-07 14:04:23.882 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.032 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.033 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.152 2 DEBUG nova.compute.manager [None req-3eb5139d-5ad7-4f85-a2e6-13df2140e66d 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.153 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.158 2 INFO nova.compute.manager [None req-3eb5139d-5ad7-4f85-a2e6-13df2140e66d 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Retrieving diagnostics
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.255 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.90228271484375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.336 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance af17d051-72b6-45f1-b829-94d3a2939519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.337 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 01404604-70e9-49ec-9047-ec42ba41afee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.337 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.338 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.338 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.411 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.552 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance shutdown successfully after 13 seconds.
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.566 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.571 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct 07 14:04:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417708911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:24 compute-0 ceph-mon[74295]: pgmap v1112: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 5.5 MiB/s wr, 126 op/s
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.858 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.865 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.884 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.906 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:04:24 compute-0 nova_compute[259550]: 2025-10-07 14:04:24.907 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:25 compute-0 podman[278893]: 2025-10-07 14:04:25.12544653 +0000 UTC m=+0.110809405 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Oct 07 14:04:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.507 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.508 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.634 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.635 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.662 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.702 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.736 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.740 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.741 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3417708911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:25 compute-0 ceph-mon[74295]: pgmap v1113: 305 pgs: 305 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Oct 07 14:04:25 compute-0 nova_compute[259550]: 2025-10-07 14:04:25.978 2 DEBUG nova.virt.libvirt.imagebackend [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/d37bdf89-ce37-478a-af4d-2b9cd0435b79/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/d37bdf89-ce37-478a-af4d-2b9cd0435b79/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.013 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.105 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.107 2 DEBUG nova.virt.images [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] d37bdf89-ce37-478a-af4d-2b9cd0435b79 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.108 2 DEBUG nova.privsep.utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.111 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 194 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 279 op/s
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.620 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.627 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.702 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.704 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.733 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.737 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:04:27 compute-0 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.266 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.268 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.268 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.293 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.368 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.496 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.496 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.497 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.497 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.498 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.499 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.502 2 WARNING nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.512 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.513 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.517 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.517 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.519 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.519 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.522 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.522 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'vcpu_model' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:28 compute-0 ceph-mon[74295]: pgmap v1114: 305 pgs: 305 active+clean; 194 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 279 op/s
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.574 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:28 compute-0 nova_compute[259550]: 2025-10-07 14:04:28.816 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885171355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.049 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:29 compute-0 podman[279122]: 2025-10-07 14:04:29.086457515 +0000 UTC m=+0.065055810 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.098 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.104 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.132 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.174 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.175 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.175 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.176 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:04:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 158 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 303 op/s
Oct 07 14:04:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3885171355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605597078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.592 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.595 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <name>instance-00000006</name>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:04:28</nova:creationTime>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <system>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </system>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <os>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </os>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <features>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </features>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <video>
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </video>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:04:29 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:04:29 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:04:29 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:04:29 compute-0 nova_compute[259550]: </domain>
Oct 07 14:04:29 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.735 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.736 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.738 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.778 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.827 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'ec2_ids' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:29 compute-0 nova_compute[259550]: 2025-10-07 14:04:29.993 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'keypairs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.486 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.494 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzg73a04q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.641 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzg73a04q" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.703 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.713 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:30 compute-0 ceph-mon[74295]: pgmap v1115: 305 pgs: 305 active+clean; 158 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 303 op/s
Oct 07 14:04:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1605597078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.911 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:30 compute-0 nova_compute[259550]: 2025-10-07 14:04:30.913 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.
Oct 07 14:04:30 compute-0 systemd-machined[214580]: New machine qemu-9-instance-00000006.
Oct 07 14:04:30 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000006.
Oct 07 14:04:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 272 op/s
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.842 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for af17d051-72b6-45f1-b829-94d3a2939519 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.845 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845871.84188, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.846 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.849 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.850 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.856 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.858 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.941 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.947 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.959 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.960 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.960 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.961 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.961 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:31 compute-0 nova_compute[259550]: 2025-10-07 14:04:31.962 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.207 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.212 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845871.8443518, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.212 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010434599246272058 of space, bias 1.0, pg target 0.31303797738816175 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.279 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.307 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.446 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.501 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.502 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.502 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:04:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:04:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:04:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:04:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:04:32 compute-0 ceph-mon[74295]: pgmap v1116: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 272 op/s
Oct 07 14:04:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:04:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:04:32 compute-0 nova_compute[259550]: 2025-10-07 14:04:32.800 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Oct 07 14:04:34 compute-0 ovn_controller[151684]: 2025-10-07T14:04:34Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:ad:77 10.100.0.9
Oct 07 14:04:34 compute-0 ovn_controller[151684]: 2025-10-07T14:04:34Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:ad:77 10.100.0.9
Oct 07 14:04:34 compute-0 ceph-mon[74295]: pgmap v1117: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Oct 07 14:04:35 compute-0 nova_compute[259550]: 2025-10-07 14:04:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 190 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.9 MiB/s wr, 273 op/s
Oct 07 14:04:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:35 compute-0 nova_compute[259550]: 2025-10-07 14:04:35.558 2 DEBUG nova.compute.manager [None req-d1145a78-3b7f-445d-9e05-dd4bb2a1ce19 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:35 compute-0 nova_compute[259550]: 2025-10-07 14:04:35.563 2 INFO nova.compute.manager [None req-d1145a78-3b7f-445d-9e05-dd4bb2a1ce19 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Retrieving diagnostics
Oct 07 14:04:35 compute-0 ceph-mon[74295]: pgmap v1118: 305 pgs: 305 active+clean; 190 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.9 MiB/s wr, 273 op/s
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.403 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.405 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.406 2 INFO nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Terminating instance
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.406 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.407 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquired lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.407 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.550 2 INFO nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Rebuilding instance
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.710 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.824 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'trusted_certs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:36 compute-0 nova_compute[259550]: 2025-10-07 14:04:36.929 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.023 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.417 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Releasing lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.418 2 DEBUG nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:04:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 225 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.493 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'pci_requests' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:37 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 07 14:04:37 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.158s CPU time.
Oct 07 14:04:37 compute-0 systemd-machined[214580]: Machine qemu-8-instance-00000008 terminated.
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.639 2 INFO nova.virt.libvirt.driver [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance destroyed successfully.
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.641 2 DEBUG nova.objects.instance [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'resources' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:37 compute-0 nova_compute[259550]: 2025-10-07 14:04:37.805 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.049 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.347 2 INFO nova.virt.libvirt.driver [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deleting instance files /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_del
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.348 2 INFO nova.virt.libvirt.driver [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deletion of /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_del complete
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.362 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:38 compute-0 ceph-mon[74295]: pgmap v1119: 305 pgs: 305 active+clean; 225 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.795 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:04:38 compute-0 nova_compute[259550]: 2025-10-07 14:04:38.800 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.203 2 INFO nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 1.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.204 2 DEBUG oslo.service.loopingcall [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.205 2 DEBUG nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.205 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:04:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 210 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.568 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.682 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:39 compute-0 nova_compute[259550]: 2025-10-07 14:04:39.945 2 INFO nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 0.74 seconds to deallocate network for instance.
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.306 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.307 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.382 2 DEBUG oslo_concurrency.processutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:40 compute-0 ceph-mon[74295]: pgmap v1120: 305 pgs: 305 active+clean; 210 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Oct 07 14:04:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:04:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393469742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.898 2 DEBUG oslo_concurrency.processutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:40 compute-0 nova_compute[259550]: 2025-10-07 14:04:40.908 2 DEBUG nova.compute.provider_tree [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:04:41 compute-0 nova_compute[259550]: 2025-10-07 14:04:41.028 2 DEBUG nova.scheduler.client.report [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:04:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 242 op/s
Oct 07 14:04:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2393469742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:04:41 compute-0 nova_compute[259550]: 2025-10-07 14:04:41.550 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:41 compute-0 nova_compute[259550]: 2025-10-07 14:04:41.696 2 INFO nova.scheduler.client.report [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Deleted allocations for instance 9b81bc46-3d46-47e9-85da-b3f62c6db7b2
Oct 07 14:04:42 compute-0 nova_compute[259550]: 2025-10-07 14:04:42.160 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:42 compute-0 nova_compute[259550]: 2025-10-07 14:04:42.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:42 compute-0 ceph-mon[74295]: pgmap v1121: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 242 op/s
Oct 07 14:04:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 238 op/s
Oct 07 14:04:44 compute-0 ceph-mon[74295]: pgmap v1122: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 238 op/s
Oct 07 14:04:45 compute-0 podman[279341]: 2025-10-07 14:04:45.100774572 +0000 UTC m=+0.074677878 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:04:45 compute-0 podman[279340]: 2025-10-07 14:04:45.12310879 +0000 UTC m=+0.101057704 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:04:45 compute-0 nova_compute[259550]: 2025-10-07 14:04:45.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 239 op/s
Oct 07 14:04:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:46 compute-0 ceph-mon[74295]: pgmap v1123: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 239 op/s
Oct 07 14:04:47 compute-0 nova_compute[259550]: 2025-10-07 14:04:47.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 185 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct 07 14:04:48 compute-0 ceph-mon[74295]: pgmap v1124: 305 pgs: 305 active+clean; 185 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct 07 14:04:48 compute-0 nova_compute[259550]: 2025-10-07 14:04:48.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:48 compute-0 nova_compute[259550]: 2025-10-07 14:04:48.854 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:04:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 508 KiB/s rd, 3.0 MiB/s wr, 142 op/s
Oct 07 14:04:50 compute-0 nova_compute[259550]: 2025-10-07 14:04:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:50 compute-0 ceph-mon[74295]: pgmap v1125: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 508 KiB/s rd, 3.0 MiB/s wr, 142 op/s
Oct 07 14:04:51 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 07 14:04:51 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Consumed 13.938s CPU time.
Oct 07 14:04:51 compute-0 systemd-machined[214580]: Machine qemu-9-instance-00000006 terminated.
Oct 07 14:04:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct 07 14:04:51 compute-0 nova_compute[259550]: 2025-10-07 14:04:51.870 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance shutdown successfully after 13 seconds.
Oct 07 14:04:51 compute-0 nova_compute[259550]: 2025-10-07 14:04:51.878 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct 07 14:04:51 compute-0 nova_compute[259550]: 2025-10-07 14:04:51.887 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.634 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.635 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete
Oct 07 14:04:52 compute-0 ceph-mon[74295]: pgmap v1126: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.640 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845877.6361618, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.641 2 INFO nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Stopped (Lifecycle Event)
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.699 2 DEBUG nova.compute.manager [None req-87e6f636-9a89-40f5-a821-f856b42db070 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:52.705 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:04:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:52.708 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.864 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.865 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.893 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.919 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.941 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:52 compute-0 nova_compute[259550]: 2025-10-07 14:04:52.945 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.009 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.010 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.011 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.012 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.034 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.038 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.433 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.504 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.632 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.632 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.635 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.640 2 WARNING nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.650 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.651 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.653 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'vcpu_model' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:53 compute-0 nova_compute[259550]: 2025-10-07 14:04:53.684 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175104374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.136 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.163 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.167 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:04:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346577199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.625 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.629 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <name>instance-00000006</name>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:04:53</nova:creationTime>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <system>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </system>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <os>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </os>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <features>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </features>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:04:54 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <video>
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </video>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:04:54 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:04:54 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:04:54 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:04:54 compute-0 nova_compute[259550]: </domain>
Oct 07 14:04:54 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:04:54 compute-0 ceph-mon[74295]: pgmap v1127: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:04:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2175104374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3346577199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.762 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.762 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.763 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.789 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.815 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'ec2_ids' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.860 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'keypairs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:54 compute-0 sudo[279649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:54 compute-0 sudo[279649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:54 compute-0 sudo[279649]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.982 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.982 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.984 2 INFO nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Terminating instance
Oct 07 14:04:54 compute-0 nova_compute[259550]: 2025-10-07 14:04:54.985 2 DEBUG nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:04:54 compute-0 sudo[279674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:04:54 compute-0 sudo[279674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:54 compute-0 sudo[279674]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:55 compute-0 kernel: tap60081c2f-b8 (unregistering): left promiscuous mode
Oct 07 14:04:55 compute-0 NetworkManager[44949]: <info>  [1759845895.0630] device (tap60081c2f-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:04:55 compute-0 sudo[279699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:55 compute-0 sudo[279699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:55 compute-0 sudo[279699]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 ovn_controller[151684]: 2025-10-07T14:04:55Z|00041|binding|INFO|Releasing lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 from this chassis (sb_readonly=0)
Oct 07 14:04:55 compute-0 ovn_controller[151684]: 2025-10-07T14:04:55Z|00042|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 down in Southbound
Oct 07 14:04:55 compute-0 ovn_controller[151684]: 2025-10-07T14:04:55Z|00043|binding|INFO|Removing iface tap60081c2f-b8 ovn-installed in OVS
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.082 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.088 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzud2uc2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.124 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ad:77 10.100.0.9'], port_security=['fa:16:3e:1e:ad:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01404604-70e9-49ec-9047-ec42ba41afee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.125 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a unbound from our chassis
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.126 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:04:55 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 07 14:04:55 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 16.049s CPU time.
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c96644-389c-40c8-826f-2893361cb9b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.129 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace which is not needed anymore
Oct 07 14:04:55 compute-0 systemd-machined[214580]: Machine qemu-7-instance-00000007 terminated.
Oct 07 14:04:55 compute-0 sudo[279726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:04:55 compute-0 sudo[279726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.223 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzud2uc2s" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.253 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.257 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.285 2 INFO nova.virt.libvirt.driver [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance destroyed successfully.
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.286 2 DEBUG nova.objects.instance [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'resources' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:55 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : haproxy version is 2.8.14-c23fe91
Oct 07 14:04:55 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : path to executable is /usr/sbin/haproxy
Oct 07 14:04:55 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [ALERT]    (278658) : Current worker (278660) exited with code 143 (Terminated)
Oct 07 14:04:55 compute-0 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [WARNING]  (278658) : All workers exited. Exiting... (0)
Oct 07 14:04:55 compute-0 podman[279762]: 2025-10-07 14:04:55.293613526 +0000 UTC m=+0.105763650 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 07 14:04:55 compute-0 systemd[1]: libpod-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope: Deactivated successfully.
Oct 07 14:04:55 compute-0 conmon[278635]: conmon 5798779e97104bc286fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope/container/memory.events
Oct 07 14:04:55 compute-0 podman[279798]: 2025-10-07 14:04:55.302500273 +0000 UTC m=+0.063264783 container died 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 07 14:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2-userdata-shm.mount: Deactivated successfully.
Oct 07 14:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba8972aef5751b49df55a9acd3f9f51d77eb178dc3254adc380d65c0d9ec3cd-merged.mount: Deactivated successfully.
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.335 2 DEBUG nova.virt.libvirt.vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:04:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:04:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.336 2 DEBUG nova.network.os_vif_util [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.337 2 DEBUG nova.network.os_vif_util [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.337 2 DEBUG os_vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60081c2f-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 podman[279798]: 2025-10-07 14:04:55.383480199 +0000 UTC m=+0.144244709 container cleanup 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.387 2 INFO os_vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8')
Oct 07 14:04:55 compute-0 systemd[1]: libpod-conmon-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope: Deactivated successfully.
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 191 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 3.7 MiB/s wr, 84 op/s
Oct 07 14:04:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:04:55 compute-0 podman[279888]: 2025-10-07 14:04:55.507485415 +0000 UTC m=+0.094079517 container remove 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.513 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd2474b-493e-4567-be4a-cb947869545b]: (4, ('Tue Oct  7 02:04:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2)\n5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2\nTue Oct  7 02:04:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2)\n5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.518 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f9bece-2972-4da2-a08a-020f68c2c8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.519 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.537 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.538 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 kernel: tap384938fa-40: left promiscuous mode
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81a672eb-75bd-4562-91c2-1819b92a8906]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b6537f-ce3a-42c1-91f9-45320c9fbd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.574 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a714dde-512c-4219-a9f5-12dc3875cfe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13957d82-c929-4b97-aec6-63872b8cc779]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644188, 'reachable_time': 33820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279954, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d384938fa\x2d4eb0\x2d4ec5\x2da6a4\x2dbc65721ba22a.mount: Deactivated successfully.
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.598 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:04:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.599 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[da421ed4-cc5f-4747-994b-2c62127fdc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:04:55 compute-0 systemd-machined[214580]: New machine qemu-10-instance-00000006.
Oct 07 14:04:55 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000006.
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:04:55 compute-0 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:04:55 compute-0 podman[280000]: 2025-10-07 14:04:55.830885364 +0000 UTC m=+0.081258784 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:04:55 compute-0 podman[280000]: 2025-10-07 14:04:55.934368662 +0000 UTC m=+0.184742072 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:04:56 compute-0 ceph-mon[74295]: pgmap v1128: 305 pgs: 305 active+clean; 191 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 3.7 MiB/s wr, 84 op/s
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.793 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for af17d051-72b6-45f1-b829-94d3a2939519 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.794 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845896.792987, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.794 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.797 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.798 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.802 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.803 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.831 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.845 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.845 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.846 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.847 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.848 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.848 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.852 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845896.7967756, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.890 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.894 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:04:56 compute-0 sudo[279726]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.915 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.923 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:04:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.971 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.972 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:56 compute-0 nova_compute[259550]: 2025-10-07 14:04:56.972 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:04:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.080 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:57 compute-0 sudo[280198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:57 compute-0 sudo[280198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:57 compute-0 sudo[280198]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:57 compute-0 sudo[280223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:04:57 compute-0 sudo[280223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:57 compute-0 sudo[280223]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:57 compute-0 sudo[280248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:57 compute-0 sudo[280248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:57 compute-0 sudo[280248]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:57 compute-0 sudo[280273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:04:57 compute-0 sudo[280273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.489 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.491 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.491 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.492 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.492 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.494 2 INFO nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Terminating instance
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.495 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.496 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquired lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.496 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.759 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.790 2 DEBUG nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.791 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.791 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.792 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.792 2 DEBUG nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:04:57 compute-0 nova_compute[259550]: 2025-10-07 14:04:57.792 2 WARNING nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received unexpected event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with vm_state active and task_state deleting.
Oct 07 14:04:57 compute-0 sudo[280273]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:04:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:04:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:04:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:04:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:04:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1f452177-40df-430d-aeb9-dca7dc4ac672 does not exist
Oct 07 14:04:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 66af92ee-5ab5-407e-a6ea-4df72b4f7541 does not exist
Oct 07 14:04:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev aed59fd1-9dc1-48cb-bd51-564d2c2a371f does not exist
Oct 07 14:04:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:04:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:04:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:04:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:04:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:04:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:04:58 compute-0 nova_compute[259550]: 2025-10-07 14:04:58.055 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:58 compute-0 ceph-mon[74295]: pgmap v1129: 305 pgs: 305 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 07 14:04:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:04:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:04:58 compute-0 sudo[280329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:58 compute-0 sudo[280329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:58 compute-0 sudo[280329]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:58 compute-0 nova_compute[259550]: 2025-10-07 14:04:58.102 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Releasing lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:04:58 compute-0 nova_compute[259550]: 2025-10-07 14:04:58.103 2 DEBUG nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:04:58 compute-0 sudo[280354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:04:58 compute-0 sudo[280354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:58 compute-0 sudo[280354]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:58 compute-0 sudo[280379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:04:58 compute-0 sudo[280379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:58 compute-0 sudo[280379]: pam_unix(sudo:session): session closed for user root
Oct 07 14:04:58 compute-0 sudo[280404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:04:58 compute-0 sudo[280404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:04:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 07 14:04:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000006.scope: Consumed 2.265s CPU time.
Oct 07 14:04:58 compute-0 systemd-machined[214580]: Machine qemu-10-instance-00000006 terminated.
Oct 07 14:04:58 compute-0 nova_compute[259550]: 2025-10-07 14:04:58.533 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct 07 14:04:58 compute-0 nova_compute[259550]: 2025-10-07 14:04:58.534 2 DEBUG nova.objects.instance [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:04:58 compute-0 podman[280490]: 2025-10-07 14:04:58.721878142 +0000 UTC m=+0.032750576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:04:58 compute-0 podman[280490]: 2025-10-07 14:04:58.817203012 +0000 UTC m=+0.128075406 container create 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:04:58 compute-0 systemd[1]: Started libpod-conmon-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope.
Oct 07 14:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:04:59 compute-0 podman[280490]: 2025-10-07 14:04:59.006260848 +0000 UTC m=+0.317133302 container init 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.008 2 INFO nova.virt.libvirt.driver [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deleting instance files /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee_del
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.010 2 INFO nova.virt.libvirt.driver [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deletion of /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee_del complete
Oct 07 14:04:59 compute-0 podman[280490]: 2025-10-07 14:04:59.016726638 +0000 UTC m=+0.327599022 container start 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:04:59 compute-0 podman[280490]: 2025-10-07 14:04:59.021315381 +0000 UTC m=+0.332187795 container attach 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:04:59 compute-0 objective_blackburn[280506]: 167 167
Oct 07 14:04:59 compute-0 systemd[1]: libpod-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope: Deactivated successfully.
Oct 07 14:04:59 compute-0 conmon[280506]: conmon 4eb6ad457fde0965906f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope/container/memory.events
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.066 2 INFO nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 4.08 seconds to destroy the instance on the hypervisor.
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG oslo.service.loopingcall [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG nova.network.neutron [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:04:59 compute-0 podman[280511]: 2025-10-07 14:04:59.071320159 +0000 UTC m=+0.029067679 container died 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:04:59 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:04:59 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:04:59 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:04:59 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d283d6545ba6fdfb3c28451f3c7b2438d97d76814a4ef6fbf296f2c0e639aa-merged.mount: Deactivated successfully.
Oct 07 14:04:59 compute-0 podman[280511]: 2025-10-07 14:04:59.140248082 +0000 UTC m=+0.097995582 container remove 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:04:59 compute-0 systemd[1]: libpod-conmon-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope: Deactivated successfully.
Oct 07 14:04:59 compute-0 podman[280527]: 2025-10-07 14:04:59.21907189 +0000 UTC m=+0.070359263 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 07 14:04:59 compute-0 podman[280555]: 2025-10-07 14:04:59.353256118 +0000 UTC m=+0.051761455 container create 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:04:59 compute-0 systemd[1]: Started libpod-conmon-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope.
Oct 07 14:04:59 compute-0 podman[280555]: 2025-10-07 14:04:59.327846839 +0000 UTC m=+0.026352226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.427 2 INFO nova.virt.libvirt.driver [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.429 2 INFO nova.virt.libvirt.driver [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete
Oct 07 14:04:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 126 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 151 op/s
Oct 07 14:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:04:59 compute-0 podman[280555]: 2025-10-07 14:04:59.464427062 +0000 UTC m=+0.162932419 container init 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:04:59 compute-0 podman[280555]: 2025-10-07 14:04:59.478410066 +0000 UTC m=+0.176915413 container start 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:04:59 compute-0 podman[280555]: 2025-10-07 14:04:59.483989434 +0000 UTC m=+0.182494791 container attach 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.517 2 INFO nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.517 2 DEBUG oslo.service.loopingcall [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.518 2 DEBUG nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.518 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.719 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.738 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.758 2 INFO nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 0.24 seconds to deallocate network for instance.
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.840 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.841 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:04:59 compute-0 nova_compute[259550]: 2025-10-07 14:04:59.945 2 DEBUG oslo_concurrency.processutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:00 compute-0 ceph-mon[74295]: pgmap v1130: 305 pgs: 305 active+clean; 126 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 151 op/s
Oct 07 14:05:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363682272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.377 2 DEBUG oslo_concurrency.processutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.391 2 DEBUG nova.compute.provider_tree [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.408 2 DEBUG nova.scheduler.client.report [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.429 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.463 2 INFO nova.scheduler.client.report [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Deleted allocations for instance af17d051-72b6-45f1-b829-94d3a2939519
Oct 07 14:05:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.506 2 DEBUG nova.network.neutron [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.542 2 INFO nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 1.47 seconds to deallocate network for instance.
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.548 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.586 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.587 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:00 compute-0 interesting_bohr[280572]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:05:00 compute-0 interesting_bohr[280572]: --> relative data size: 1.0
Oct 07 14:05:00 compute-0 interesting_bohr[280572]: --> All data devices are unavailable
Oct 07 14:05:00 compute-0 systemd[1]: libpod-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Deactivated successfully.
Oct 07 14:05:00 compute-0 systemd[1]: libpod-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Consumed 1.099s CPU time.
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.652 2 DEBUG oslo_concurrency.processutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:00 compute-0 podman[280623]: 2025-10-07 14:05:00.681898012 +0000 UTC m=+0.028926885 container died 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:05:00 compute-0 nova_compute[259550]: 2025-10-07 14:05:00.693 2 DEBUG nova.compute.manager [req-82edb5a4-ce9f-4cf7-8025-04115d2cca85 req-241661ab-2adb-46e3-a3f5-436f0d2f7554 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-deleted-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758-merged.mount: Deactivated successfully.
Oct 07 14:05:00 compute-0 podman[280623]: 2025-10-07 14:05:00.760362501 +0000 UTC m=+0.107391354 container remove 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:05:00 compute-0 systemd[1]: libpod-conmon-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Deactivated successfully.
Oct 07 14:05:00 compute-0 sudo[280404]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:00 compute-0 sudo[280655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:05:00 compute-0 sudo[280655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:00 compute-0 sudo[280655]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:00 compute-0 sudo[280682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:05:00 compute-0 sudo[280682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:00 compute-0 sudo[280682]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:00 compute-0 sudo[280707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:05:00 compute-0 sudo[280707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:00 compute-0 sudo[280707]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:01 compute-0 sudo[280732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:05:01 compute-0 sudo[280732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198301669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.122 2 DEBUG oslo_concurrency.processutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1363682272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1198301669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.133 2 DEBUG nova.compute.provider_tree [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.152 2 DEBUG nova.scheduler.client.report [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.175 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.220 2 INFO nova.scheduler.client.report [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Deleted allocations for instance 01404604-70e9-49ec-9047-ec42ba41afee
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:01 compute-0 nova_compute[259550]: 2025-10-07 14:05:01.284 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.430618197 +0000 UTC m=+0.042416525 container create 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:05:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct 07 14:05:01 compute-0 systemd[1]: Started libpod-conmon-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope.
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.412588525 +0000 UTC m=+0.024386883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:05:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.528280169 +0000 UTC m=+0.140078527 container init 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.540796413 +0000 UTC m=+0.152594741 container start 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.545462058 +0000 UTC m=+0.157260386 container attach 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:05:01 compute-0 quirky_nash[280816]: 167 167
Oct 07 14:05:01 compute-0 systemd[1]: libpod-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope: Deactivated successfully.
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.550322498 +0000 UTC m=+0.162120826 container died 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a855eff98aa7e2269330241313a3a31d47e15235d198e857438ea25589630ae1-merged.mount: Deactivated successfully.
Oct 07 14:05:01 compute-0 podman[280800]: 2025-10-07 14:05:01.598331533 +0000 UTC m=+0.210129861 container remove 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:05:01 compute-0 systemd[1]: libpod-conmon-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope: Deactivated successfully.
Oct 07 14:05:01 compute-0 podman[280840]: 2025-10-07 14:05:01.785227001 +0000 UTC m=+0.042954231 container create 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:05:01 compute-0 systemd[1]: Started libpod-conmon-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope.
Oct 07 14:05:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:01 compute-0 podman[280840]: 2025-10-07 14:05:01.767108676 +0000 UTC m=+0.024835926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:05:01 compute-0 podman[280840]: 2025-10-07 14:05:01.87234639 +0000 UTC m=+0.130073670 container init 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:05:01 compute-0 podman[280840]: 2025-10-07 14:05:01.882080821 +0000 UTC m=+0.139808051 container start 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:05:01 compute-0 podman[280840]: 2025-10-07 14:05:01.886237942 +0000 UTC m=+0.143965202 container attach 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:05:02 compute-0 ceph-mon[74295]: pgmap v1131: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct 07 14:05:02 compute-0 funny_raman[280856]: {
Oct 07 14:05:02 compute-0 funny_raman[280856]:     "0": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:         {
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "devices": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "/dev/loop3"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             ],
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_name": "ceph_lv0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_size": "21470642176",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "name": "ceph_lv0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "tags": {
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_name": "ceph",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.crush_device_class": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.encrypted": "0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_id": "0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.vdo": "0"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             },
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "vg_name": "ceph_vg0"
Oct 07 14:05:02 compute-0 funny_raman[280856]:         }
Oct 07 14:05:02 compute-0 funny_raman[280856]:     ],
Oct 07 14:05:02 compute-0 funny_raman[280856]:     "1": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:         {
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "devices": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "/dev/loop4"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             ],
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_name": "ceph_lv1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_size": "21470642176",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "name": "ceph_lv1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "tags": {
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_name": "ceph",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.crush_device_class": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.encrypted": "0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_id": "1",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.vdo": "0"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             },
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "vg_name": "ceph_vg1"
Oct 07 14:05:02 compute-0 funny_raman[280856]:         }
Oct 07 14:05:02 compute-0 funny_raman[280856]:     ],
Oct 07 14:05:02 compute-0 funny_raman[280856]:     "2": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:         {
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "devices": [
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "/dev/loop5"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             ],
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_name": "ceph_lv2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_size": "21470642176",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "name": "ceph_lv2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "tags": {
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.cluster_name": "ceph",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.crush_device_class": "",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.encrypted": "0",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osd_id": "2",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:                 "ceph.vdo": "0"
Oct 07 14:05:02 compute-0 funny_raman[280856]:             },
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "type": "block",
Oct 07 14:05:02 compute-0 funny_raman[280856]:             "vg_name": "ceph_vg2"
Oct 07 14:05:02 compute-0 funny_raman[280856]:         }
Oct 07 14:05:02 compute-0 funny_raman[280856]:     ]
Oct 07 14:05:02 compute-0 funny_raman[280856]: }
Oct 07 14:05:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:02.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:02 compute-0 systemd[1]: libpod-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope: Deactivated successfully.
Oct 07 14:05:02 compute-0 podman[280840]: 2025-10-07 14:05:02.749759107 +0000 UTC m=+1.007486337 container died 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245-merged.mount: Deactivated successfully.
Oct 07 14:05:02 compute-0 podman[280840]: 2025-10-07 14:05:02.807384278 +0000 UTC m=+1.065111508 container remove 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:05:02 compute-0 systemd[1]: libpod-conmon-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope: Deactivated successfully.
Oct 07 14:05:02 compute-0 sudo[280732]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:02 compute-0 sudo[280877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:05:02 compute-0 sudo[280877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:02 compute-0 sudo[280877]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:02 compute-0 sudo[280902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:05:02 compute-0 sudo[280902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:02 compute-0 sudo[280902]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:03 compute-0 sudo[280927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:05:03 compute-0 sudo[280927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:03 compute-0 sudo[280927]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:03 compute-0 sudo[280952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:05:03 compute-0 sudo[280952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.448369371 +0000 UTC m=+0.042415746 container create 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:05:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 07 14:05:03 compute-0 systemd[1]: Started libpod-conmon-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope.
Oct 07 14:05:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.4326187 +0000 UTC m=+0.026665095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.545119798 +0000 UTC m=+0.139166193 container init 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.553825041 +0000 UTC m=+0.147871416 container start 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:05:03 compute-0 epic_easley[281033]: 167 167
Oct 07 14:05:03 compute-0 systemd[1]: libpod-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope: Deactivated successfully.
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.561560438 +0000 UTC m=+0.155606843 container attach 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:05:03 compute-0 conmon[281033]: conmon 4d9d25869691aa4dea62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope/container/memory.events
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.563296164 +0000 UTC m=+0.157342539 container died 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0bb181dbc0d38f7e781fec98b37f27361f7059d806d1facab445c37d3682e0-merged.mount: Deactivated successfully.
Oct 07 14:05:03 compute-0 podman[281017]: 2025-10-07 14:05:03.600095068 +0000 UTC m=+0.194141443 container remove 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:05:03 compute-0 systemd[1]: libpod-conmon-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope: Deactivated successfully.
Oct 07 14:05:03 compute-0 podman[281058]: 2025-10-07 14:05:03.753887151 +0000 UTC m=+0.038494350 container create cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:05:03 compute-0 systemd[1]: Started libpod-conmon-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope.
Oct 07 14:05:03 compute-0 podman[281058]: 2025-10-07 14:05:03.739206529 +0000 UTC m=+0.023813758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:05:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:03 compute-0 podman[281058]: 2025-10-07 14:05:03.871275651 +0000 UTC m=+0.155882940 container init cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:05:03 compute-0 podman[281058]: 2025-10-07 14:05:03.883660172 +0000 UTC m=+0.168267401 container start cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:05:03 compute-0 podman[281058]: 2025-10-07 14:05:03.88882411 +0000 UTC m=+0.173431309 container attach cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:05:04 compute-0 ceph-mon[74295]: pgmap v1132: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 07 14:05:04 compute-0 gallant_keller[281075]: {
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_id": 2,
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "type": "bluestore"
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     },
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_id": 1,
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "type": "bluestore"
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     },
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_id": 0,
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:05:04 compute-0 gallant_keller[281075]:         "type": "bluestore"
Oct 07 14:05:04 compute-0 gallant_keller[281075]:     }
Oct 07 14:05:04 compute-0 gallant_keller[281075]: }
Oct 07 14:05:05 compute-0 systemd[1]: libpod-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Deactivated successfully.
Oct 07 14:05:05 compute-0 podman[281058]: 2025-10-07 14:05:05.020043244 +0000 UTC m=+1.304650443 container died cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:05:05 compute-0 systemd[1]: libpod-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Consumed 1.141s CPU time.
Oct 07 14:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337-merged.mount: Deactivated successfully.
Oct 07 14:05:05 compute-0 podman[281058]: 2025-10-07 14:05:05.086840681 +0000 UTC m=+1.371447890 container remove cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:05:05 compute-0 systemd[1]: libpod-conmon-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Deactivated successfully.
Oct 07 14:05:05 compute-0 sudo[280952]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:05:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:05:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:05:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:05:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5cc89144-b655-4080-88f5-8c5adac06ce2 does not exist
Oct 07 14:05:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 250f5054-d262-4047-9059-5905510bd2c5 does not exist
Oct 07 14:05:05 compute-0 sudo[281118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:05:05 compute-0 sudo[281118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:05 compute-0 sudo[281118]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:05 compute-0 sudo[281143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:05:05 compute-0 sudo[281143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:05:05 compute-0 sudo[281143]: pam_unix(sudo:session): session closed for user root
Oct 07 14:05:05 compute-0 nova_compute[259550]: 2025-10-07 14:05:05.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:05 compute-0 nova_compute[259550]: 2025-10-07 14:05:05.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 07 14:05:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:05 compute-0 nova_compute[259550]: 2025-10-07 14:05:05.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:05 compute-0 nova_compute[259550]: 2025-10-07 14:05:05.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:05:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:05:06 compute-0 ceph-mon[74295]: pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 07 14:05:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 230 KiB/s wr, 153 op/s
Oct 07 14:05:08 compute-0 ceph-mon[74295]: pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 230 KiB/s wr, 153 op/s
Oct 07 14:05:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 114 op/s
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.873 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.874 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.892 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.956 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.957 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.970 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:09 compute-0 nova_compute[259550]: 2025-10-07 14:05:09.970 2 INFO nova.compute.claims [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.052 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.220 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845895.219699, 01404604-70e9-49ec-9047-ec42ba41afee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.222 2 INFO nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Stopped (Lifecycle Event)
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.244 2 DEBUG nova.compute.manager [None req-f75758d6-e16a-4610-89c6-534235d3cddd - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617181251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.515 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:10 compute-0 ceph-mon[74295]: pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 114 op/s
Oct 07 14:05:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/617181251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.521 2 DEBUG nova.compute.provider_tree [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.540 2 DEBUG nova.scheduler.client.report [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.560 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.561 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.610 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.611 2 DEBUG nova.network.neutron [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.635 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.655 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.745 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.746 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.747 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating image(s)
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.770 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.794 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.817 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.821 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.885 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.887 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.888 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.888 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.915 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:10 compute-0 nova_compute[259550]: 2025-10-07 14:05:10.920 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.064 2 DEBUG nova.network.neutron [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.065 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.289 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.345 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] resizing rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.5 KiB/s wr, 54 op/s
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.474 2 DEBUG nova.objects.instance [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'migration_context' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.588 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.589 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Ensure instance console log exists: /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.592 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.597 2 WARNING nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.606 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.607 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.610 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.614 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.614 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:11 compute-0 nova_compute[259550]: 2025-10-07 14:05:11.617 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439828980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.081 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.110 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.115 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:12 compute-0 ceph-mon[74295]: pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.5 KiB/s wr, 54 op/s
Oct 07 14:05:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3439828980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319367602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.566 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.569 2 DEBUG nova.objects.instance [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'pci_devices' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.620 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <uuid>dbb074a6-4040-4bb9-8c58-ee07f164e2ec</uuid>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <name>instance-00000009</name>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:name>tempest-TenantUsagesTestJSON-server-1931796297</nova:name>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:11</nova:creationTime>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:user uuid="554cedcefe7a4a1b8d0977fcf096d3fb">tempest-TenantUsagesTestJSON-632087855-project-member</nova:user>
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <nova:project uuid="17c7dbcd6fa548a9b31ecf0706a8b974">tempest-TenantUsagesTestJSON-632087855</nova:project>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="serial">dbb074a6-4040-4bb9-8c58-ee07f164e2ec</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="uuid">dbb074a6-4040-4bb9-8c58-ee07f164e2ec</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk">
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config">
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/console.log" append="off"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:12 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:12 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:12 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:12 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:12 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.729 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.729 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.730 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Using config drive
Oct 07 14:05:12 compute-0 nova_compute[259550]: 2025-10-07 14:05:12.751 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.035 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating config drive at /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.041 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh4ms4xi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.176 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh4ms4xi" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.208 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.213 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.379 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.380 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deleting local config drive /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config because it was imported into RBD.
Oct 07 14:05:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 852 B/s wr, 13 op/s
Oct 07 14:05:13 compute-0 systemd-machined[214580]: New machine qemu-11-instance-00000009.
Oct 07 14:05:13 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000009.
Oct 07 14:05:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1319367602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.531 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845898.5292194, af17d051-72b6-45f1-b829-94d3a2939519 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.531 2 INFO nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Stopped (Lifecycle Event)
Oct 07 14:05:13 compute-0 nova_compute[259550]: 2025-10-07 14:05:13.581 2 DEBUG nova.compute.manager [None req-951ac14b-abb0-4861-92dd-8cda14b2f446 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:14 compute-0 ceph-mon[74295]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 852 B/s wr, 13 op/s
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.792 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845914.7923188, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.793 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Resumed (Lifecycle Event)
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.795 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.796 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.799 2 INFO nova.virt.libvirt.driver [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance spawned successfully.
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.799 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.837 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.842 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.899 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.901 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.902 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.902 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.903 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.904 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.970 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.970 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845914.7933679, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:14 compute-0 nova_compute[259550]: 2025-10-07 14:05:14.971 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Started (Lifecycle Event)
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.197 2 INFO nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 4.45 seconds to spawn the instance on the hypervisor.
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.198 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.304 2 INFO nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 5.37 seconds to build instance.
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.337 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:15 compute-0 nova_compute[259550]: 2025-10-07 14:05:15.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 69 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Oct 07 14:05:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:16 compute-0 podman[281536]: 2025-10-07 14:05:16.085839894 +0000 UTC m=+0.063004326 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:05:16 compute-0 podman[281535]: 2025-10-07 14:05:16.087663992 +0000 UTC m=+0.065302228 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:05:16 compute-0 ceph-mon[74295]: pgmap v1138: 305 pgs: 305 active+clean; 69 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.716 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.718 2 INFO nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Terminating instance
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquired lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:16 compute-0 nova_compute[259550]: 2025-10-07 14:05:16.920 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.296 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.339 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Releasing lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.340 2 DEBUG nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:05:17 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 07 14:05:17 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Consumed 3.901s CPU time.
Oct 07 14:05:17 compute-0 systemd-machined[214580]: Machine qemu-11-instance-00000009 terminated.
Oct 07 14:05:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 226 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.564 2 INFO nova.virt.libvirt.driver [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance destroyed successfully.
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.564 2 DEBUG nova.objects.instance [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'resources' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.588 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.589 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.727 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.885 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.886 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.895 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.896 2 INFO nova.compute.claims [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.988 2 INFO nova.virt.libvirt.driver [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deleting instance files /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_del
Oct 07 14:05:17 compute-0 nova_compute[259550]: 2025-10-07 14:05:17.989 2 INFO nova.virt.libvirt.driver [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deletion of /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_del complete
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.123 2 INFO nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG oslo.service.loopingcall [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.186 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.270 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.327 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.379 2 INFO nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 0.25 seconds to deallocate network for instance.
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.414 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:18 compute-0 ceph-mon[74295]: pgmap v1139: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 226 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 07 14:05:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941124990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.630 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.637 2 DEBUG nova.compute.provider_tree [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.659 2 DEBUG nova.scheduler.client.report [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.692 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.693 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.696 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.733 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.734 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.753 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.773 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.782 2 DEBUG oslo_concurrency.processutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.861 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.865 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.866 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.893 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.918 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.947 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.952 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:18 compute-0 nova_compute[259550]: 2025-10-07 14:05:18.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.024 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.026 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.026 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.027 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.051 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.056 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.080 2 DEBUG nova.policy [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:05:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265124334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.326 2 DEBUG oslo_concurrency.processutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.334 2 DEBUG nova.compute.provider_tree [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.364 2 DEBUG nova.scheduler.client.report [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.373 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.401 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.444 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 60 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.475 2 INFO nova.scheduler.client.report [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Deleted allocations for instance dbb074a6-4040-4bb9-8c58-ee07f164e2ec
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.551 2 DEBUG nova.objects.instance [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.557 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.568 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.570 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2941124990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3265124334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.627 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.627 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.658 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.842 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.844 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.852 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.853 2 INFO nova.compute.claims [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:19 compute-0 nova_compute[259550]: 2025-10-07 14:05:19.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.001 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.331 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Successfully created port: 9b25db0b-246e-456c-82d7-cf361c57f9c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459009998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.472 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.479 2 DEBUG nova.compute.provider_tree [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.495 2 DEBUG nova.scheduler.client.report [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.526 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.527 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.567 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.567 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.582 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:20 compute-0 ceph-mon[74295]: pgmap v1140: 305 pgs: 305 active+clean; 60 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 07 14:05:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/459009998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.599 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.752 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.755 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.756 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating image(s)
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.779 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.805 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.829 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.833 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.903 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.904 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.905 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.905 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.930 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:20 compute-0 nova_compute[259550]: 2025-10-07 14:05:20.934 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.015 2 DEBUG nova.policy [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.272 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.345 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.496 2 DEBUG nova.objects.instance [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.556 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.557 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Ensure instance console log exists: /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.558 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.559 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.560 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.571 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Successfully created port: 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.686 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Successfully updated port: 9b25db0b-246e-456c-82d7-cf361c57f9c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.833 2 DEBUG nova.compute.manager [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-changed-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG nova.compute.manager [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Refreshing instance network info cache due to event network-changed-9b25db0b-246e-456c-82d7-cf361c57f9c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Refreshing network info cache for port 9b25db0b-246e-456c-82d7-cf361c57f9c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.846 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:21 compute-0 nova_compute[259550]: 2025-10-07 14:05:21.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.131 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.563 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:22 compute-0 ceph-mon[74295]: pgmap v1141: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.607 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:05:22
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'vms', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.609 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.609 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:05:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.882 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.929 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Successfully updated port: 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:05:22 compute-0 nova_compute[259550]: 2025-10-07 14:05:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.017 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.017 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.018 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.024 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.056 2 DEBUG nova.compute.manager [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-changed-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.056 2 DEBUG nova.compute.manager [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Refreshing instance network info cache due to event network-changed-3486260f-fd35-48fb-a925-cbe6f4a1a9f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.057 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.341 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct 07 14:05:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357035517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1357035517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.681 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4688MB free_disk=59.97964096069336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.794 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.819 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ddf09c33-d956-404b-a5d8-44a3727f9a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.819 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.820 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.820 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.873 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.897 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.897 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance network_info: |[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.900 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.904 2 WARNING nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.909 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.910 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.916 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:23 compute-0 nova_compute[259550]: 2025-10-07 14:05:23.919 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735479179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.326 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.333 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478153480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.363 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.385 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.390 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.413 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.463 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.463 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:24 compute-0 ceph-mon[74295]: pgmap v1142: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct 07 14:05:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3735479179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2478153480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.803 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202289319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.827 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.828 2 DEBUG nova.virt.libvirt.vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:18Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.829 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.829 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.830 2 DEBUG nova.objects.instance [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.832 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance network_info: |[{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Refreshing network info cache for port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.836 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start _get_guest_xml network_info=[{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.840 2 WARNING nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.844 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.845 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.851 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.854 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.884 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <name>instance-0000000a</name>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:23</nova:creationTime>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct 07 14:05:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6c:03:d4"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <target dev="tap9b25db0b-24"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.886 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Preparing to wait for external event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.887 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.887 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.888 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.889 2 DEBUG nova.virt.libvirt.vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:18Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.889 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.890 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.890 2 DEBUG os_vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:24 compute-0 NetworkManager[44949]: <info>  [1759845924.9017] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:24 compute-0 nova_compute[259550]: 2025-10-07 14:05:24.911 2 INFO os_vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.094 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.095 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.095 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.096 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.127 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485311075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.305 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.326 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.331 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 112 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 179 op/s
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.465 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.749 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.755 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o0wlq1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613226568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.785 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.787 2 DEBUG nova.virt.libvirt.vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:20Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.788 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.789 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.790 2 DEBUG nova.objects.instance [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.807 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.808 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.812 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <uuid>7322f2d1-885e-4e41-8a96-e90d4ddc6c38</uuid>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <name>instance-0000000b</name>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-33708594</nova:name>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:24</nova:creationTime>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <nova:port uuid="3486260f-fd35-48fb-a925-cbe6f4a1a9f5">
Oct 07 14:05:25 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="serial">7322f2d1-885e-4e41-8a96-e90d4ddc6c38</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="uuid">7322f2d1-885e-4e41-8a96-e90d4ddc6c38</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk">
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config">
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:25 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:68:5d:93"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <target dev="tap3486260f-fd"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/console.log" append="off"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:25 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:25 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:25 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:25 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:25 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.813 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Preparing to wait for external event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.813 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.814 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.814 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.virt.libvirt.vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:20Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.816 2 DEBUG os_vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.820 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3486260f-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3486260f-fd, col_values=(('external_ids', {'iface-id': '3486260f-fd35-48fb-a925-cbe6f4a1a9f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:5d:93', 'vm-uuid': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:25 compute-0 NetworkManager[44949]: <info>  [1759845925.8237] manager: (tap3486260f-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.823 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.830 2 INFO os_vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd')
Oct 07 14:05:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3202289319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1485311075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2613226568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.890 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.890 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.896 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.897 2 INFO nova.compute.claims [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.899 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o0wlq1t" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.923 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.928 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.963 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.964 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.964 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:68:5d:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.965 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Using config drive
Oct 07 14:05:25 compute-0 nova_compute[259550]: 2025-10-07 14:05:25.993 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.048 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.048 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.098 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:26 compute-0 podman[282229]: 2025-10-07 14:05:26.127163256 +0000 UTC m=+0.116730852 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.127 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.128 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.129 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.
Oct 07 14:05:26 compute-0 kernel: tap9b25db0b-24: entered promiscuous mode
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.2062] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 07 14:05:26 compute-0 ovn_controller[151684]: 2025-10-07T14:05:26Z|00044|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct 07 14:05:26 compute-0 ovn_controller[151684]: 2025-10-07T14:05:26Z|00045|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.226 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.227 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.228 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:05:26 compute-0 systemd-machined[214580]: New machine qemu-12-instance-0000000a.
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a37e379-be09-4d3e-9793-6945fb9186e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.246 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1eabd9ee-61 in ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.248 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1eabd9ee-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0442ef50-24f9-4fc8-ba4f-15dd74d314fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.250 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c36a5e05-6460-486a-a033-cf7b79f7c401]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000a.
Oct 07 14:05:26 compute-0 systemd-udevd[282301]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.265 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0deb0828-9097-49db-96e1-f76dfeac5bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.2761] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.2786] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.297 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30ab7615-a867-4d69-a597-c01972bed74c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 ovn_controller[151684]: 2025-10-07T14:05:26Z|00046|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct 07 14:05:26 compute-0 ovn_controller[151684]: 2025-10-07T14:05:26Z|00047|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.339 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62533f46-c84e-42df-a8ac-4cc5d237cf91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.345 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9962620f-d531-4e52-ae87-92fdcdbfedda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.3473] manager: (tap1eabd9ee-60): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.385 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6cad1063-b938-4037-a54a-7121c3c6bb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.388 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc1c40c-18d1-4eeb-bf77-f9edc80a68f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.4173] device (tap1eabd9ee-60): carrier: link connected
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.424 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c582eb4-7738-4e3e-adf3-8480ca458bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.447 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00e4b3b2-e5c9-4d18-bd6d-3abe08997af9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d98c75-627e-4c91-992a-c421000cba34]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:d4d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650998, 'tstamp': 650998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282342, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.499 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df5fab42-4bfe-488c-ba81-438aaedf2f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282343, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aa8a68-64ea-4266-822b-5ab001b14083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.588 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating config drive at /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.594 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptzrif0ux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f91e182-ead2-4006-9319-635ad55a29ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.609 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.609 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.610 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:26 compute-0 NetworkManager[44949]: <info>  [1759845926.6133] manager: (tap1eabd9ee-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 07 14:05:26 compute-0 kernel: tap1eabd9ee-60: entered promiscuous mode
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:26 compute-0 ovn_controller[151684]: 2025-10-07T14:05:26Z|00048|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:05:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914118239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.643 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[049d078c-ecab-401e-b011-4268441623f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.645 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:05:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.646 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'env', 'PROCESS_TAG=haproxy-1eabd9ee-6333-432b-b50d-9679677d38f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1eabd9ee-6333-432b-b50d-9679677d38f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.663 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.671 2 DEBUG nova.compute.provider_tree [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.697 2 DEBUG nova.scheduler.client.report [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.726 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptzrif0ux" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.752 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.757 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.791 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.793 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.807 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updated VIF entry in instance network info cache for port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.808 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:26 compute-0 ceph-mon[74295]: pgmap v1143: 305 pgs: 305 active+clean; 112 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 179 op/s
Oct 07 14:05:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1914118239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.928 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.950 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:26 compute-0 nova_compute[259550]: 2025-10-07 14:05:26.951 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deleting local config drive /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config because it was imported into RBD.
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.003 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.004 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:27 compute-0 NetworkManager[44949]: <info>  [1759845927.0121] manager: (tap3486260f-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Oct 07 14:05:27 compute-0 kernel: tap3486260f-fd: entered promiscuous mode
Oct 07 14:05:27 compute-0 ovn_controller[151684]: 2025-10-07T14:05:27Z|00049|binding|INFO|Claiming lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 for this chassis.
Oct 07 14:05:27 compute-0 ovn_controller[151684]: 2025-10-07T14:05:27Z|00050|binding|INFO|3486260f-fd35-48fb-a925-cbe6f4a1a9f5: Claiming fa:16:3e:68:5d:93 10.100.0.9
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:27 compute-0 NetworkManager[44949]: <info>  [1759845927.0265] device (tap3486260f-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:05:27 compute-0 NetworkManager[44949]: <info>  [1759845927.0278] device (tap3486260f-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.029 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5d:93 10.100.0.9'], port_security=['fa:16:3e:68:5d:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3486260f-fd35-48fb-a925-cbe6f4a1a9f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:27 compute-0 ovn_controller[151684]: 2025-10-07T14:05:27Z|00051|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 ovn-installed in OVS
Oct 07 14:05:27 compute-0 ovn_controller[151684]: 2025-10-07T14:05:27Z|00052|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 up in Southbound
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:27 compute-0 systemd-machined[214580]: New machine qemu-13-instance-0000000b.
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.063 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:27 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000b.
Oct 07 14:05:27 compute-0 podman[282421]: 2025-10-07 14:05:27.092092274 +0000 UTC m=+0.090990295 container create 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.095 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:27 compute-0 podman[282421]: 2025-10-07 14:05:27.049612107 +0000 UTC m=+0.048510148 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:05:27 compute-0 systemd[1]: Started libpod-conmon-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope.
Oct 07 14:05:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee58677923a11d6eaf88f22eb26cdfd6a6c1bfcaf21246d843b12d3493d77fd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:27 compute-0 podman[282421]: 2025-10-07 14:05:27.237766199 +0000 UTC m=+0.236664260 container init 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:05:27 compute-0 podman[282421]: 2025-10-07 14:05:27.243750609 +0000 UTC m=+0.242648630 container start 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:05:27 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : New worker (282458) forked
Oct 07 14:05:27 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : Loading success.
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.345 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.348 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.361 2 DEBUG nova.policy [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '124a23e91e614186848847e685d191d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54e39443887a407284ed98974d4e0771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.364 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae044ac8-96ee-4211-baa7-6904b1ffad62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.374 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.379 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.380 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating image(s)
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.407 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.408 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[08d115dc-12bf-43ad-b516-a88cc5f6a9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.412 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6740b5-a635-44b2-b895-b38dae2251f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.444 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.450 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cc37f8-375d-40af-878d-5318e98d4b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[347d88e3-0232-43e9-8c25-045dea49308c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282508, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a694bf-2dca-41e9-ac59-802fca4aa280]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282516, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282516, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.489 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.490 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.492 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.493 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.493 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.497 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.525 2 DEBUG nova.compute.manager [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG nova.compute.manager [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Processing event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.558 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.559 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.560 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.560 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.580 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.583 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a2f7901e-6572-4162-b995-0c44fb69eab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:27 compute-0 nova_compute[259550]: 2025-10-07 14:05:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:27 compute-0 ceph-mon[74295]: pgmap v1144: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.224 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.225 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2230887, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.225 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.231 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.237 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.238 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.309 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.313 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.362 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.362 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.364 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.418 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.419 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2249036, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.419 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Paused (Lifecycle Event)
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.443 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a2f7901e-6572-4162-b995-0c44fb69eab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.513 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] resizing rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.645 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.648 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Successfully created port: d4571f56-54c6-4986-845c-cd57c4faadac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.654 2 INFO nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 9.79 seconds to spawn the instance on the hypervisor.
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.654 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2298362, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.715 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.801 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.3106873, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.802 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Started (Lifecycle Event)
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.805 2 INFO nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 10.95 seconds to build instance.
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.848 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.311534, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Paused (Lifecycle Event)
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.858 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.874 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.878 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.905 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:28 compute-0 nova_compute[259550]: 2025-10-07 14:05:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.121 2 DEBUG nova.objects.instance [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'migration_context' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.157 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.158 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Ensure instance console log exists: /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 168 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 176 op/s
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.706 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.706 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 WARNING nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state None.
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Processing event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] No waiting events found dispatching network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.709 2 WARNING nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received unexpected event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 for instance with vm_state building and task_state spawning.
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.710 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.714 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845929.7134633, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.715 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Resumed (Lifecycle Event)
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.717 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.721 2 INFO nova.virt.libvirt.driver [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance spawned successfully.
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.722 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.729 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Successfully updated port: d4571f56-54c6-4986-845c-cd57c4faadac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.969 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.977 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.978 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.978 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.979 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.979 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.980 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.985 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.996 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.997 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:29 compute-0 nova_compute[259550]: 2025-10-07 14:05:29.997 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.021 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.045 2 INFO nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 9.29 seconds to spawn the instance on the hypervisor.
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.045 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:30 compute-0 podman[282724]: 2025-10-07 14:05:30.10006544 +0000 UTC m=+0.078857900 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.109 2 INFO nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 10.40 seconds to build instance.
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.128 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.225 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.411 2 DEBUG nova.compute.manager [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.412 2 DEBUG nova.compute.manager [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.412 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.499865) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930499939, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1242, "num_deletes": 507, "total_data_size": 1313482, "memory_usage": 1336920, "flush_reason": "Manual Compaction"}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930514730, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 893255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23055, "largest_seqno": 24296, "table_properties": {"data_size": 888584, "index_size": 1619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14668, "raw_average_key_size": 18, "raw_value_size": 876646, "raw_average_value_size": 1132, "num_data_blocks": 73, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845848, "oldest_key_time": 1759845848, "file_creation_time": 1759845930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 14908 microseconds, and 3631 cpu microseconds.
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.514773) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 893255 bytes OK
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.514796) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516302) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516318) EVENT_LOG_v1 {"time_micros": 1759845930516312, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516342) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1306705, prev total WAL file size 1306705, number of live WAL files 2.
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.517251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(872KB)], [53(8956KB)]
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930517323, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10065157, "oldest_snapshot_seqno": -1}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4547 keys, 7056062 bytes, temperature: kUnknown
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930558081, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7056062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7025983, "index_size": 17617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 113930, "raw_average_key_size": 25, "raw_value_size": 6944036, "raw_average_value_size": 1527, "num_data_blocks": 732, "num_entries": 4547, "num_filter_entries": 4547, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:05:30 compute-0 ceph-mon[74295]: pgmap v1145: 305 pgs: 305 active+clean; 168 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 176 op/s
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.558398) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7056062 bytes
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.561210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.4 rd, 172.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(19.2) write-amplify(7.9) OK, records in: 5544, records dropped: 997 output_compression: NoCompression
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.561237) EVENT_LOG_v1 {"time_micros": 1759845930561223, "job": 28, "event": "compaction_finished", "compaction_time_micros": 40846, "compaction_time_cpu_micros": 18237, "output_level": 6, "num_output_files": 1, "total_output_size": 7056062, "num_input_records": 5544, "num_output_records": 4547, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930561708, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930563263, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.517138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:05:30 compute-0 nova_compute[259550]: 2025-10-07 14:05:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.386 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.408 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.409 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance network_info: |[{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.410 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.410 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.413 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start _get_guest_xml network_info=[{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.419 2 WARNING nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.428 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.429 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.434 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.435 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.439 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.439 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.442 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct 07 14:05:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913640470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.897 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.922 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:31 compute-0 nova_compute[259550]: 2025-10-07 14:05:31.927 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010426968361333595 of space, bias 1.0, pg target 0.3128090508400078 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549963102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.376 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.378 2 DEBUG nova.virt.libvirt.vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:27Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.379 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.380 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.381 2 DEBUG nova.objects.instance [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.441 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <uuid>a2f7901e-6572-4162-b995-0c44fb69eab5</uuid>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <name>instance-0000000c</name>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105</nova:name>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:31</nova:creationTime>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:user uuid="124a23e91e614186848847e685d191d9">tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member</nova:user>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:project uuid="54e39443887a407284ed98974d4e0771">tempest-FloatingIPsAssociationNegativeTestJSON-909075908</nova:project>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <nova:port uuid="d4571f56-54c6-4986-845c-cd57c4faadac">
Oct 07 14:05:32 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="serial">a2f7901e-6572-4162-b995-0c44fb69eab5</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="uuid">a2f7901e-6572-4162-b995-0c44fb69eab5</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a2f7901e-6572-4162-b995-0c44fb69eab5_disk">
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config">
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:47:1a:81"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <target dev="tapd4571f56-54"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/console.log" append="off"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:32 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:32 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:32 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:32 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:32 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.447 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Preparing to wait for external event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.449 2 DEBUG nova.virt.libvirt.vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:27Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.450 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.450 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.451 2 DEBUG os_vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4571f56-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4571f56-54, col_values=(('external_ids', {'iface-id': 'd4571f56-54c6-4986-845c-cd57c4faadac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:1a:81', 'vm-uuid': 'a2f7901e-6572-4162-b995-0c44fb69eab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:32 compute-0 NetworkManager[44949]: <info>  [1759845932.4615] manager: (tapd4571f56-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.469 2 INFO os_vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54')
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.540 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.543 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.546 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No VIF found with MAC fa:16:3e:47:1a:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.547 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Using config drive
Oct 07 14:05:32 compute-0 ceph-mon[74295]: pgmap v1146: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct 07 14:05:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3913640470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1549963102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.593 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.599 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845917.5605228, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.600 2 INFO nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Stopped (Lifecycle Event)
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.639 2 DEBUG nova.compute.manager [None req-9432ad32-7ba7-4281-9023-48cb5966d012 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.808 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.809 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:32 compute-0 nova_compute[259550]: 2025-10-07 14:05:32.829 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.176 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating config drive at /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.182 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcn8c9_lg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.315 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcn8c9_lg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.348 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.352 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 134 op/s
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.509 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.510 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deleting local config drive /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config because it was imported into RBD.
Oct 07 14:05:33 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:05:33 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:05:33 compute-0 NetworkManager[44949]: <info>  [1759845933.5731] manager: (tapd4571f56-54): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 07 14:05:33 compute-0 kernel: tapd4571f56-54: entered promiscuous mode
Oct 07 14:05:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:05:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:05:33 compute-0 ovn_controller[151684]: 2025-10-07T14:05:33Z|00053|binding|INFO|Claiming lport d4571f56-54c6-4986-845c-cd57c4faadac for this chassis.
Oct 07 14:05:33 compute-0 ovn_controller[151684]: 2025-10-07T14:05:33Z|00054|binding|INFO|d4571f56-54c6-4986-845c-cd57c4faadac: Claiming fa:16:3e:47:1a:81 10.100.0.9
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.605 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:1a:81 10.100.0.9'], port_security=['fa:16:3e:47:1a:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a2f7901e-6572-4162-b995-0c44fb69eab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54e39443887a407284ed98974d4e0771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b31c4f6-cd30-4c48-acc0-2bceb0f06bb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11ba191e-5394-4c4a-8b33-ca086e98b88d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d4571f56-54c6-4986-845c-cd57c4faadac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.611 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d4571f56-54c6-4986-845c-cd57c4faadac in datapath c11e448c-21ee-492b-8c01-cf2af1e6c4f4 bound to our chassis
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.613 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c11e448c-21ee-492b-8c01-cf2af1e6c4f4
Oct 07 14:05:33 compute-0 systemd-machined[214580]: New machine qemu-14-instance-0000000c.
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.628 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[322d2c35-f92a-4783-944b-08151802387a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.629 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc11e448c-21 in ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.631 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc11e448c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[757a8b46-0eaa-4118-8f22-87d386959654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c9d9f1-379f-45d8-bb47-2102cead6c24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Oct 07 14:05:33 compute-0 systemd-udevd[282880]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.655 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb33639-0275-4689-ba44-50fb7475f820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 NetworkManager[44949]: <info>  [1759845933.6730] device (tapd4571f56-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:33 compute-0 ovn_controller[151684]: 2025-10-07T14:05:33Z|00055|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac ovn-installed in OVS
Oct 07 14:05:33 compute-0 ovn_controller[151684]: 2025-10-07T14:05:33Z|00056|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac up in Southbound
Oct 07 14:05:33 compute-0 NetworkManager[44949]: <info>  [1759845933.6758] device (tapd4571f56-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2038cc05-fb79-4c16-9d48-0b1a0d9806de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.716 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a563063-4eaf-42e6-b1ca-4d2279344b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[728d1f6c-8924-4e74-b561-b2eed31ab3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 NetworkManager[44949]: <info>  [1759845933.7272] manager: (tapc11e448c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Oct 07 14:05:33 compute-0 systemd-udevd[282883]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d0143-3e4f-4513-8c58-6e0a7d8d7ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.767 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0b25f1-c168-4779-ace2-f3f36c1e2ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 NetworkManager[44949]: <info>  [1759845933.8061] device (tapc11e448c-20): carrier: link connected
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.814 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f5a6ef-d8fa-405e-88a6-448cce6b554f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.836 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc95221-8f99-4eb7-8496-72269f730c0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc11e448c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:ab:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651737, 'reachable_time': 40650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282912, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.859 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8043d0a0-f9cb-4e5d-99f7-28014ecf4515]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:abcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651737, 'tstamp': 651737}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282913, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.874 2 DEBUG nova.compute.manager [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.875 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.875 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.876 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:33 compute-0 nova_compute[259550]: 2025-10-07 14:05:33.876 2 DEBUG nova.compute.manager [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Processing event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.882 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46bf391e-3ec3-4d1f-a38d-886965cac203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc11e448c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:ab:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651737, 'reachable_time': 40650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282914, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[703069e2-0e66-40ca-bf9a-619116d93734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.018 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8938ba2-79b4-455f-a7b4-2fdf17bbf3bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc11e448c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.022 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.022 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc11e448c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:34 compute-0 NetworkManager[44949]: <info>  [1759845934.0260] manager: (tapc11e448c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 07 14:05:34 compute-0 kernel: tapc11e448c-20: entered promiscuous mode
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.031 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc11e448c-20, col_values=(('external_ids', {'iface-id': '7dd78e44-59a7-4f45-983c-498cc6aa3cd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:34 compute-0 ovn_controller[151684]: 2025-10-07T14:05:34Z|00057|binding|INFO|Releasing lport 7dd78e44-59a7-4f45-983c-498cc6aa3cd4 from this chassis (sb_readonly=0)
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.056 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9878fddf-c3c2-43a3-a94d-bc6453e1b14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.058 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c11e448c-21ee-492b-8c01-cf2af1e6c4f4
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c11e448c-21ee-492b-8c01-cf2af1e6c4f4
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:05:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.060 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'env', 'PROCESS_TAG=haproxy-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:05:34 compute-0 podman[282986]: 2025-10-07 14:05:34.484847129 +0000 UTC m=+0.054244332 container create eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:05:34 compute-0 systemd[1]: Started libpod-conmon-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope.
Oct 07 14:05:34 compute-0 podman[282986]: 2025-10-07 14:05:34.456633054 +0000 UTC m=+0.026030267 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.560 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.561 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.5612476, a2f7901e-6572-4162-b995-0c44fb69eab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.562 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Started (Lifecycle Event)
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.569 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.573 2 INFO nova.virt.libvirt.driver [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance spawned successfully.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.574 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:05:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:05:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfefc9e73b43b0851f8beb7cc640de90c39f70e6d14855bf71f6c4d9005459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:05:34 compute-0 ceph-mon[74295]: pgmap v1147: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 134 op/s
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.603 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:34 compute-0 podman[282986]: 2025-10-07 14:05:34.606203014 +0000 UTC m=+0.175600247 container init eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.613 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:34 compute-0 podman[282986]: 2025-10-07 14:05:34.618959505 +0000 UTC m=+0.188356728 container start eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.621 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.622 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.622 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.623 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.624 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.624 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.635 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.636 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.56133, a2f7901e-6572-4162-b995-0c44fb69eab5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.636 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Paused (Lifecycle Event)
Oct 07 14:05:34 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : New worker (283006) forked
Oct 07 14:05:34 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : Loading success.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.691 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.695 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.5687776, a2f7901e-6572-4162-b995-0c44fb69eab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Resumed (Lifecycle Event)
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.736 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.744 2 INFO nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 7.37 seconds to spawn the instance on the hypervisor.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.744 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.773 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.820 2 INFO nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 8.95 seconds to build instance.
Oct 07 14:05:34 compute-0 nova_compute[259550]: 2025-10-07 14:05:34.850 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.074 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.102 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.172 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.173 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.182 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.183 2 INFO nova.compute.claims [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.328 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 214 op/s
Oct 07 14:05:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 07 14:05:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 07 14:05:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 07 14:05:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769448829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.855 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.861 2 DEBUG nova.compute.provider_tree [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.889 2 DEBUG nova.scheduler.client.report [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.910 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:35 compute-0 nova_compute[259550]: 2025-10-07 14:05:35.911 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.235 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.236 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.299 2 DEBUG nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.299 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.300 2 WARNING nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received unexpected event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with vm_state active and task_state None.
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.401 2 DEBUG nova.policy [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.417 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.481 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:36 compute-0 ceph-mon[74295]: pgmap v1148: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 214 op/s
Oct 07 14:05:36 compute-0 ceph-mon[74295]: osdmap e132: 3 total, 3 up, 3 in
Oct 07 14:05:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/769448829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.805 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.807 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.808 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating image(s)
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.833 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.866 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.898 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.905 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.973 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.975 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.975 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:36 compute-0 nova_compute[259550]: 2025-10-07 14:05:36.976 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.002 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.006 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74094438-0995-4031-9943-cc85a5ef4f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.162 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Successfully created port: 132e9e5d-eaab-437e-a82e-d49f6c4a09df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.650 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74094438-0995-4031-9943-cc85a5ef4f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.722 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.938 2 DEBUG nova.objects.instance [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.976 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.977 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Ensure instance console log exists: /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.977 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.978 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:37 compute-0 nova_compute[259550]: 2025-10-07 14:05:37.978 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 07 14:05:38 compute-0 ceph-mon[74295]: pgmap v1150: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Oct 07 14:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 07 14:05:38 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.426 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Successfully updated port: 132e9e5d-eaab-437e-a82e-d49f6c4a09df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.447 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.448 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.448 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 213 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 280 op/s
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.596 2 DEBUG nova.compute.manager [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-changed-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.597 2 DEBUG nova.compute.manager [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Refreshing instance network info cache due to event network-changed-132e9e5d-eaab-437e-a82e-d49f6c4a09df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:39 compute-0 nova_compute[259550]: 2025-10-07 14:05:39.597 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:39 compute-0 ceph-mon[74295]: osdmap e133: 3 total, 3 up, 3 in
Oct 07 14:05:40 compute-0 nova_compute[259550]: 2025-10-07 14:05:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:40 compute-0 nova_compute[259550]: 2025-10-07 14:05:40.608 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:40 compute-0 ceph-mon[74295]: pgmap v1152: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 213 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 280 op/s
Oct 07 14:05:40 compute-0 nova_compute[259550]: 2025-10-07 14:05:40.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:40 compute-0 NetworkManager[44949]: <info>  [1759845940.8513] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 07 14:05:40 compute-0 NetworkManager[44949]: <info>  [1759845940.8524] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 07 14:05:40 compute-0 ovn_controller[151684]: 2025-10-07T14:05:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:05:40 compute-0 ovn_controller[151684]: 2025-10-07T14:05:40Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:05:40 compute-0 nova_compute[259550]: 2025-10-07 14:05:40.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:40 compute-0 ovn_controller[151684]: 2025-10-07T14:05:40Z|00058|binding|INFO|Releasing lport 7dd78e44-59a7-4f45-983c-498cc6aa3cd4 from this chassis (sb_readonly=0)
Oct 07 14:05:40 compute-0 ovn_controller[151684]: 2025-10-07T14:05:40Z|00059|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:05:40 compute-0 nova_compute[259550]: 2025-10-07 14:05:40.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 3.5 MiB/s wr, 339 op/s
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.707 2 DEBUG nova.compute.manager [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG nova.compute.manager [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.709 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.973 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.992 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:41 compute-0 nova_compute[259550]: 2025-10-07 14:05:41.998 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance network_info: |[{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.001 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.001 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Refreshing network info cache for port 132e9e5d-eaab-437e-a82e-d49f6c4a09df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.006 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start _get_guest_xml network_info=[{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.012 2 WARNING nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.018 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.019 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.028 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.029 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.030 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.030 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.035 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.040 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271294480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.491 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.516 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:42 compute-0 nova_compute[259550]: 2025-10-07 14:05:42.522 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:42 compute-0 ceph-mon[74295]: pgmap v1153: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 3.5 MiB/s wr, 339 op/s
Oct 07 14:05:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2271294480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/964272690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.000 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.004 2 DEBUG nova.virt.libvirt.vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:36Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.004 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.006 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:43 compute-0 ovn_controller[151684]: 2025-10-07T14:05:43Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:5d:93 10.100.0.9
Oct 07 14:05:43 compute-0 ovn_controller[151684]: 2025-10-07T14:05:43Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:5d:93 10.100.0.9
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.008 2 DEBUG nova.objects.instance [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.023 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <uuid>74094438-0995-4031-9943-cc85a5ef4f57</uuid>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <name>instance-0000000d</name>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-222035485</nova:name>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:42</nova:creationTime>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <nova:port uuid="132e9e5d-eaab-437e-a82e-d49f6c4a09df">
Oct 07 14:05:43 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="serial">74094438-0995-4031-9943-cc85a5ef4f57</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="uuid">74094438-0995-4031-9943-cc85a5ef4f57</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/74094438-0995-4031-9943-cc85a5ef4f57_disk">
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/74094438-0995-4031-9943-cc85a5ef4f57_disk.config">
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:66:2c:66"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <target dev="tap132e9e5d-ea"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/console.log" append="off"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:43 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:43 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:43 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:43 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:43 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.030 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Preparing to wait for external event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.032 2 DEBUG nova.virt.libvirt.vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:36Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.033 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.034 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.034 2 DEBUG os_vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132e9e5d-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132e9e5d-ea, col_values=(('external_ids', {'iface-id': '132e9e5d-eaab-437e-a82e-d49f6c4a09df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:2c:66', 'vm-uuid': '74094438-0995-4031-9943-cc85a5ef4f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:43 compute-0 NetworkManager[44949]: <info>  [1759845943.0454] manager: (tap132e9e5d-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.054 2 INFO os_vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea')
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.109 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.110 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.111 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:66:2c:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.112 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Using config drive
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.135 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 218 op/s
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.593 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating config drive at /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.599 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpajbk0hx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.682 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.683 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.723 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.735 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpajbk0hx7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.762 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:43 compute-0 nova_compute[259550]: 2025-10-07 14:05:43.767 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config 74094438-0995-4031-9943-cc85a5ef4f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/964272690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.099 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config 74094438-0995-4031-9943-cc85a5ef4f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.100 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deleting local config drive /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config because it was imported into RBD.
Oct 07 14:05:44 compute-0 kernel: tap132e9e5d-ea: entered promiscuous mode
Oct 07 14:05:44 compute-0 ovn_controller[151684]: 2025-10-07T14:05:44Z|00060|binding|INFO|Claiming lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df for this chassis.
Oct 07 14:05:44 compute-0 ovn_controller[151684]: 2025-10-07T14:05:44Z|00061|binding|INFO|132e9e5d-eaab-437e-a82e-d49f6c4a09df: Claiming fa:16:3e:66:2c:66 10.100.0.10
Oct 07 14:05:44 compute-0 NetworkManager[44949]: <info>  [1759845944.1717] manager: (tap132e9e5d-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:44 compute-0 ovn_controller[151684]: 2025-10-07T14:05:44Z|00062|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df ovn-installed in OVS
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:44 compute-0 systemd-udevd[283340]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:05:44 compute-0 ovn_controller[151684]: 2025-10-07T14:05:44Z|00063|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df up in Southbound
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.215 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:2c:66 10.100.0.10'], port_security=['fa:16:3e:66:2c:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '74094438-0995-4031-9943-cc85a5ef4f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132e9e5d-eaab-437e-a82e-d49f6c4a09df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.216 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132e9e5d-eaab-437e-a82e-d49f6c4a09df in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.218 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:05:44 compute-0 NetworkManager[44949]: <info>  [1759845944.2247] device (tap132e9e5d-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:05:44 compute-0 systemd-machined[214580]: New machine qemu-15-instance-0000000d.
Oct 07 14:05:44 compute-0 NetworkManager[44949]: <info>  [1759845944.2266] device (tap132e9e5d-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:05:44 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb1e52d-ccfd-4f32-8915-ecf2815ebae9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.288 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updated VIF entry in instance network info cache for port 132e9e5d-eaab-437e-a82e-d49f6c4a09df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.289 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.292 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[29030d98-fc01-428b-9385-52120d28fa3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.296 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2327cb71-095a-47cc-a7f3-b3b8a69afe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.329 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1eddd0df-32f0-487c-99b4-582ea41e53b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8490addb-db4c-49ca-b5f5-3f8da5f0c8e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283355, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.377 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e71fe57e-94aa-4c41-8912-3aea10e6a5ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283356, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283356, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.380 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.415 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.716 2 DEBUG nova.compute.manager [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.716 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:44 compute-0 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG nova.compute.manager [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Processing event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:05:44 compute-0 ceph-mon[74295]: pgmap v1154: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 218 op/s
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 269 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Oct 07 14:05:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.680 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6797159, 74094438-0995-4031-9943-cc85a5ef4f57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Started (Lifecycle Event)
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.683 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.691 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.696 2 INFO nova.virt.libvirt.driver [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance spawned successfully.
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.697 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.706 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.728 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.728 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6798453, 74094438-0995-4031-9943-cc85a5ef4f57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.729 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Paused (Lifecycle Event)
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.732 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.733 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.735 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.756 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.761 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6904776, 74094438-0995-4031-9943-cc85a5ef4f57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.762 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Resumed (Lifecycle Event)
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.791 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.798 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.807 2 INFO nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 9.00 seconds to spawn the instance on the hypervisor.
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.807 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.817 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:05:45 compute-0 nova_compute[259550]: 2025-10-07 14:05:45.872 2 INFO nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 10.72 seconds to build instance.
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.053 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:46 compute-0 ceph-mon[74295]: pgmap v1155: 305 pgs: 305 active+clean; 269 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.835 2 DEBUG nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.837 2 DEBUG nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:05:46 compute-0 nova_compute[259550]: 2025-10-07 14:05:46.837 2 WARNING nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received unexpected event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with vm_state active and task_state None.
Oct 07 14:05:47 compute-0 podman[283400]: 2025-10-07 14:05:47.100584411 +0000 UTC m=+0.079435444 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:05:47 compute-0 podman[283399]: 2025-10-07 14:05:47.106701154 +0000 UTC m=+0.086033070 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:05:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Oct 07 14:05:47 compute-0 ceph-mon[74295]: pgmap v1156: 305 pgs: 305 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Oct 07 14:05:48 compute-0 nova_compute[259550]: 2025-10-07 14:05:48.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:48 compute-0 ovn_controller[151684]: 2025-10-07T14:05:48Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:1a:81 10.100.0.9
Oct 07 14:05:48 compute-0 ovn_controller[151684]: 2025-10-07T14:05:48Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:1a:81 10.100.0.9
Oct 07 14:05:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 314 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.2 MiB/s wr, 317 op/s
Oct 07 14:05:50 compute-0 nova_compute[259550]: 2025-10-07 14:05:50.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 07 14:05:50 compute-0 nova_compute[259550]: 2025-10-07 14:05:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 07 14:05:50 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 07 14:05:50 compute-0 ceph-mon[74295]: pgmap v1157: 305 pgs: 305 active+clean; 314 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.2 MiB/s wr, 317 op/s
Oct 07 14:05:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct 07 14:05:51 compute-0 ceph-mon[74295]: osdmap e134: 3 total, 3 up, 3 in
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.062 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.062 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.150 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.279 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.280 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.289 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.290 2 INFO nova.compute.claims [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.368 2 DEBUG nova.compute.manager [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG nova.compute.manager [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.531 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:05:52 compute-0 ceph-mon[74295]: pgmap v1159: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct 07 14:05:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646590604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.957 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.964 2 DEBUG nova.compute.provider_tree [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:52 compute-0 nova_compute[259550]: 2025-10-07 14:05:52.990 2 DEBUG nova.scheduler.client.report [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.032 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.033 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.094 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.095 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:53.134 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:53.135 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.150 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.184 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:05:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.533 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.534 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.535 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating image(s)
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.561 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.587 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.611 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.615 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.696 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.697 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.698 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.699 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.725 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.731 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1646590604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:53 compute-0 nova_compute[259550]: 2025-10-07 14:05:53.799 2 DEBUG nova.policy [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.123 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.124 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.143 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.250 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.316 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.681 2 DEBUG nova.objects.instance [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.697 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.698 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Ensure instance console log exists: /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.699 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.700 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.701 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:54 compute-0 nova_compute[259550]: 2025-10-07 14:05:54.804 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Successfully created port: 184f2379-8442-414e-bccb-6f5e5a314e72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:05:54 compute-0 ceph-mon[74295]: pgmap v1160: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 326 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.472 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Successfully updated port: 184f2379-8442-414e-bccb-6f5e5a314e72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.540 2 DEBUG nova.compute.manager [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-changed-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.540 2 DEBUG nova.compute.manager [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Refreshing instance network info cache due to event network-changed-184f2379-8442-414e-bccb-6f5e5a314e72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.541 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:05:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:05:55 compute-0 nova_compute[259550]: 2025-10-07 14:05:55.714 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:05:55 compute-0 ceph-mon[74295]: pgmap v1161: 305 pgs: 305 active+clean; 326 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.110 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:57 compute-0 podman[283623]: 2025-10-07 14:05:57.119280917 +0000 UTC m=+0.106996040 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.123 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance network_info: |[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Refreshing network info cache for port 184f2379-8442-414e-bccb-6f5e5a314e72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.127 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start _get_guest_xml network_info=[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.130 2 WARNING nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.134 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.135 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.137 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.142 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 344 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 158 op/s
Oct 07 14:05:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760172047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.625 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.647 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.652 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.944 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.946 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.947 2 INFO nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Terminating instance
Oct 07 14:05:57 compute-0 nova_compute[259550]: 2025-10-07 14:05:57.948 2 DEBUG nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:05:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305683047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.144 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.146 2 DEBUG nova.virt.libvirt.vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:53Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.146 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.147 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.148 2 DEBUG nova.objects.instance [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.166 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <uuid>348c80b7-7f65-4300-9dab-6a333f1b2c74</uuid>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <name>instance-0000000e</name>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-353618528</nova:name>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:05:57</nova:creationTime>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <nova:port uuid="184f2379-8442-414e-bccb-6f5e5a314e72">
Oct 07 14:05:58 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <system>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="serial">348c80b7-7f65-4300-9dab-6a333f1b2c74</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="uuid">348c80b7-7f65-4300-9dab-6a333f1b2c74</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </system>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <os>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </os>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <features>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </features>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/348c80b7-7f65-4300-9dab-6a333f1b2c74_disk">
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config">
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </source>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:05:58 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:c8:34:23"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <target dev="tap184f2379-84"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/console.log" append="off"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <video>
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </video>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:05:58 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:05:58 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:05:58 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:05:58 compute-0 nova_compute[259550]: </domain>
Oct 07 14:05:58 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.166 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Preparing to wait for external event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.168 2 DEBUG nova.virt.libvirt.vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-144
2908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:53Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.169 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.169 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.170 2 DEBUG os_vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap184f2379-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap184f2379-84, col_values=(('external_ids', {'iface-id': '184f2379-8442-414e-bccb-6f5e5a314e72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:34:23', 'vm-uuid': '348c80b7-7f65-4300-9dab-6a333f1b2c74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 NetworkManager[44949]: <info>  [1759845958.1800] manager: (tap184f2379-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.189 2 INFO os_vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84')
Oct 07 14:05:58 compute-0 kernel: tapd4571f56-54 (unregistering): left promiscuous mode
Oct 07 14:05:58 compute-0 NetworkManager[44949]: <info>  [1759845958.3703] device (tapd4571f56-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 ovn_controller[151684]: 2025-10-07T14:05:58Z|00064|binding|INFO|Releasing lport d4571f56-54c6-4986-845c-cd57c4faadac from this chassis (sb_readonly=0)
Oct 07 14:05:58 compute-0 ovn_controller[151684]: 2025-10-07T14:05:58Z|00065|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac down in Southbound
Oct 07 14:05:58 compute-0 ovn_controller[151684]: 2025-10-07T14:05:58Z|00066|binding|INFO|Removing iface tapd4571f56-54 ovn-installed in OVS
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.411 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:1a:81 10.100.0.9'], port_security=['fa:16:3e:47:1a:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a2f7901e-6572-4162-b995-0c44fb69eab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54e39443887a407284ed98974d4e0771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b31c4f6-cd30-4c48-acc0-2bceb0f06bb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11ba191e-5394-4c4a-8b33-ca086e98b88d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d4571f56-54c6-4986-845c-cd57c4faadac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:05:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.412 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d4571f56-54c6-4986-845c-cd57c4faadac in datapath c11e448c-21ee-492b-8c01-cf2af1e6c4f4 unbound from our chassis
Oct 07 14:05:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.413 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:05:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.418 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[920eab32-22d6-41db-9cbe-540bde7682df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.419 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 namespace which is not needed anymore
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.426 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.427 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.427 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:c8:34:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.427 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Using config drive
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.449 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 07 14:05:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 14.831s CPU time.
Oct 07 14:05:58 compute-0 systemd-machined[214580]: Machine qemu-14-instance-0000000c terminated.
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.592 2 INFO nova.virt.libvirt.driver [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance destroyed successfully.
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.592 2 DEBUG nova.objects.instance [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'resources' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:05:58 compute-0 ceph-mon[74295]: pgmap v1162: 305 pgs: 305 active+clean; 344 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 158 op/s
Oct 07 14:05:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3760172047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/305683047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:05:58 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : haproxy version is 2.8.14-c23fe91
Oct 07 14:05:58 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : path to executable is /usr/sbin/haproxy
Oct 07 14:05:58 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [WARNING]  (283004) : Exiting Master process...
Oct 07 14:05:58 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [ALERT]    (283004) : Current worker (283006) exited with code 143 (Terminated)
Oct 07 14:05:58 compute-0 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [WARNING]  (283004) : All workers exited. Exiting... (0)
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.622 2 DEBUG nova.virt.libvirt.vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:34Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.622 2 DEBUG nova.network.os_vif_util [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.623 2 DEBUG nova.network.os_vif_util [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.624 2 DEBUG os_vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:05:58 compute-0 systemd[1]: libpod-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope: Deactivated successfully.
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4571f56-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:05:58 compute-0 podman[283754]: 2025-10-07 14:05:58.631276018 +0000 UTC m=+0.117762828 container died eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.636 2 INFO os_vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54')
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.762 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating config drive at /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.771 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox63kxhu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482-userdata-shm.mount: Deactivated successfully.
Oct 07 14:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cfefc9e73b43b0851f8beb7cc640de90c39f70e6d14855bf71f6c4d9005459-merged.mount: Deactivated successfully.
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.905 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox63kxhu" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.976 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:05:58 compute-0 nova_compute[259550]: 2025-10-07 14:05:58.980 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.018 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.019 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.023 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updated VIF entry in instance network info cache for port 184f2379-8442-414e-bccb-6f5e5a314e72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.024 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.040 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.041 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.111 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.112 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.119 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.120 2 INFO nova.compute.claims [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:05:59 compute-0 podman[283754]: 2025-10-07 14:05:59.185512101 +0000 UTC m=+0.671998891 container cleanup eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:05:59 compute-0 systemd[1]: libpod-conmon-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope: Deactivated successfully.
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.274 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:05:59 compute-0 ovn_controller[151684]: 2025-10-07T14:05:59Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:2c:66 10.100.0.10
Oct 07 14:05:59 compute-0 ovn_controller[151684]: 2025-10-07T14:05:59Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:2c:66 10.100.0.10
Oct 07 14:05:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 390 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Oct 07 14:05:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:05:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372964451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:05:59 compute-0 podman[283853]: 2025-10-07 14:05:59.751735174 +0000 UTC m=+0.543068035 container remove eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbc6438-2a76-499c-9575-408ae1959a48]: (4, ('Tue Oct  7 02:05:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 (eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482)\neef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482\nTue Oct  7 02:05:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 (eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482)\neef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.762 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83b4c3c5-6737-42fc-9db3-02b2cd98308f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.763 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc11e448c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:59 compute-0 kernel: tapc11e448c-20: left promiscuous mode
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.772 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.778 2 DEBUG nova.compute.provider_tree [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1df188d8-b363-4a86-963f-be3ed2f2f4eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.804 2 DEBUG nova.scheduler.client.report [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.821 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdda217c-fa7c-46a6-945f-93ba5839cccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.824 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca912246-2cd6-4ec3-bf64-8b6fc40e6aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.831 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.832 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.842 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[195d97d5-b8a8-40ad-9e2a-ad434dfd8783]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651727, 'reachable_time': 29525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283898, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.845 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:05:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.845 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a0a98e-8bb3-4cc0-9d9a-fcb08a65b109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:05:59 compute-0 systemd[1]: run-netns-ovnmeta\x2dc11e448c\x2d21ee\x2d492b\x2d8c01\x2dcf2af1e6c4f4.mount: Deactivated successfully.
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.878 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.878 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.901 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:05:59 compute-0 nova_compute[259550]: 2025-10-07 14:05:59.922 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.004 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.008 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.009 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating image(s)
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.030 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.067 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.097 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.101 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.137 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.162 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.163 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deleting local config drive /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config because it was imported into RBD.
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.172 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.173 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.175 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.176 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.204 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.209 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4b95692e-088d-452c-83b7-4c50df73b8fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:00 compute-0 kernel: tap184f2379-84: entered promiscuous mode
Oct 07 14:06:00 compute-0 NetworkManager[44949]: <info>  [1759845960.2396] manager: (tap184f2379-84): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Oct 07 14:06:00 compute-0 systemd-udevd[283717]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:00 compute-0 ovn_controller[151684]: 2025-10-07T14:06:00Z|00067|binding|INFO|Claiming lport 184f2379-8442-414e-bccb-6f5e5a314e72 for this chassis.
Oct 07 14:06:00 compute-0 ovn_controller[151684]: 2025-10-07T14:06:00Z|00068|binding|INFO|184f2379-8442-414e-bccb-6f5e5a314e72: Claiming fa:16:3e:c8:34:23 10.100.0.11
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.250 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:34:23 10.100.0.11'], port_security=['fa:16:3e:c8:34:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '348c80b7-7f65-4300-9dab-6a333f1b2c74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=184f2379-8442-414e-bccb-6f5e5a314e72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.252 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 184f2379-8442-414e-bccb-6f5e5a314e72 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.253 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:00 compute-0 NetworkManager[44949]: <info>  [1759845960.2624] device (tap184f2379-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:06:00 compute-0 NetworkManager[44949]: <info>  [1759845960.2635] device (tap184f2379-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 ovn_controller[151684]: 2025-10-07T14:06:00Z|00069|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 ovn-installed in OVS
Oct 07 14:06:00 compute-0 ovn_controller[151684]: 2025-10-07T14:06:00Z|00070|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 up in Southbound
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad76f724-79c7-4fad-9a1f-45a0c7db687e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 systemd-machined[214580]: New machine qemu-16-instance-0000000e.
Oct 07 14:06:00 compute-0 podman[283957]: 2025-10-07 14:06:00.309151522 +0000 UTC m=+0.113387481 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 07 14:06:00 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.316 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[669007ba-db5b-4c8d-8f3d-884ae6ecac03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.319 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19f6522d-d5f0-457f-968b-3bcdd85baa9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.352 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdd4a8f-78e8-43f6-9258-294eef27a92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f79e34e-0e36-4f6b-b854-f350b271f440]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284035, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.388 2 DEBUG nova.policy [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ff78293ed4e40f9954a0b0e6fca0caa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '711e531670b1460a923f2f91ce0f63db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.393 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b02da377-9a2b-4392-8567-eac392f4dfe8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284037, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284037, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.396 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.400 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.400 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.586 2 DEBUG nova.compute.manager [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.586 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG nova.compute.manager [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Processing event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.597 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4b95692e-088d-452c-83b7-4c50df73b8fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.663 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] resizing rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:06:00 compute-0 ceph-mon[74295]: pgmap v1163: 305 pgs: 305 active+clean; 390 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Oct 07 14:06:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2372964451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.741 2 INFO nova.virt.libvirt.driver [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deleting instance files /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5_del
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.743 2 INFO nova.virt.libvirt.driver [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deletion of /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5_del complete
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.804 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.804 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.806 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.812 2 DEBUG nova.objects.instance [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'migration_context' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.865 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.865 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Ensure instance console log exists: /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.867 2 INFO nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 2.92 seconds to destroy the instance on the hypervisor.
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.867 2 DEBUG oslo.service.loopingcall [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.868 2 DEBUG nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:06:00 compute-0 nova_compute[259550]: 2025-10-07 14:06:00.868 2 DEBUG nova.network.neutron [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.100 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Successfully created port: d06bd5a4-d9b7-4791-a387-d190eb1457f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.161 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.1604762, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.161 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Started (Lifecycle Event)
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.163 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.167 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.170 2 INFO nova.virt.libvirt.driver [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance spawned successfully.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.170 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.179 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.181 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.189 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.191 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.191 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.1606133, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Paused (Lifecycle Event)
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.241 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.246 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.166253, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.247 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Resumed (Lifecycle Event)
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.250 2 INFO nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 7.72 seconds to spawn the instance on the hypervisor.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.251 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.263 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.289 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.314 2 INFO nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 9.07 seconds to build instance.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.332 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 4.4 MiB/s wr, 107 op/s
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.579 2 DEBUG nova.network.neutron [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.596 2 INFO nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 0.73 seconds to deallocate network for instance.
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.635 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.636 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.762 2 DEBUG oslo_concurrency.processutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.827 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Successfully updated port: d06bd5a4-d9b7-4791-a387-d190eb1457f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.844 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.844 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:01 compute-0 nova_compute[259550]: 2025-10-07 14:06:01.845 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.157 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:06:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:06:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187989551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.278 2 DEBUG oslo_concurrency.processutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.283 2 DEBUG nova.compute.provider_tree [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.308 2 DEBUG nova.scheduler.client.report [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.329 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.363 2 INFO nova.scheduler.client.report [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Deleted allocations for instance a2f7901e-6572-4162-b995-0c44fb69eab5
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.442 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG nova.network.neutron [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:06:02 compute-0 ceph-mon[74295]: pgmap v1164: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 4.4 MiB/s wr, 107 op/s
Oct 07 14:06:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2187989551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.739 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.741 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.741 2 WARNING nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received unexpected event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with vm_state active and task_state None.
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.741 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-deleted-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:02 compute-0 nova_compute[259550]: 2025-10-07 14:06:02.953 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.010 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.010 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance network_info: |[{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.013 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start _get_guest_xml network_info=[{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.018 2 WARNING nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.023 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.024 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.027 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.027 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.030 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.033 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.339 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.341 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.341 2 WARNING nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received unexpected event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with vm_state deleted and task_state None.
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.342 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.342 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.343 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.343 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.344 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 97 op/s
Oct 07 14:06:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933433294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.512 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.540 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.544 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3933433294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082770503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:03 compute-0 nova_compute[259550]: 2025-10-07 14:06:03.998 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.000 2 DEBUG nova.virt.libvirt.vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:59Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.000 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.001 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.003 2 DEBUG nova.objects.instance [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.006 2 DEBUG nova.network.neutron [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.050 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <uuid>4b95692e-088d-452c-83b7-4c50df73b8fe</uuid>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <name>instance-0000000f</name>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-99112055</nova:name>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:06:03</nova:creationTime>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:user uuid="7ff78293ed4e40f9954a0b0e6fca0caa">tempest-FloatingIPsAssociationTestJSON-1021362371-project-member</nova:user>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:project uuid="711e531670b1460a923f2f91ce0f63db">tempest-FloatingIPsAssociationTestJSON-1021362371</nova:project>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <nova:port uuid="d06bd5a4-d9b7-4791-a387-d190eb1457f6">
Oct 07 14:06:04 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="serial">4b95692e-088d-452c-83b7-4c50df73b8fe</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="uuid">4b95692e-088d-452c-83b7-4c50df73b8fe</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4b95692e-088d-452c-83b7-4c50df73b8fe_disk">
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config">
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:65:f8:94"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <target dev="tapd06bd5a4-d9"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/console.log" append="off"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:06:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:06:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:06:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:06:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:06:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.050 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Preparing to wait for external event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.052 2 DEBUG nova.virt.libvirt.vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:59Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.053 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.053 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.054 2 DEBUG os_vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd06bd5a4-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd06bd5a4-d9, col_values=(('external_ids', {'iface-id': 'd06bd5a4-d9b7-4791-a387-d190eb1457f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:f8:94', 'vm-uuid': '4b95692e-088d-452c-83b7-4c50df73b8fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:04 compute-0 NetworkManager[44949]: <info>  [1759845964.0622] manager: (tapd06bd5a4-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG nova.compute.manager [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG nova.compute.manager [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] network_info to inject: |[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.070 2 INFO os_vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9')
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.134 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No VIF found with MAC fa:16:3e:65:f8:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.136 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Using config drive
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.161 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:04 compute-0 ceph-mon[74295]: pgmap v1165: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 97 op/s
Oct 07 14:06:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2082770503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.963 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating config drive at /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config
Oct 07 14:06:04 compute-0 nova_compute[259550]: 2025-10-07 14:06:04.967 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sw8hwmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.100 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sw8hwmj" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.128 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.132 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.201 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.202 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.236 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.294 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.295 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deleting local config drive /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config because it was imported into RBD.
Oct 07 14:06:05 compute-0 kernel: tapd06bd5a4-d9: entered promiscuous mode
Oct 07 14:06:05 compute-0 ovn_controller[151684]: 2025-10-07T14:06:05Z|00071|binding|INFO|Claiming lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 for this chassis.
Oct 07 14:06:05 compute-0 ovn_controller[151684]: 2025-10-07T14:06:05Z|00072|binding|INFO|d06bd5a4-d9b7-4791-a387-d190eb1457f6: Claiming fa:16:3e:65:f8:94 10.100.0.7
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.3674] manager: (tapd06bd5a4-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 07 14:06:05 compute-0 sudo[284298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.372 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f8:94 10.100.0.7'], port_security=['fa:16:3e:65:f8:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4b95692e-088d-452c-83b7-4c50df73b8fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d06bd5a4-d9b7-4791-a387-d190eb1457f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.373 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d06bd5a4-d9b7-4791-a387-d190eb1457f6 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 bound to our chassis
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803
Oct 07 14:06:05 compute-0 sudo[284298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:05 compute-0 ovn_controller[151684]: 2025-10-07T14:06:05Z|00073|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 ovn-installed in OVS
Oct 07 14:06:05 compute-0 sudo[284298]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:05 compute-0 ovn_controller[151684]: 2025-10-07T14:06:05Z|00074|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 up in Southbound
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be0c2cbc-e1e4-4e86-aeb1-3c845ae1f21e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.395 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80f412c9-c1 in ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.398 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80f412c9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a925b706-2451-4683-aff4-241926d5c99e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83c15ff3-3838-40e2-836b-a7346a5c2341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 systemd-udevd[284353]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.420 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95b6eb-9b53-4a22-8dc3-3250ff78b9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 systemd-machined[214580]: New machine qemu-17-instance-0000000f.
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.4379] device (tapd06bd5a4-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.4392] device (tapd06bd5a4-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:06:05 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc51c2ff-be27-45b7-81f4-005687a11e24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 sudo[284334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:06:05 compute-0 sudo[284334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:05 compute-0 sudo[284334]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 359 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 201 op/s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.499 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a56f62b0-fdf4-491c-8179-675647a40184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.5090] manager: (tap80f412c9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb7bd77-3875-4135-a9e4-7bd8aa33f4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 sudo[284369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:05 compute-0 sudo[284369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:05 compute-0 sudo[284369]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a6241e6c-69c1-4911-8c50-80f2d855e537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab2c3dd-802b-4f32-96e6-055c75e1d860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.6004] device (tap80f412c9-c0): carrier: link connected
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.607 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dc579c54-ecee-486d-b04a-44de5d783638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 sudo[284417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:06:05 compute-0 sudo[284417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a175ed11-2d25-41cf-8ba1-e917ed7cc663]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284442, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac76663-7a29-41f7-a9f7-09bfb0f7aae5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:6b79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654916, 'tstamp': 654916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284445, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.687 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a23ee53f-c26d-4410-9dd3-09cfe7a73a87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284446, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1531067-7dd6-432c-bda4-ef6a6deeafee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c184a-39f8-44e4-a069-4fe0db7e0a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:05 compute-0 NetworkManager[44949]: <info>  [1759845965.8102] manager: (tap80f412c9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 kernel: tap80f412c9-c0: entered promiscuous mode
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:05 compute-0 ovn_controller[151684]: 2025-10-07T14:06:05Z|00075|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.839 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de4f5b9c-e5e8-42ed-af69-f8841e8aebf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.840 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-80f412c9-c511-49ba-a8ca-ff830fcff803
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 80f412c9-c511-49ba-a8ca-ff830fcff803
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:06:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.841 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'env', 'PROCESS_TAG=haproxy-80f412c9-c511-49ba-a8ca-ff830fcff803', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80f412c9-c511-49ba-a8ca-ff830fcff803.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.983 2 DEBUG nova.compute.manager [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:05 compute-0 nova_compute[259550]: 2025-10-07 14:06:05.986 2 DEBUG nova.compute.manager [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Processing event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:06:06 compute-0 sudo[284417]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:06 compute-0 podman[284505]: 2025-10-07 14:06:06.245450058 +0000 UTC m=+0.055774010 container create 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3b14746a-cfb3-4025-afd3-9a834488c2cc does not exist
Oct 07 14:06:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0aa7d394-35cb-4107-bede-6b6793855552 does not exist
Oct 07 14:06:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 38ce0fde-2cef-45ee-8170-2c221306e192 does not exist
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:06:06 compute-0 systemd[1]: Started libpod-conmon-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope.
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:06:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:06:06 compute-0 podman[284505]: 2025-10-07 14:06:06.21557587 +0000 UTC m=+0.025899842 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:06:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed1157e3fd8b123979ce2884e8cf7ae250f676278229a741e499d63f106c27c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:06 compute-0 podman[284505]: 2025-10-07 14:06:06.343571151 +0000 UTC m=+0.153895133 container init 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:06:06 compute-0 podman[284505]: 2025-10-07 14:06:06.351241476 +0000 UTC m=+0.161565428 container start 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:06:06 compute-0 sudo[284538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:06 compute-0 sudo[284538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:06 compute-0 sudo[284538]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:06 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : New worker (284589) forked
Oct 07 14:06:06 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : Loading success.
Oct 07 14:06:06 compute-0 sudo[284597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:06:06 compute-0 sudo[284597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:06 compute-0 sudo[284597]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:06 compute-0 sudo[284628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:06 compute-0 sudo[284628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:06 compute-0 sudo[284628]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:06 compute-0 sudo[284653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:06:06 compute-0 sudo[284653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:06 compute-0 ceph-mon[74295]: pgmap v1166: 305 pgs: 305 active+clean; 359 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 201 op/s
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:06:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.924 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.925 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9238791, 4b95692e-088d-452c-83b7-4c50df73b8fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.925 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Started (Lifecycle Event)
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.932 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.936 2 INFO nova.virt.libvirt.driver [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance spawned successfully.
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.936 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.944 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.952 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.972 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 podman[284720]: 2025-10-07 14:06:06.972809449 +0000 UTC m=+0.060350274 container create a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.974 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.974 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.977 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.977 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9248953, 4b95692e-088d-452c-83b7-4c50df73b8fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:06 compute-0 nova_compute[259550]: 2025-10-07 14:06:06.978 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Paused (Lifecycle Event)
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.007 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.012 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9290807, 4b95692e-088d-452c-83b7-4c50df73b8fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.012 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Resumed (Lifecycle Event)
Oct 07 14:06:07 compute-0 systemd[1]: Started libpod-conmon-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope.
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.041 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:06.951654583 +0000 UTC m=+0.039195438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.044 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.052 2 INFO nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 7.05 seconds to spawn the instance on the hypervisor.
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.052 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.064 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:07.078361599 +0000 UTC m=+0.165902444 container init a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:07.086211079 +0000 UTC m=+0.173751904 container start a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:07.089637111 +0000 UTC m=+0.177177966 container attach a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:06:07 compute-0 kind_leavitt[284736]: 167 167
Oct 07 14:06:07 compute-0 systemd[1]: libpod-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope: Deactivated successfully.
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:07.09370877 +0000 UTC m=+0.181249595 container died a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.106 2 INFO nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 8.02 seconds to build instance.
Oct 07 14:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-822abaf9521394065b8c6437ea368001bb6118d023ef5217b6da980ab7b18128-merged.mount: Deactivated successfully.
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.125 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:07 compute-0 podman[284720]: 2025-10-07 14:06:07.138088356 +0000 UTC m=+0.225629181 container remove a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:06:07 compute-0 systemd[1]: libpod-conmon-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope: Deactivated successfully.
Oct 07 14:06:07 compute-0 ovn_controller[151684]: 2025-10-07T14:06:07Z|00076|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:06:07 compute-0 ovn_controller[151684]: 2025-10-07T14:06:07Z|00077|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:07 compute-0 podman[284760]: 2025-10-07 14:06:07.360179311 +0000 UTC m=+0.047668885 container create 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:06:07 compute-0 ovn_controller[151684]: 2025-10-07T14:06:07Z|00078|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:06:07 compute-0 ovn_controller[151684]: 2025-10-07T14:06:07Z|00079|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct 07 14:06:07 compute-0 nova_compute[259550]: 2025-10-07 14:06:07.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:07 compute-0 systemd[1]: Started libpod-conmon-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope.
Oct 07 14:06:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:07 compute-0 podman[284760]: 2025-10-07 14:06:07.341169654 +0000 UTC m=+0.028659248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:07 compute-0 podman[284760]: 2025-10-07 14:06:07.447446294 +0000 UTC m=+0.134935898 container init 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:06:07 compute-0 podman[284760]: 2025-10-07 14:06:07.454721498 +0000 UTC m=+0.142211062 container start 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:06:07 compute-0 podman[284760]: 2025-10-07 14:06:07.458704315 +0000 UTC m=+0.146193919 container attach 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:06:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 219 op/s
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.386 2 DEBUG nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.387 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.388 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.388 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.389 2 DEBUG nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:08 compute-0 nova_compute[259550]: 2025-10-07 14:06:08.389 2 WARNING nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received unexpected event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with vm_state active and task_state None.
Oct 07 14:06:08 compute-0 confident_northcutt[284777]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:06:08 compute-0 confident_northcutt[284777]: --> relative data size: 1.0
Oct 07 14:06:08 compute-0 confident_northcutt[284777]: --> All data devices are unavailable
Oct 07 14:06:08 compute-0 systemd[1]: libpod-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Deactivated successfully.
Oct 07 14:06:08 compute-0 systemd[1]: libpod-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Consumed 1.112s CPU time.
Oct 07 14:06:08 compute-0 podman[284760]: 2025-10-07 14:06:08.650825666 +0000 UTC m=+1.338315250 container died 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156-merged.mount: Deactivated successfully.
Oct 07 14:06:08 compute-0 ceph-mon[74295]: pgmap v1167: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 219 op/s
Oct 07 14:06:08 compute-0 podman[284760]: 2025-10-07 14:06:08.889808413 +0000 UTC m=+1.577298017 container remove 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:08 compute-0 systemd[1]: libpod-conmon-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Deactivated successfully.
Oct 07 14:06:08 compute-0 sudo[284653]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:09 compute-0 sudo[284820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:09 compute-0 sudo[284820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:09 compute-0 sudo[284820]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:09 compute-0 sudo[284845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:06:09 compute-0 sudo[284845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:09 compute-0 sudo[284845]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:09 compute-0 sudo[284870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:09 compute-0 sudo[284870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:09 compute-0 sudo[284870]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.180 2 INFO nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Rebuilding instance
Oct 07 14:06:09 compute-0 sudo[284895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:06:09 compute-0 sudo[284895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 257 op/s
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.552134365 +0000 UTC m=+0.039883667 container create 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.589 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:09 compute-0 systemd[1]: Started libpod-conmon-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope.
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.606 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.531999557 +0000 UTC m=+0.019748869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.633340645 +0000 UTC m=+0.121089967 container init 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.641154114 +0000 UTC m=+0.128903416 container start 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.644870193 +0000 UTC m=+0.132619525 container attach 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:06:09 compute-0 optimistic_payne[284975]: 167 167
Oct 07 14:06:09 compute-0 systemd[1]: libpod-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope: Deactivated successfully.
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.647952675 +0000 UTC m=+0.135701977 container died 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6878ab2c1a541dd565a06ec8d1d827074ab2d60339641aa4da868692dd79fca2-merged.mount: Deactivated successfully.
Oct 07 14:06:09 compute-0 podman[284959]: 2025-10-07 14:06:09.685117859 +0000 UTC m=+0.172867151 container remove 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:06:09 compute-0 systemd[1]: libpod-conmon-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope: Deactivated successfully.
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.825 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_requests' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.843 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.855 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.869 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:09 compute-0 ceph-mon[74295]: pgmap v1168: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 257 op/s
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.880 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:06:09 compute-0 nova_compute[259550]: 2025-10-07 14:06:09.885 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:06:09 compute-0 podman[284998]: 2025-10-07 14:06:09.884153928 +0000 UTC m=+0.033530827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:10 compute-0 podman[284998]: 2025-10-07 14:06:10.011099842 +0000 UTC m=+0.160476711 container create 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:10 compute-0 systemd[1]: Started libpod-conmon-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope.
Oct 07 14:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:10 compute-0 podman[284998]: 2025-10-07 14:06:10.265689836 +0000 UTC m=+0.415066725 container init 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:06:10 compute-0 podman[284998]: 2025-10-07 14:06:10.272284612 +0000 UTC m=+0.421661491 container start 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:10 compute-0 podman[284998]: 2025-10-07 14:06:10.364075455 +0000 UTC m=+0.513452624 container attach 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:06:10 compute-0 nova_compute[259550]: 2025-10-07 14:06:10.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:11 compute-0 laughing_haslett[285015]: {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     "0": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "devices": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "/dev/loop3"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             ],
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_name": "ceph_lv0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_size": "21470642176",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "name": "ceph_lv0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "tags": {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_name": "ceph",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.crush_device_class": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.encrypted": "0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_id": "0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.vdo": "0"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             },
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "vg_name": "ceph_vg0"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         }
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     ],
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     "1": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "devices": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "/dev/loop4"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             ],
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_name": "ceph_lv1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_size": "21470642176",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "name": "ceph_lv1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "tags": {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_name": "ceph",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.crush_device_class": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.encrypted": "0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_id": "1",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.vdo": "0"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             },
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "vg_name": "ceph_vg1"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         }
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     ],
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     "2": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "devices": [
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "/dev/loop5"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             ],
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_name": "ceph_lv2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_size": "21470642176",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "name": "ceph_lv2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "tags": {
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.cluster_name": "ceph",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.crush_device_class": "",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.encrypted": "0",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osd_id": "2",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:                 "ceph.vdo": "0"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             },
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "type": "block",
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:             "vg_name": "ceph_vg2"
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:         }
Oct 07 14:06:11 compute-0 laughing_haslett[285015]:     ]
Oct 07 14:06:11 compute-0 laughing_haslett[285015]: }
Oct 07 14:06:11 compute-0 systemd[1]: libpod-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope: Deactivated successfully.
Oct 07 14:06:11 compute-0 conmon[285015]: conmon 3c1bd9f8eee22a58aa6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope/container/memory.events
Oct 07 14:06:11 compute-0 podman[284998]: 2025-10-07 14:06:11.160041498 +0000 UTC m=+1.309418377 container died 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31-merged.mount: Deactivated successfully.
Oct 07 14:06:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 244 op/s
Oct 07 14:06:11 compute-0 podman[284998]: 2025-10-07 14:06:11.818316433 +0000 UTC m=+1.967693302 container remove 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:06:11 compute-0 sudo[284895]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:11 compute-0 systemd[1]: libpod-conmon-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope: Deactivated successfully.
Oct 07 14:06:11 compute-0 sudo[285035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:11 compute-0 sudo[285035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:11 compute-0 sudo[285035]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:12 compute-0 sudo[285060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:06:12 compute-0 sudo[285060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:12 compute-0 sudo[285060]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:12 compute-0 sudo[285085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:12 compute-0 sudo[285085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:12 compute-0 sudo[285085]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:12 compute-0 sudo[285110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:06:12 compute-0 sudo[285110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.531056532 +0000 UTC m=+0.087018198 container create 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.469451465 +0000 UTC m=+0.025413151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:12 compute-0 systemd[1]: Started libpod-conmon-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope.
Oct 07 14:06:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.635966696 +0000 UTC m=+0.191928382 container init 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.645738727 +0000 UTC m=+0.201700383 container start 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:06:12 compute-0 awesome_dewdney[285191]: 167 167
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.650196716 +0000 UTC m=+0.206158382 container attach 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:06:12 compute-0 systemd[1]: libpod-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope: Deactivated successfully.
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.652724172 +0000 UTC m=+0.208685838 container died 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:06:12 compute-0 ceph-mon[74295]: pgmap v1169: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 244 op/s
Oct 07 14:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16af6d8f7ffe50bf46a12b054599db0fb831469c6169c042f6597df044996bf-merged.mount: Deactivated successfully.
Oct 07 14:06:12 compute-0 podman[285177]: 2025-10-07 14:06:12.71210538 +0000 UTC m=+0.268067046 container remove 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:06:12 compute-0 systemd[1]: libpod-conmon-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope: Deactivated successfully.
Oct 07 14:06:12 compute-0 podman[285215]: 2025-10-07 14:06:12.932888701 +0000 UTC m=+0.044192382 container create 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:06:13 compute-0 systemd[1]: Started libpod-conmon-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope.
Oct 07 14:06:13 compute-0 podman[285215]: 2025-10-07 14:06:12.916166414 +0000 UTC m=+0.027470125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:06:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:06:13 compute-0 podman[285215]: 2025-10-07 14:06:13.077902996 +0000 UTC m=+0.189206707 container init 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:06:13 compute-0 podman[285215]: 2025-10-07 14:06:13.087750219 +0000 UTC m=+0.199053900 container start 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:06:13 compute-0 podman[285215]: 2025-10-07 14:06:13.111210867 +0000 UTC m=+0.222514578 container attach 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:06:13 compute-0 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct 07 14:06:13 compute-0 NetworkManager[44949]: <info>  [1759845973.1570] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:06:13 compute-0 ovn_controller[151684]: 2025-10-07T14:06:13Z|00080|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct 07 14:06:13 compute-0 ovn_controller[151684]: 2025-10-07T14:06:13Z|00081|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 ovn_controller[151684]: 2025-10-07T14:06:13Z|00082|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.181 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.183 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.186 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.214 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d59a16-ba78-4a77-b16d-6a2035899ceb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 07 14:06:13 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Consumed 16.174s CPU time.
Oct 07 14:06:13 compute-0 systemd-machined[214580]: Machine qemu-12-instance-0000000a terminated.
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.262 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6be01c2f-7309-40b1-8944-6841494e7bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.265 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af9496da-8cef-43a1-8627-92e2662857d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.295 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[67b394a1-bc97-4144-9f06-3d2a7d56c5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.317 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2a9765-e65c-4c05-8f7b-37c8ae7884c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285249, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be70bf88-78f9-4f59-8277-fda9a28248f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285250, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285250, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.341 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.348 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.470 2 DEBUG nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.471 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.472 2 WARNING nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state rebuilding.
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.479 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.479 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.502 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.560 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.560 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.567 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.568 2 INFO nova.compute.claims [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.589 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845958.5887623, a2f7901e-6572-4162-b995-0c44fb69eab5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.590 2 INFO nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Stopped (Lifecycle Event)
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.631 2 DEBUG nova.compute.manager [None req-6c7f47d3-8e2f-4608-b9dc-c4980b99bdef - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.914 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.942 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance shutdown successfully after 4 seconds.
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.949 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.963 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.964 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:08Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.965 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.966 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.967 2 DEBUG os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:13 compute-0 nova_compute[259550]: 2025-10-07 14:06:13.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.023 2 INFO os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]: {
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_id": 2,
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "type": "bluestore"
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     },
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_id": 1,
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "type": "bluestore"
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     },
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_id": 0,
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:         "type": "bluestore"
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]:     }
Oct 07 14:06:14 compute-0 inspiring_kalam[285233]: }
Oct 07 14:06:14 compute-0 systemd[1]: libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Deactivated successfully.
Oct 07 14:06:14 compute-0 systemd[1]: libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Consumed 1.065s CPU time.
Oct 07 14:06:14 compute-0 conmon[285233]: conmon 80a161c7d4fc95a6c8aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope/container/memory.events
Oct 07 14:06:14 compute-0 podman[285215]: 2025-10-07 14:06:14.20500125 +0000 UTC m=+1.316304931 container died 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb-merged.mount: Deactivated successfully.
Oct 07 14:06:14 compute-0 podman[285215]: 2025-10-07 14:06:14.272430842 +0000 UTC m=+1.383734523 container remove 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:06:14 compute-0 systemd[1]: libpod-conmon-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Deactivated successfully.
Oct 07 14:06:14 compute-0 sudo[285110]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:06:14 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:06:14 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:14 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f0fde30d-d18a-44f1-8326-c852476af9e3 does not exist
Oct 07 14:06:14 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 936ebc31-55eb-4a34-ad0c-d95c4a228ac3 does not exist
Oct 07 14:06:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:06:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216741884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.415 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.422 2 DEBUG nova.compute.provider_tree [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.439 2 DEBUG nova.scheduler.client.report [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:06:14 compute-0 sudo[285342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:06:14 compute-0 sudo[285342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:14 compute-0 sudo[285342]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.463 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.464 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.518 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.519 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:06:14 compute-0 sudo[285369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:06:14 compute-0 sudo[285369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:06:14 compute-0 sudo[285369]: pam_unix(sudo:session): session closed for user root
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.542 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.559 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.642 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.643 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.644 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating image(s)
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.668 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.691 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:14 compute-0 ceph-mon[74295]: pgmap v1170: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 07 14:06:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:06:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2216741884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.715 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.721 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:14 compute-0 ovn_controller[151684]: 2025-10-07T14:06:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:34:23 10.100.0.11
Oct 07 14:06:14 compute-0 ovn_controller[151684]: 2025-10-07T14:06:14Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:34:23 10.100.0.11
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.755 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.756 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.786 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.786 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.787 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.787 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.811 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.815 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 115674ad-2273-4c42-b9ae-d380c2c005d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.966 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:06:14 compute-0 nova_compute[259550]: 2025-10-07 14:06:14.967 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.001 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.037 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.062 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.066 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.103 2 DEBUG nova.policy [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ff78293ed4e40f9954a0b0e6fca0caa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '711e531670b1460a923f2f91ce0f63db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.141 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.143 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.144 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.144 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.168 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.172 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.440 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 115674ad-2273-4c42-b9ae-d380c2c005d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 392 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.520 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] resizing rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:06:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.886 2 DEBUG nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.888 2 DEBUG nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:15 compute-0 nova_compute[259550]: 2025-10-07 14:06:15.888 2 WARNING nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state rebuild_spawning.
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.174 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Successfully created port: e6624198-96a3-41f5-b3e6-217be5426796 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.235 2 DEBUG nova.objects.instance [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'migration_context' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.251 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Ensure instance console log exists: /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.253 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.291 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.349 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.595 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.598 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.603 2 WARNING nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.607 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.608 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.610 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:16 compute-0 nova_compute[259550]: 2025-10-07 14:06:16.627 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:16 compute-0 ceph-mon[74295]: pgmap v1171: 305 pgs: 305 active+clean; 392 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Oct 07 14:06:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556732845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.093 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.142 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.149 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 394 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 175 op/s
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.591 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Successfully updated port: e6624198-96a3-41f5-b3e6-217be5426796 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:06:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265620750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.617 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.618 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.618 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.625 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.627 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.627 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.628 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.631 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <name>instance-0000000a</name>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:06:16</nova:creationTime>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct 07 14:06:17 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6c:03:d4"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <target dev="tap9b25db0b-24"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:06:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:06:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:06:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:06:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:06:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Preparing to wait for external event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.634 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.634 2 DEBUG os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:17 compute-0 NetworkManager[44949]: <info>  [1759845977.6475] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.657 2 INFO os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.753 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.754 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.754 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.755 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive
Oct 07 14:06:17 compute-0 podman[285790]: 2025-10-07 14:06:17.769789874 +0000 UTC m=+0.068322787 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:06:17 compute-0 podman[285791]: 2025-10-07 14:06:17.787746554 +0000 UTC m=+0.086010639 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.793 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.801 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.825 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:17 compute-0 nova_compute[259550]: 2025-10-07 14:06:17.847 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'keypairs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3556732845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2265620750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.121 2 DEBUG nova.compute.manager [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.121 2 DEBUG nova.compute.manager [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.122 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.732 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.737 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaohkwte execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.872 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaohkwte" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:18 compute-0 ceph-mon[74295]: pgmap v1172: 305 pgs: 305 active+clean; 394 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 175 op/s
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.903 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:18 compute-0 nova_compute[259550]: 2025-10-07 14:06:18.907 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 408 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.7 MiB/s wr, 226 op/s
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.761 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.762 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.
Oct 07 14:06:19 compute-0 NetworkManager[44949]: <info>  [1759845979.8191] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 07 14:06:19 compute-0 kernel: tap9b25db0b-24: entered promiscuous mode
Oct 07 14:06:19 compute-0 ovn_controller[151684]: 2025-10-07T14:06:19Z|00083|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct 07 14:06:19 compute-0 ovn_controller[151684]: 2025-10-07T14:06:19Z|00084|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.841 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.843 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.845 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:19 compute-0 ovn_controller[151684]: 2025-10-07T14:06:19Z|00085|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct 07 14:06:19 compute-0 ovn_controller[151684]: 2025-10-07T14:06:19Z|00086|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.872 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[217299ff-39af-4c8b-a0f6-0e6c81809947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:19 compute-0 systemd-udevd[285904]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:19 compute-0 systemd-machined[214580]: New machine qemu-18-instance-0000000a.
Oct 07 14:06:19 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-0000000a.
Oct 07 14:06:19 compute-0 NetworkManager[44949]: <info>  [1759845979.8947] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:06:19 compute-0 NetworkManager[44949]: <info>  [1759845979.8955] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.914 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d02f829f-893e-405c-89f7-18cf0893fe31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.918 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e10fab5b-447b-4336-bea4-ca5be13db5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.957 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[569b96d5-3c9e-45f8-9188-7fe5a1083206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:19 compute-0 ceph-mon[74295]: pgmap v1173: 305 pgs: 305 active+clean; 408 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.7 MiB/s wr, 226 op/s
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.980 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.980 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aff6e865-4eff-4a53-98f5-aca0b8d4fd8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 14, 'rx_bytes': 1126, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 14, 'rx_bytes': 1126, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285916, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:19 compute-0 nova_compute[259550]: 2025-10-07 14:06:19.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9be7a4e4-3ad8-46a2-ae00-9e15b73c6c02]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285918, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285918, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.006 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.014 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.015 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.016 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance network_info: |[{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.016 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.017 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.017 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.020 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start _get_guest_xml network_info=[{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.027 2 WARNING nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.033 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.034 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.040 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.042 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.042 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.048 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:20 compute-0 ovn_controller[151684]: 2025-10-07T14:06:20Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:f8:94 10.100.0.7
Oct 07 14:06:20 compute-0 ovn_controller[151684]: 2025-10-07T14:06:20Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:f8:94 10.100.0.7
Oct 07 14:06:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319170676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.525 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.562 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.568 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:20 compute-0 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:06:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732484885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.060 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.062 2 DEBUG nova.virt.libvirt.vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.063 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.064 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.065 2 DEBUG nova.objects.instance [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'pci_devices' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3319170676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.085 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <uuid>115674ad-2273-4c42-b9ae-d380c2c005d6</uuid>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <name>instance-00000010</name>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-279731597</nova:name>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:06:20</nova:creationTime>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:user uuid="7ff78293ed4e40f9954a0b0e6fca0caa">tempest-FloatingIPsAssociationTestJSON-1021362371-project-member</nova:user>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:project uuid="711e531670b1460a923f2f91ce0f63db">tempest-FloatingIPsAssociationTestJSON-1021362371</nova:project>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <nova:port uuid="e6624198-96a3-41f5-b3e6-217be5426796">
Oct 07 14:06:21 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <system>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="serial">115674ad-2273-4c42-b9ae-d380c2c005d6</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="uuid">115674ad-2273-4c42-b9ae-d380c2c005d6</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </system>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <os>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </os>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <features>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </features>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/115674ad-2273-4c42-b9ae-d380c2c005d6_disk">
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config">
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:89:da:ba"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <target dev="tape6624198-96"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/console.log" append="off"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <video>
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </video>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:06:21 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:06:21 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:06:21 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:06:21 compute-0 nova_compute[259550]: </domain>
Oct 07 14:06:21 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Preparing to wait for external event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.088 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.089 2 DEBUG nova.virt.libvirt.vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.089 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.090 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.091 2 DEBUG os_vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6624198-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6624198-96, col_values=(('external_ids', {'iface-id': 'e6624198-96a3-41f5-b3e6-217be5426796', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:da:ba', 'vm-uuid': '115674ad-2273-4c42-b9ae-d380c2c005d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:21 compute-0 NetworkManager[44949]: <info>  [1759845981.1047] manager: (tape6624198-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.112 2 INFO os_vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96')
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.251 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.252 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.253 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No VIF found with MAC fa:16:3e:89:da:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.254 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Using config drive
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.285 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 7.2 MiB/s wr, 220 op/s
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.538 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for ddf09c33-d956-404b-a5d8-44a3727f9a3b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.539 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845981.5370498, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.539 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.557 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.563 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845981.5375614, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.563 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Paused (Lifecycle Event)
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.628 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.663 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.795 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating config drive at /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.820 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeipfy_sh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.951 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeipfy_sh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.978 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:21 compute-0 nova_compute[259550]: 2025-10-07 14:06:21.983 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2732484885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:22 compute-0 ceph-mon[74295]: pgmap v1174: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 7.2 MiB/s wr, 220 op/s
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.285 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.286 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deleting local config drive /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config because it was imported into RBD.
Oct 07 14:06:22 compute-0 kernel: tape6624198-96: entered promiscuous mode
Oct 07 14:06:22 compute-0 systemd-udevd[285908]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:22 compute-0 NetworkManager[44949]: <info>  [1759845982.3410] manager: (tape6624198-96): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 07 14:06:22 compute-0 ovn_controller[151684]: 2025-10-07T14:06:22Z|00087|binding|INFO|Claiming lport e6624198-96a3-41f5-b3e6-217be5426796 for this chassis.
Oct 07 14:06:22 compute-0 ovn_controller[151684]: 2025-10-07T14:06:22Z|00088|binding|INFO|e6624198-96a3-41f5-b3e6-217be5426796: Claiming fa:16:3e:89:da:ba 10.100.0.9
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.350 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:da:ba 10.100.0.9'], port_security=['fa:16:3e:89:da:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '115674ad-2273-4c42-b9ae-d380c2c005d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e6624198-96a3-41f5-b3e6-217be5426796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.351 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e6624198-96a3-41f5-b3e6-217be5426796 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 bound to our chassis
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.352 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803
Oct 07 14:06:22 compute-0 NetworkManager[44949]: <info>  [1759845982.3573] device (tape6624198-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:06:22 compute-0 NetworkManager[44949]: <info>  [1759845982.3584] device (tape6624198-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:06:22 compute-0 ovn_controller[151684]: 2025-10-07T14:06:22Z|00089|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 ovn-installed in OVS
Oct 07 14:06:22 compute-0 ovn_controller[151684]: 2025-10-07T14:06:22Z|00090|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 up in Southbound
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b34e63fe-6540-4cd1-9bec-54ee8ca3a2d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:22 compute-0 systemd-machined[214580]: New machine qemu-19-instance-00000010.
Oct 07 14:06:22 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000010.
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.414 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f29a12ba-0d9a-4979-b56f-9a10d57bbad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.418 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[827c26f3-66b0-42d0-8ffd-81c0b48640cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.456 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b983360d-2a98-4e1e-91ce-93e16dee275d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.476 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45bf31-370a-4dd9-ada8-30d84d77b5a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286108, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d53241d8-5227-45cc-87be-c964d236c371]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654932, 'tstamp': 654932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286110, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654936, 'tstamp': 654936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286110, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.498 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.502 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.502 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.503 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.503 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:06:22
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'images', 'backups', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.703 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.704 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.718 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:06:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.921 2 DEBUG nova.compute.manager [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.922 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.922 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.923 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.923 2 DEBUG nova.compute.manager [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Processing event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:22 compute-0 nova_compute[259550]: 2025-10-07 14:06:22.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.150 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.150 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 7.2 MiB/s wr, 185 op/s
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.516 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5160422, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.517 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Started (Lifecycle Event)
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.519 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.523 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.526 2 INFO nova.virt.libvirt.driver [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance spawned successfully.
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.527 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.599 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.693 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.694 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5169919, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.694 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Paused (Lifecycle Event)
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.701 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.733 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.737 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5223353, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.737 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Resumed (Lifecycle Event)
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.803 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.806 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.820 2 INFO nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 9.18 seconds to spawn the instance on the hypervisor.
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.821 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.860 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:06:23 compute-0 nova_compute[259550]: 2025-10-07 14:06:23.939 2 INFO nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 10.39 seconds to build instance.
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.124 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.207 2 DEBUG nova.compute.manager [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.209 2 DEBUG nova.compute.manager [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Processing event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.209 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.210 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.215 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845984.2147563, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.218 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.222 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.222 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.318 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.322 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.325 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.326 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.326 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.327 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.327 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.328 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.380 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.500 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.692 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:24 compute-0 ceph-mon[74295]: pgmap v1175: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 7.2 MiB/s wr, 185 op/s
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:24 compute-0 nova_compute[259550]: 2025-10-07 14:06:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.339 2 DEBUG nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.340 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.341 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.341 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.342 2 DEBUG nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.342 2 WARNING nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received unexpected event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with vm_state active and task_state None.
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 449 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct 07 14:06:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:06:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833656625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.787 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.788 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.795 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.795 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.800 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.801 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.807 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.808 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.813 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.813 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.818 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 nova_compute[259550]: 2025-10-07 14:06:25.818 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:06:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/833656625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:26 compute-0 nova_compute[259550]: 2025-10-07 14:06:26.096 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:06:26 compute-0 nova_compute[259550]: 2025-10-07 14:06:26.100 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3710MB free_disk=59.7696533203125GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:06:26 compute-0 nova_compute[259550]: 2025-10-07 14:06:26.100 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:26 compute-0 nova_compute[259550]: 2025-10-07 14:06:26.101 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:26 compute-0 nova_compute[259550]: 2025-10-07 14:06:26.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:26 compute-0 ceph-mon[74295]: pgmap v1176: 305 pgs: 305 active+clean; 449 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.371 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ddf09c33-d956-404b-a5d8-44a3727f9a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.372 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.372 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 74094438-0995-4031-9943-cc85a5ef4f57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 348c80b7-7f65-4300-9dab-6a333f1b2c74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4b95692e-088d-452c-83b7-4c50df73b8fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 115674ad-2273-4c42-b9ae-d380c2c005d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.374 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:06:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 451 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Oct 07 14:06:27 compute-0 nova_compute[259550]: 2025-10-07 14:06:27.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:27 compute-0 ceph-mon[74295]: pgmap v1177: 305 pgs: 305 active+clean; 451 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Oct 07 14:06:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:06:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664003288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:28 compute-0 nova_compute[259550]: 2025-10-07 14:06:28.071 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:28 compute-0 nova_compute[259550]: 2025-10-07 14:06:28.080 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:06:28 compute-0 podman[286197]: 2025-10-07 14:06:28.129124104 +0000 UTC m=+0.111008658 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:06:28 compute-0 nova_compute[259550]: 2025-10-07 14:06:28.339 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:06:28 compute-0 nova_compute[259550]: 2025-10-07 14:06:28.846 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:06:28 compute-0 nova_compute[259550]: 2025-10-07 14:06:28.846 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/664003288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:29 compute-0 NetworkManager[44949]: <info>  [1759845989.0780] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 07 14:06:29 compute-0 NetworkManager[44949]: <info>  [1759845989.0788] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:29 compute-0 ovn_controller[151684]: 2025-10-07T14:06:29Z|00091|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:06:29 compute-0 ovn_controller[151684]: 2025-10-07T14:06:29Z|00092|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 281 op/s
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.895 2 DEBUG nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.895 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:29 compute-0 nova_compute[259550]: 2025-10-07 14:06:29.896 2 WARNING nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state None.
Oct 07 14:06:30 compute-0 nova_compute[259550]: 2025-10-07 14:06:30.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:30 compute-0 nova_compute[259550]: 2025-10-07 14:06:30.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:30 compute-0 nova_compute[259550]: 2025-10-07 14:06:30.085 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:06:30 compute-0 nova_compute[259550]: 2025-10-07 14:06:30.085 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:30 compute-0 ceph-mon[74295]: pgmap v1178: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 281 op/s
Oct 07 14:06:30 compute-0 nova_compute[259550]: 2025-10-07 14:06:30.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:31 compute-0 podman[286225]: 2025-10-07 14:06:31.072374746 +0000 UTC m=+0.057640061 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:06:31 compute-0 nova_compute[259550]: 2025-10-07 14:06:31.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct 07 14:06:31 compute-0 nova_compute[259550]: 2025-10-07 14:06:31.975 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003729976557921469 of space, bias 1.0, pg target 1.1189929673764407 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG nova.compute.manager [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG nova.compute.manager [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.297 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:32 compute-0 nova_compute[259550]: 2025-10-07 14:06:32.424 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:06:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:06:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:06:32 compute-0 ceph-mon[74295]: pgmap v1179: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct 07 14:06:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:06:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:06:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 705 KiB/s wr, 173 op/s
Oct 07 14:06:34 compute-0 ceph-mon[74295]: pgmap v1180: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 705 KiB/s wr, 173 op/s
Oct 07 14:06:35 compute-0 nova_compute[259550]: 2025-10-07 14:06:35.275 2 INFO nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Rebuilding instance
Oct 07 14:06:35 compute-0 nova_compute[259550]: 2025-10-07 14:06:35.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:35 compute-0 nova_compute[259550]: 2025-10-07 14:06:35.471 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 714 KiB/s wr, 175 op/s
Oct 07 14:06:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:35 compute-0 nova_compute[259550]: 2025-10-07 14:06:35.715 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:35 compute-0 nova_compute[259550]: 2025-10-07 14:06:35.934 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_requests' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.132 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.294 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.493 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.677 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.678 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.701 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.707 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:06:36 compute-0 nova_compute[259550]: 2025-10-07 14:06:36.838 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:37 compute-0 ceph-mon[74295]: pgmap v1181: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 714 KiB/s wr, 175 op/s
Oct 07 14:06:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 92 KiB/s wr, 147 op/s
Oct 07 14:06:38 compute-0 ceph-mon[74295]: pgmap v1182: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 92 KiB/s wr, 147 op/s
Oct 07 14:06:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 07 14:06:39 compute-0 nova_compute[259550]: 2025-10-07 14:06:39.229 2 DEBUG nova.compute.manager [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:39 compute-0 nova_compute[259550]: 2025-10-07 14:06:39.229 2 DEBUG nova.compute.manager [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:39 compute-0 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:39 compute-0 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:39 compute-0 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 458 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 758 KiB/s wr, 101 op/s
Oct 07 14:06:40 compute-0 ceph-mon[74295]: pgmap v1183: 305 pgs: 305 active+clean; 458 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 758 KiB/s wr, 101 op/s
Oct 07 14:06:40 compute-0 nova_compute[259550]: 2025-10-07 14:06:40.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:41 compute-0 nova_compute[259550]: 2025-10-07 14:06:41.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 07 14:06:41 compute-0 nova_compute[259550]: 2025-10-07 14:06:41.684 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:41 compute-0 nova_compute[259550]: 2025-10-07 14:06:41.685 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:41 compute-0 nova_compute[259550]: 2025-10-07 14:06:41.890 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:42 compute-0 ovn_controller[151684]: 2025-10-07T14:06:42Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:da:ba 10.100.0.9
Oct 07 14:06:42 compute-0 ovn_controller[151684]: 2025-10-07T14:06:42Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:da:ba 10.100.0.9
Oct 07 14:06:42 compute-0 nova_compute[259550]: 2025-10-07 14:06:42.374 2 DEBUG nova.compute.manager [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:42 compute-0 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG nova.compute.manager [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:42 compute-0 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:42 compute-0 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:42 compute-0 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:42 compute-0 ceph-mon[74295]: pgmap v1184: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 07 14:06:43 compute-0 ovn_controller[151684]: 2025-10-07T14:06:43Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:06:43 compute-0 ovn_controller[151684]: 2025-10-07T14:06:43Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:06:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 07 14:06:43 compute-0 nova_compute[259550]: 2025-10-07 14:06:43.837 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:43 compute-0 nova_compute[259550]: 2025-10-07 14:06:43.838 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:43 compute-0 nova_compute[259550]: 2025-10-07 14:06:43.856 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:44 compute-0 ceph-mon[74295]: pgmap v1185: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 07 14:06:45 compute-0 nova_compute[259550]: 2025-10-07 14:06:45.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 493 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 3.7 MiB/s wr, 96 op/s
Oct 07 14:06:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:46 compute-0 ceph-mon[74295]: pgmap v1186: 305 pgs: 305 active+clean; 493 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 3.7 MiB/s wr, 96 op/s
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.763 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.967 2 DEBUG nova.compute.manager [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.967 2 DEBUG nova.compute.manager [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:46 compute-0 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 507 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.771 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.772 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.775 2 INFO nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Terminating instance
Oct 07 14:06:47 compute-0 nova_compute[259550]: 2025-10-07 14:06:47.776 2 DEBUG nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:06:48 compute-0 podman[286247]: 2025-10-07 14:06:48.086735216 +0000 UTC m=+0.069323265 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:06:48 compute-0 podman[286248]: 2025-10-07 14:06:48.12019614 +0000 UTC m=+0.101512265 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 07 14:06:48 compute-0 kernel: tape6624198-96 (unregistering): left promiscuous mode
Oct 07 14:06:48 compute-0 NetworkManager[44949]: <info>  [1759846008.2929] device (tape6624198-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 ovn_controller[151684]: 2025-10-07T14:06:48Z|00093|binding|INFO|Releasing lport e6624198-96a3-41f5-b3e6-217be5426796 from this chassis (sb_readonly=0)
Oct 07 14:06:48 compute-0 ovn_controller[151684]: 2025-10-07T14:06:48Z|00094|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 down in Southbound
Oct 07 14:06:48 compute-0 ovn_controller[151684]: 2025-10-07T14:06:48Z|00095|binding|INFO|Removing iface tape6624198-96 ovn-installed in OVS
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.321 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:da:ba 10.100.0.9'], port_security=['fa:16:3e:89:da:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '115674ad-2273-4c42-b9ae-d380c2c005d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e6624198-96a3-41f5-b3e6-217be5426796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e6624198-96a3-41f5-b3e6-217be5426796 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 unbound from our chassis
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.348 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59e728fe-bc78-4ac1-978e-eaca945f01a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 07 14:06:48 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000010.scope: Consumed 15.308s CPU time.
Oct 07 14:06:48 compute-0 systemd-machined[214580]: Machine qemu-19-instance-00000010 terminated.
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b558386e-b659-4eee-adf0-6daf1de8fb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.384 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b22a04ab-6aca-4eab-a8e4-50e650db7320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.418 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e881b89-3dca-4e92-aa2f-667cd7389182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.417 2 INFO nova.virt.libvirt.driver [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance destroyed successfully.
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.418 2 DEBUG nova.objects.instance [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'resources' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.435 2 DEBUG nova.virt.libvirt.vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:23Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.436 2 DEBUG nova.network.os_vif_util [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.437 2 DEBUG nova.network.os_vif_util [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.437 2 DEBUG os_vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6624198-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.438 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56340906-1c72-44a8-b05a-8ca880bff3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286309, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.444 2 INFO os_vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96')
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa75935-c5cc-4643-8f0d-a79f0562bb84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654932, 'tstamp': 654932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286310, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654936, 'tstamp': 654936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286310, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.458 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.461 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.461 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.462 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:48 compute-0 nova_compute[259550]: 2025-10-07 14:06:48.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.462 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:48 compute-0 ceph-mon[74295]: pgmap v1187: 305 pgs: 305 active+clean; 507 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Oct 07 14:06:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 517 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 750 KiB/s rd, 4.3 MiB/s wr, 141 op/s
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.780 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance shutdown successfully after 13 seconds.
Oct 07 14:06:49 compute-0 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct 07 14:06:49 compute-0 NetworkManager[44949]: <info>  [1759846009.8082] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:49 compute-0 ovn_controller[151684]: 2025-10-07T14:06:49Z|00096|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct 07 14:06:49 compute-0 ovn_controller[151684]: 2025-10-07T14:06:49Z|00097|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct 07 14:06:49 compute-0 ovn_controller[151684]: 2025-10-07T14:06:49Z|00098|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.855 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.856 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.876 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfbc461-e3b2-4b53-8a09-8ca31ed9b3c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 07 14:06:49 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000a.scope: Consumed 16.416s CPU time.
Oct 07 14:06:49 compute-0 systemd-machined[214580]: Machine qemu-18-instance-0000000a terminated.
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.908 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19d161c5-fbfd-4983-81b2-ddd1210464dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.911 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1be040a8-0dea-4f2a-beb1-56b3a4f80b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.922 2 INFO nova.virt.libvirt.driver [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deleting instance files /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6_del
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.923 2 INFO nova.virt.libvirt.driver [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deletion of /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6_del complete
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.947 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[329729a6-5dde-41e6-b40c-8b6c4c4c2eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.966 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03892cb3-aaba-4f60-8fbe-647756392c6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286339, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90998376-2ca7-4ff2-b9c0-49df383cd20e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:49 compute-0 nova_compute[259550]: 2025-10-07 14:06:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:49 compute-0 kernel: tap9b25db0b-24: entered promiscuous mode
Oct 07 14:06:50 compute-0 systemd-udevd[286290]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.999 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:50 compute-0 NetworkManager[44949]: <info>  [1759846010.0030] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct 07 14:06:50 compute-0 ovn_controller[151684]: 2025-10-07T14:06:50Z|00099|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct 07 14:06:50 compute-0 ovn_controller[151684]: 2025-10-07T14:06:50Z|00100|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:06:50 compute-0 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.024 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.
Oct 07 14:06:50 compute-0 ovn_controller[151684]: 2025-10-07T14:06:50Z|00101|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct 07 14:06:50 compute-0 ovn_controller[151684]: 2025-10-07T14:06:50Z|00102|if_status|INFO|Not setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down as sb is readonly
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.032 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.035 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'}
,tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:33Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.035 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.036 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.036 2 DEBUG os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 ovn_controller[151684]: 2025-10-07T14:06:50Z|00103|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.038 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.039 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.040 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.058 2 INFO os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd902c8-6648-426f-8af6-45195b4ceb6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.101 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d149ca3f-4471-4a12-a9ec-551fbe368f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a96992-4a80-4219-9c01-63942ffa17c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.140 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d22346b6-8596-427a-8f93-3167602d533b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.168 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9facc50-360c-4fbd-8d6f-6dbca58af7ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286369, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.189 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf776aa7-bd50-4a0a-ae5e-5f86565ff1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286370, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286370, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.191 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.194 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.235 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.238 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.259 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a13b511-192b-4475-9849-3ce4a96c3b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.299 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e69b01ee-24a6-4a0a-bba3-c1663310f158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.306 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9af5bd0e-9ad7-4599-8efe-fc9be64648e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.336 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39d1733f-f0e5-46f1-9949-ab2bf608cc77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.356 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30a66143-e3a8-41fe-b643-0660197454ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286377, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.378 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74daa060-0131-4ae6-b9c6-393b160d1eb2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286378, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286378, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.379 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.399 2 INFO nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 2.62 seconds to destroy the instance on the hypervisor.
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.399 2 DEBUG oslo.service.loopingcall [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.400 2 DEBUG nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.400 2 DEBUG nova.network.neutron [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.428 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.429 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.483 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.512 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.513 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete
Oct 07 14:06:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.822 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.823 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)
Oct 07 14:06:50 compute-0 ceph-mon[74295]: pgmap v1188: 305 pgs: 305 active+clean; 517 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 750 KiB/s rd, 4.3 MiB/s wr, 141 op/s
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.872 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.900 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.925 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.931 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.965 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.965 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.966 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.994 2 DEBUG nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:50 compute-0 nova_compute[259550]: 2025-10-07 14:06:50.994 2 WARNING nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.008 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.009 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.010 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.010 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.031 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.035 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.338 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.415 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:06:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 713 KiB/s rd, 3.6 MiB/s wr, 134 op/s
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.574 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.574 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.577 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.581 2 WARNING nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.588 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.589 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.591 2 DEBUG nova.network.neutron [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.593 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.594 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.638 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.662 2 INFO nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 1.26 seconds to deallocate network for instance.
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.733 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.733 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:51 compute-0 nova_compute[259550]: 2025-10-07 14:06:51.892 2 DEBUG oslo_concurrency.processutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:51 compute-0 ceph-mon[74295]: pgmap v1189: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 713 KiB/s rd, 3.6 MiB/s wr, 134 op/s
Oct 07 14:06:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88216778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.085 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.110 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.114 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:06:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335684738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.346 2 DEBUG oslo_concurrency.processutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.353 2 DEBUG nova.compute.provider_tree [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.395 2 DEBUG nova.scheduler.client.report [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.445 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.487 2 INFO nova.scheduler.client.report [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Deleted allocations for instance 115674ad-2273-4c42-b9ae-d380c2c005d6
Oct 07 14:06:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:06:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864037943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.578 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.580 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:50Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.580 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.581 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.584 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <name>instance-0000000a</name>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:06:51</nova:creationTime>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct 07 14:06:52 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <system>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </system>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <os>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </os>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <features>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </features>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:06:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6c:03:d4"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <target dev="tap9b25db0b-24"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <video>
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </video>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:06:52 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:06:52 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:06:52 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:06:52 compute-0 nova_compute[259550]: </domain>
Oct 07 14:06:52 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.585 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:50Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.585 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.586 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.586 2 DEBUG os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:52 compute-0 NetworkManager[44949]: <info>  [1759846012.5936] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.600 2 INFO os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:06:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/88216778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1335684738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2864037943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:06:52 compute-0 nova_compute[259550]: 2025-10-07 14:06:52.957 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.048 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.049 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.049 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.050 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.078 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.089 2 WARNING nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received unexpected event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with vm_state deleted and task_state None.
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-deleted-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.388 2 DEBUG nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.388 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.389 2 WARNING nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.396 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 697 KiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.569 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'keypairs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:06:53 compute-0 ovn_controller[151684]: 2025-10-07T14:06:53Z|00104|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:06:53 compute-0 ovn_controller[151684]: 2025-10-07T14:06:53Z|00105|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct 07 14:06:53 compute-0 nova_compute[259550]: 2025-10-07 14:06:53.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:53 compute-0 ceph-mon[74295]: pgmap v1190: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 697 KiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.235 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.240 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1xjdx7z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.278 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.279 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.374 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1xjdx7z" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.402 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.406 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.870 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.871 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.
Oct 07 14:06:54 compute-0 kernel: tap9b25db0b-24: entered promiscuous mode
Oct 07 14:06:54 compute-0 NetworkManager[44949]: <info>  [1759846014.9235] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Oct 07 14:06:54 compute-0 ovn_controller[151684]: 2025-10-07T14:06:54Z|00106|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct 07 14:06:54 compute-0 ovn_controller[151684]: 2025-10-07T14:06:54Z|00107|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:54 compute-0 ovn_controller[151684]: 2025-10-07T14:06:54Z|00108|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:54 compute-0 nova_compute[259550]: 2025-10-07 14:06:54.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:54 compute-0 systemd-udevd[286700]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:06:54 compute-0 NetworkManager[44949]: <info>  [1759846014.9631] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:06:54 compute-0 NetworkManager[44949]: <info>  [1759846014.9651] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:06:54 compute-0 systemd-machined[214580]: New machine qemu-20-instance-0000000a.
Oct 07 14:06:54 compute-0 ovn_controller[151684]: 2025-10-07T14:06:54Z|00109|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.973 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.974 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis
Oct 07 14:06:54 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000000a.
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.976 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:06:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.993 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f92d953c-91c6-483f-95cd-1e9750a5889d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.024 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aadcd0e3-c2e9-4a5d-879a-4b1557570b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.027 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8686fb-011a-44fb-9716-cae42ce202a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.060 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0b32027b-321f-4de9-819b-10e6d0f17e1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.064 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.065 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.082 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01a14f9e-04ca-48ef-a2bc-7ca5ea8b990c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286717, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fb16ab-f999-473f-90a0-5cb4b843beeb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.104 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.108 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:06:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.204 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 386 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 3.3 MiB/s wr, 185 op/s
Oct 07 14:06:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.854 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for ddf09c33-d956-404b-a5d8-44a3727f9a3b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.855 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846015.8535411, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.855 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.858 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.858 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.862 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.862 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.976 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:55 compute-0 nova_compute[259550]: 2025-10-07 14:06:55.981 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.180 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846015.855435, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.180 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.186 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.186 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.187 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.187 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.188 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.188 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.205 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.256 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.362 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.492 2 DEBUG nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.492 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.493 2 WARNING nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.518 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.519 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.519 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:06:56 compute-0 nova_compute[259550]: 2025-10-07 14:06:56.653 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:56 compute-0 ceph-mon[74295]: pgmap v1191: 305 pgs: 305 active+clean; 386 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 3.3 MiB/s wr, 185 op/s
Oct 07 14:06:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct 07 14:06:57 compute-0 nova_compute[259550]: 2025-10-07 14:06:57.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:06:58 compute-0 ceph-mon[74295]: pgmap v1192: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct 07 14:06:59 compute-0 podman[286761]: 2025-10-07 14:06:59.111979103 +0000 UTC m=+0.089716269 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.303 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.303 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.304 2 WARNING nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state None.
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:06:59 compute-0 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:06:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 152 op/s
Oct 07 14:07:00 compute-0 ceph-mon[74295]: pgmap v1193: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 152 op/s
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.336 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.338 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.339 2 INFO nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Terminating instance
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.340 2 DEBUG nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:00 compute-0 kernel: tap184f2379-84 (unregistering): left promiscuous mode
Oct 07 14:07:00 compute-0 NetworkManager[44949]: <info>  [1759846020.4717] device (tap184f2379-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00110|binding|INFO|Releasing lport 184f2379-8442-414e-bccb-6f5e5a314e72 from this chassis (sb_readonly=0)
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00111|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 down in Southbound
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00112|binding|INFO|Removing iface tap184f2379-84 ovn-installed in OVS
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.539 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.540 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:00 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 07 14:07:00 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 15.247s CPU time.
Oct 07 14:07:00 compute-0 systemd-machined[214580]: Machine qemu-16-instance-0000000e terminated.
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.559 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:34:23 10.100.0.11'], port_security=['fa:16:3e:c8:34:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '348c80b7-7f65-4300-9dab-6a333f1b2c74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=184f2379-8442-414e-bccb-6f5e5a314e72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.560 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 184f2379-8442-414e-bccb-6f5e5a314e72 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.561 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.574 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41940f52-74f9-441e-b74c-f6e9b0aa13d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.595 2 INFO nova.virt.libvirt.driver [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance destroyed successfully.
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.596 2 DEBUG nova.objects.instance [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.617 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c77311e-15e4-4690-8b41-659c03dd6d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[febb76cb-9535-402b-b6c6-46899857881a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.658 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b31b9394-cde3-46d5-81f4-495af53aa997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.670 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.670 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.673 2 INFO nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Terminating instance
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.674 2 DEBUG nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.681 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f0835-1c75-40b3-986d-19db50224073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286810, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.703 2 DEBUG nova.virt.libvirt.vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:01Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.703 2 DEBUG nova.network.os_vif_util [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.704 2 DEBUG nova.network.os_vif_util [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.705 2 DEBUG os_vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap184f2379-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9710863-0b8b-4271-a144-f836f43cd3c2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286811, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286811, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.713 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.714 2 INFO os_vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84')
Oct 07 14:07:00 compute-0 kernel: tapd06bd5a4-d9 (unregistering): left promiscuous mode
Oct 07 14:07:00 compute-0 NetworkManager[44949]: <info>  [1759846020.8197] device (tapd06bd5a4-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00113|binding|INFO|Releasing lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 from this chassis (sb_readonly=0)
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00114|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 down in Southbound
Oct 07 14:07:00 compute-0 ovn_controller[151684]: 2025-10-07T14:07:00Z|00115|binding|INFO|Removing iface tapd06bd5a4-d9 ovn-installed in OVS
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.857 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f8:94 10.100.0.7'], port_security=['fa:16:3e:65:f8:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4b95692e-088d-452c-83b7-4c50df73b8fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d06bd5a4-d9b7-4791-a387-d190eb1457f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.858 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d06bd5a4-d9b7-4791-a387-d190eb1457f6 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 unbound from our chassis
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.860 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80f412c9-c511-49ba-a8ca-ff830fcff803, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74223833-3950-4235-93ca-1de7aead968f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.861 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 namespace which is not needed anymore
Oct 07 14:07:00 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 07 14:07:00 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 15.427s CPU time.
Oct 07 14:07:00 compute-0 systemd-machined[214580]: Machine qemu-17-instance-0000000f terminated.
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.917 2 INFO nova.virt.libvirt.driver [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance destroyed successfully.
Oct 07 14:07:00 compute-0 nova_compute[259550]: 2025-10-07 14:07:00.917 2 DEBUG nova.objects.instance [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'resources' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:01 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : haproxy version is 2.8.14-c23fe91
Oct 07 14:07:01 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : path to executable is /usr/sbin/haproxy
Oct 07 14:07:01 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [ALERT]    (284582) : Current worker (284589) exited with code 143 (Terminated)
Oct 07 14:07:01 compute-0 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [WARNING]  (284582) : All workers exited. Exiting... (0)
Oct 07 14:07:01 compute-0 systemd[1]: libpod-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope: Deactivated successfully.
Oct 07 14:07:01 compute-0 podman[286861]: 2025-10-07 14:07:01.053631636 +0000 UTC m=+0.068623615 container died 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.068 2 DEBUG nova.virt.libvirt.vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:07Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.069 2 DEBUG nova.network.os_vif_util [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.070 2 DEBUG nova.network.os_vif_util [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.070 2 DEBUG os_vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd06bd5a4-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.084 2 INFO os_vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9')
Oct 07 14:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed1157e3fd8b123979ce2884e8cf7ae250f676278229a741e499d63f106c27c-merged.mount: Deactivated successfully.
Oct 07 14:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084-userdata-shm.mount: Deactivated successfully.
Oct 07 14:07:01 compute-0 podman[286861]: 2025-10-07 14:07:01.181289298 +0000 UTC m=+0.196281257 container cleanup 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:07:01 compute-0 systemd[1]: libpod-conmon-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope: Deactivated successfully.
Oct 07 14:07:01 compute-0 podman[286901]: 2025-10-07 14:07:01.226762914 +0000 UTC m=+0.095032801 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.280 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:01 compute-0 podman[286921]: 2025-10-07 14:07:01.311056907 +0000 UTC m=+0.099249504 container remove 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.318 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee257e4-4b70-4316-ab7d-145e7cd8397d]: (4, ('Tue Oct  7 02:07:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 (44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084)\n44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084\nTue Oct  7 02:07:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 (44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084)\n44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.319 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2bf33b-4630-4fd6-990d-0050ebbcc0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.320 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:01 compute-0 kernel: tap80f412c9-c0: left promiscuous mode
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9abf47b-4d33-4751-9704-9c1d41fc1e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82196678-78f8-4d2f-bc30-9d8239523ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[87ef612b-80c2-4c77-9884-539e0688f805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3cd8c8-81a3-403d-8977-1e2f14e79b49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654905, 'reachable_time': 15332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286943, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d80f412c9\x2dc511\x2d49ba\x2da8ca\x2dff830fcff803.mount: Deactivated successfully.
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.417 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:07:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.417 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[be53bc8e-a122-4fd3-8122-89580faf8896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.893 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.971 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.971 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:01 compute-0 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:07:02 compute-0 ceph-mon[74295]: pgmap v1194: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.783 2 INFO nova.virt.libvirt.driver [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deleting instance files /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74_del
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.784 2 INFO nova.virt.libvirt.driver [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deletion of /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74_del complete
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.898 2 INFO nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 2.56 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG oslo.service.loopingcall [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG nova.network.neutron [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.950 2 INFO nova.virt.libvirt.driver [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deleting instance files /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe_del
Oct 07 14:07:02 compute-0 nova_compute[259550]: 2025-10-07 14:07:02.954 2 INFO nova.virt.libvirt.driver [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deletion of /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe_del complete
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.056 2 INFO nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 2.38 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG oslo.service.loopingcall [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG nova.network.neutron [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.416 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846008.4146168, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.416 2 INFO nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Stopped (Lifecycle Event)
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.451 2 DEBUG nova.compute.manager [None req-e872ea26-ca95-4feb-8cb2-cf4b20658361 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.562 2 DEBUG nova.network.neutron [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.607 2 INFO nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 0.55 seconds to deallocate network for instance.
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.684 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.685 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.700 2 DEBUG nova.network.neutron [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.733 2 INFO nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 0.83 seconds to deallocate network for instance.
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.781 2 DEBUG oslo_concurrency.processutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.810 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.994 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.994 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.995 2 WARNING nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received unexpected event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with vm_state deleted and task_state None.
Oct 07 14:07:03 compute-0 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-deleted-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.166 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.166 2 WARNING nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received unexpected event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with vm_state deleted and task_state None.
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.166 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-deleted-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671485503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.238 2 DEBUG oslo_concurrency.processutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.245 2 DEBUG nova.compute.provider_tree [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.327 2 DEBUG nova.scheduler.client.report [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.437 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.439 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.493 2 INFO nova.scheduler.client.report [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Deleted allocations for instance 4b95692e-088d-452c-83b7-4c50df73b8fe
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.531 2 DEBUG oslo_concurrency.processutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:04 compute-0 ceph-mon[74295]: pgmap v1195: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 07 14:07:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3671485503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:04 compute-0 nova_compute[259550]: 2025-10-07 14:07:04.783 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657177471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.036 2 DEBUG oslo_concurrency.processutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.043 2 DEBUG nova.compute.provider_tree [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.411 2 DEBUG nova.scheduler.client.report [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 307 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.578 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.612 2 INFO nova.scheduler.client.report [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 348c80b7-7f65-4300-9dab-6a333f1b2c74
Oct 07 14:07:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:05 compute-0 nova_compute[259550]: 2025-10-07 14:07:05.831 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/657177471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.179 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.181 2 INFO nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Terminating instance
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.182 2 DEBUG nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:06 compute-0 kernel: tap132e9e5d-ea (unregistering): left promiscuous mode
Oct 07 14:07:06 compute-0 NetworkManager[44949]: <info>  [1759846026.3764] device (tap132e9e5d-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 ovn_controller[151684]: 2025-10-07T14:07:06Z|00116|binding|INFO|Releasing lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df from this chassis (sb_readonly=0)
Oct 07 14:07:06 compute-0 ovn_controller[151684]: 2025-10-07T14:07:06Z|00117|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df down in Southbound
Oct 07 14:07:06 compute-0 ovn_controller[151684]: 2025-10-07T14:07:06Z|00118|binding|INFO|Removing iface tap132e9e5d-ea ovn-installed in OVS
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.391 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:2c:66 10.100.0.10'], port_security=['fa:16:3e:66:2c:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '74094438-0995-4031-9943-cc85a5ef4f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132e9e5d-eaab-437e-a82e-d49f6c4a09df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.392 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132e9e5d-eaab-437e-a82e-d49f6c4a09df in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.394 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[abbf938c-17e3-4e41-9f58-68a62e095c9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 07 14:07:06 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 16.561s CPU time.
Oct 07 14:07:06 compute-0 systemd-machined[214580]: Machine qemu-15-instance-0000000d terminated.
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fd5c8e-2480-46e4-a7de-76b3e3111c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.462 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5492a0a5-999d-404f-9105-12aa2b3c43c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.498 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3915fb63-8332-4984-b54f-62a63ce13636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.522 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[104eefc0-56ef-4ddd-9a7f-f20e0ecf3fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287001, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.537 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4fee9561-4f12-42e1-8057-7912a73afed9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.539 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.546 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.546 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.547 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.634 2 INFO nova.virt.libvirt.driver [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance destroyed successfully.
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.636 2 DEBUG nova.objects.instance [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.693 2 DEBUG nova.virt.libvirt.vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:45Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.695 2 DEBUG nova.network.os_vif_util [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.696 2 DEBUG nova.network.os_vif_util [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.696 2 DEBUG os_vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.698 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132e9e5d-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.704 2 INFO os_vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea')
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.723 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:06 compute-0 nova_compute[259550]: 2025-10-07 14:07:06.725 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:07:06 compute-0 ceph-mon[74295]: pgmap v1196: 305 pgs: 305 active+clean; 307 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct 07 14:07:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 719 KiB/s wr, 130 op/s
Oct 07 14:07:08 compute-0 ceph-mon[74295]: pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 719 KiB/s wr, 130 op/s
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.070 2 DEBUG nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.071 2 WARNING nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received unexpected event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with vm_state active and task_state deleting.
Oct 07 14:07:09 compute-0 ovn_controller[151684]: 2025-10-07T14:07:09Z|00119|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:09 compute-0 ovn_controller[151684]: 2025-10-07T14:07:09Z|00120|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 191 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 139 op/s
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.594 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.594 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.613 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.682 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.683 2 INFO nova.compute.claims [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:07:09 compute-0 nova_compute[259550]: 2025-10-07 14:07:09.870 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.029 2 INFO nova.virt.libvirt.driver [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deleting instance files /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57_del
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.031 2 INFO nova.virt.libvirt.driver [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deletion of /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57_del complete
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.090 2 INFO nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 3.91 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.091 2 DEBUG oslo.service.loopingcall [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.091 2 DEBUG nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.092 2 DEBUG nova.network.neutron [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:10 compute-0 ceph-mon[74295]: pgmap v1198: 305 pgs: 305 active+clean; 191 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 139 op/s
Oct 07 14:07:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766614299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.337 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.345 2 DEBUG nova.compute.provider_tree [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.357 2 DEBUG nova.scheduler.client.report [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.376 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.377 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.417 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.417 2 DEBUG nova.network.neutron [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.432 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.448 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.539 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.541 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.541 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating image(s)
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.561 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.583 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.604 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.608 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:10 compute-0 ovn_controller[151684]: 2025-10-07T14:07:10Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:07:10 compute-0 ovn_controller[151684]: 2025-10-07T14:07:10Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct 07 14:07:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.673 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.674 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.699 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.703 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.811 2 DEBUG nova.network.neutron [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:07:10 compute-0 nova_compute[259550]: 2025-10-07 14:07:10.812 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:07:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1766614299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.324 2 DEBUG nova.network.neutron [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.403 2 INFO nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 1.31 seconds to deallocate network for instance.
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.470 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.471 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.486 2 DEBUG nova.compute.manager [req-9a16ffbe-5df6-4696-9cc2-6c627716c9ec req-9cfaf2cd-84be-4067-a798-83b0a5fadd7e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-deleted-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 861 KiB/s rd, 1019 KiB/s wr, 126 op/s
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.573 2 DEBUG oslo_concurrency.processutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:11 compute-0 nova_compute[259550]: 2025-10-07 14:07:11.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11611704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.048 2 DEBUG oslo_concurrency.processutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.056 2 DEBUG nova.compute.provider_tree [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.106 2 DEBUG nova.scheduler.client.report [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.218 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.295 2 INFO nova.scheduler.client.report [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 74094438-0995-4031-9943-cc85a5ef4f57
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.379 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:12 compute-0 ceph-mon[74295]: pgmap v1199: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 861 KiB/s rd, 1019 KiB/s wr, 126 op/s
Oct 07 14:07:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/11611704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.789 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:12 compute-0 nova_compute[259550]: 2025-10-07 14:07:12.862 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] resizing rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:07:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 1019 KiB/s wr, 102 op/s
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.548 2 DEBUG nova.objects.instance [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'migration_context' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.561 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.561 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Ensure instance console log exists: /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.562 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.562 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.563 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.565 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.569 2 WARNING nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.573 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.573 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.576 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.577 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.578 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.578 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.585 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.758 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.762 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.763 2 INFO nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Terminating instance
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.764 2 DEBUG nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:13 compute-0 kernel: tap3486260f-fd (unregistering): left promiscuous mode
Oct 07 14:07:13 compute-0 NetworkManager[44949]: <info>  [1759846033.9083] device (tap3486260f-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:13 compute-0 ovn_controller[151684]: 2025-10-07T14:07:13Z|00121|binding|INFO|Releasing lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 from this chassis (sb_readonly=0)
Oct 07 14:07:13 compute-0 ovn_controller[151684]: 2025-10-07T14:07:13Z|00122|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 down in Southbound
Oct 07 14:07:13 compute-0 ovn_controller[151684]: 2025-10-07T14:07:13Z|00123|binding|INFO|Removing iface tap3486260f-fd ovn-installed in OVS
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.933 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5d:93 10.100.0.9'], port_security=['fa:16:3e:68:5d:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3486260f-fd35-48fb-a925-cbe6f4a1a9f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.934 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.941 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct 07 14:07:13 compute-0 nova_compute[259550]: 2025-10-07 14:07:13.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.966 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e297e-3c47-424c-886f-14ccf7f1cdb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:13 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 07 14:07:13 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Consumed 17.821s CPU time.
Oct 07 14:07:13 compute-0 systemd-machined[214580]: Machine qemu-13-instance-0000000b terminated.
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.992 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72730b9d-42b8-4545-835a-38de165d8273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.995 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[341a7778-8ef1-410c-8b3a-5088fbe55c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.022 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[980c64c4-92fd-4ff0-b07a-c1e545040c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.042 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b060a57-7ac5-4219-b55d-5619165c8d57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287278, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160664761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d43e55d-65b0-44a2-b7a9-9e97e6ab0a8c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287280, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287280, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.062 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.069 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.111 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.117 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.208 2 INFO nova.virt.libvirt.driver [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance destroyed successfully.
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.209 2 DEBUG nova.objects.instance [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.224 2 DEBUG nova.virt.libvirt.vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:30Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.225 2 DEBUG nova.network.os_vif_util [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.226 2 DEBUG nova.network.os_vif_util [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.226 2 DEBUG os_vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3486260f-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.235 2 INFO os_vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd')
Oct 07 14:07:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/36237976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.577 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.578 2 DEBUG nova.objects.instance [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.591 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <uuid>809b049f-447c-4cdd-b8d2-8325f6d3b576</uuid>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <name>instance-00000011</name>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:name>tempest-LiveMigrationNegativeTest-server-842363532</nova:name>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:07:13</nova:creationTime>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:user uuid="027a68b305ed4ef799375643ce9e5831">tempest-LiveMigrationNegativeTest-182284186-project-member</nova:user>
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <nova:project uuid="64dfda49c33d49eb9d343ba7a8f90d6d">tempest-LiveMigrationNegativeTest-182284186</nova:project>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <system>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="serial">809b049f-447c-4cdd-b8d2-8325f6d3b576</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="uuid">809b049f-447c-4cdd-b8d2-8325f6d3b576</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </system>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <os>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </os>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <features>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </features>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/809b049f-447c-4cdd-b8d2-8325f6d3b576_disk">
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config">
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/console.log" append="off"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <video>
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </video>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:07:14 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:07:14 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:07:14 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:07:14 compute-0 nova_compute[259550]: </domain>
Oct 07 14:07:14 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:07:14 compute-0 sudo[287350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:14 compute-0 sudo[287350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:14 compute-0 sudo[287350]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:14 compute-0 ceph-mon[74295]: pgmap v1200: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 1019 KiB/s wr, 102 op/s
Oct 07 14:07:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/160664761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/36237976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.667 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.667 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.668 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Using config drive
Oct 07 14:07:14 compute-0 sudo[287378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:07:14 compute-0 sudo[287378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:14 compute-0 sudo[287378]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.691 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:14 compute-0 sudo[287418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:14 compute-0 sudo[287418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:14 compute-0 sudo[287418]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:14 compute-0 sudo[287446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:07:14 compute-0 sudo[287446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.913 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating config drive at /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config
Oct 07 14:07:14 compute-0 nova_compute[259550]: 2025-10-07 14:07:14.921 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0b1ikph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.056 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0b1ikph" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.078 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.081 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:15 compute-0 sudo[287446]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:15 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d5bfaa21-e0b8-4777-9891-a22f95873d8e does not exist
Oct 07 14:07:15 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 645aa166-edc4-427a-9dc9-e3073da22029 does not exist
Oct 07 14:07:15 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0415d47e-4e9b-4e32-b07b-5b30f72c60d1 does not exist
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:07:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:07:15 compute-0 sudo[287543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:15 compute-0 sudo[287543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:15 compute-0 sudo[287543]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.452 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.453 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deleting local config drive /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config because it was imported into RBD.
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:15 compute-0 sudo[287568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:07:15 compute-0 sudo[287568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:15 compute-0 sudo[287568]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 228 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 3.3 MiB/s wr, 168 op/s
Oct 07 14:07:15 compute-0 systemd-machined[214580]: New machine qemu-21-instance-00000011.
Oct 07 14:07:15 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000011.
Oct 07 14:07:15 compute-0 sudo[287599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:15 compute-0 sudo[287599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:15 compute-0 sudo[287599]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.594 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846020.5930336, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.596 2 INFO nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Stopped (Lifecycle Event)
Oct 07 14:07:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:15 compute-0 sudo[287627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:07:15 compute-0 sudo[287627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.673 2 DEBUG nova.compute.manager [None req-bb845f3a-0c20-4f9d-b7fd-8fd062d0fcfa - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:07:15 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.757 2 INFO nova.virt.libvirt.driver [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deleting instance files /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_del
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.758 2 INFO nova.virt.libvirt.driver [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deletion of /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_del complete
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.911 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846020.9110177, 4b95692e-088d-452c-83b7-4c50df73b8fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.913 2 INFO nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Stopped (Lifecycle Event)
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.937 2 INFO nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 2.17 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG oslo.service.loopingcall [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG nova.network.neutron [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:15 compute-0 nova_compute[259550]: 2025-10-07 14:07:15.942 2 DEBUG nova.compute.manager [None req-e08215e8-c508-4a75-90ad-29630b3abb9e - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.050503253 +0000 UTC m=+0.060704913 container create d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:07:16 compute-0 systemd[1]: Started libpod-conmon-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope.
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.018681832 +0000 UTC m=+0.028883522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.174953369 +0000 UTC m=+0.185155029 container init d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.184485334 +0000 UTC m=+0.194686984 container start d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:07:16 compute-0 naughty_dubinsky[287730]: 167 167
Oct 07 14:07:16 compute-0 systemd[1]: libpod-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope: Deactivated successfully.
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.199804883 +0000 UTC m=+0.210006553 container attach d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.200953383 +0000 UTC m=+0.211155043 container died d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 07 14:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f53987f538ab19c836d25239c8db0f5c59bbe98cdcac0231d96fd81b28f627f-merged.mount: Deactivated successfully.
Oct 07 14:07:16 compute-0 podman[287698]: 2025-10-07 14:07:16.531192729 +0000 UTC m=+0.541394369 container remove d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:07:16 compute-0 systemd[1]: libpod-conmon-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope: Deactivated successfully.
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.705 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846036.70553, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Resumed (Lifecycle Event)
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.709 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.709 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.713 2 INFO nova.virt.libvirt.driver [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance spawned successfully.
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.713 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.789 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:16 compute-0 nova_compute[259550]: 2025-10-07 14:07:16.792 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:16 compute-0 podman[287779]: 2025-10-07 14:07:16.708915689 +0000 UTC m=+0.031066671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:16 compute-0 ceph-mon[74295]: pgmap v1201: 305 pgs: 305 active+clean; 228 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 3.3 MiB/s wr, 168 op/s
Oct 07 14:07:16 compute-0 podman[287779]: 2025-10-07 14:07:16.870908989 +0000 UTC m=+0.193059951 container create 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:07:16 compute-0 systemd[1]: Started libpod-conmon-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope.
Oct 07 14:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:17 compute-0 podman[287779]: 2025-10-07 14:07:17.018333539 +0000 UTC m=+0.340484521 container init 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:07:17 compute-0 podman[287779]: 2025-10-07 14:07:17.031326166 +0000 UTC m=+0.353477128 container start 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:07:17 compute-0 podman[287779]: 2025-10-07 14:07:17.037118242 +0000 UTC m=+0.359269204 container attach 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.062 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.065 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846036.7085156, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.065 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Started (Lifecycle Event)
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.071 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.072 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.072 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.108 2 DEBUG nova.network.neutron [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.114 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.150 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.153 2 INFO nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 1.21 seconds to deallocate network for instance.
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.159 2 INFO nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 6.62 seconds to spawn the instance on the hypervisor.
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.160 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.195 2 DEBUG nova.compute.manager [req-87785c9c-63c2-4c98-925e-300d213ed4d6 req-c5fac24c-9bc6-4fb5-95a6-ead4a291bea4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-deleted-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.221 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.222 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.246 2 INFO nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 7.59 seconds to build instance.
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.274 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.333 2 DEBUG oslo_concurrency.processutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 209 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct 07 14:07:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860727468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.854 2 DEBUG oslo_concurrency.processutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.862 2 DEBUG nova.compute.provider_tree [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/860727468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.898 2 DEBUG nova.scheduler.client.report [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.931 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.933 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.950 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:17 compute-0 nova_compute[259550]: 2025-10-07 14:07:17.966 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.000 2 INFO nova.scheduler.client.report [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.078 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.081 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.081 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.088 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.089 2 INFO nova.compute.claims [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.243 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:18 compute-0 quizzical_aryabhata[287796]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:07:18 compute-0 quizzical_aryabhata[287796]: --> relative data size: 1.0
Oct 07 14:07:18 compute-0 quizzical_aryabhata[287796]: --> All data devices are unavailable
Oct 07 14:07:18 compute-0 systemd[1]: libpod-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Deactivated successfully.
Oct 07 14:07:18 compute-0 systemd[1]: libpod-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Consumed 1.232s CPU time.
Oct 07 14:07:18 compute-0 podman[287779]: 2025-10-07 14:07:18.430449091 +0000 UTC m=+1.752600053 container died 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.485 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.487 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.489 2 INFO nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Terminating instance
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.490 2 DEBUG nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e-merged.mount: Deactivated successfully.
Oct 07 14:07:18 compute-0 podman[287779]: 2025-10-07 14:07:18.52995197 +0000 UTC m=+1.852102922 container remove 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:07:18 compute-0 systemd[1]: libpod-conmon-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Deactivated successfully.
Oct 07 14:07:18 compute-0 sudo[287627]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:18 compute-0 podman[287868]: 2025-10-07 14:07:18.594187257 +0000 UTC m=+0.117977845 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:07:18 compute-0 podman[287875]: 2025-10-07 14:07:18.606592998 +0000 UTC m=+0.123980555 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:07:18 compute-0 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct 07 14:07:18 compute-0 NetworkManager[44949]: <info>  [1759846038.6101] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:18 compute-0 ovn_controller[151684]: 2025-10-07T14:07:18Z|00124|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct 07 14:07:18 compute-0 ovn_controller[151684]: 2025-10-07T14:07:18Z|00125|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct 07 14:07:18 compute-0 ovn_controller[151684]: 2025-10-07T14:07:18Z|00126|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.635 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.637 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct 07 14:07:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.638 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1eabd9ee-6333-432b-b50d-9679677d38f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:07:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[207a5d39-f746-455c-b02c-12c896424f56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.640 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 namespace which is not needed anymore
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:18 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 07 14:07:18 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000000a.scope: Consumed 14.085s CPU time.
Oct 07 14:07:18 compute-0 systemd-machined[214580]: Machine qemu-20-instance-0000000a terminated.
Oct 07 14:07:18 compute-0 sudo[287915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:18 compute-0 sudo[287915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:18 compute-0 sudo[287915]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849641466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.736 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.737 2 DEBUG nova.objects.instance [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.757 2 DEBUG nova.virt.libvirt.vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:59Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.758 2 DEBUG nova.network.os_vif_util [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.759 2 DEBUG nova.network.os_vif_util [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.759 2 DEBUG os_vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:18 compute-0 sudo[287952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:07:18 compute-0 sudo[287952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.777 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:18 compute-0 sudo[287952]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.780 2 INFO os_vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')
Oct 07 14:07:18 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : haproxy version is 2.8.14-c23fe91
Oct 07 14:07:18 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : path to executable is /usr/sbin/haproxy
Oct 07 14:07:18 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [WARNING]  (282456) : Exiting Master process...
Oct 07 14:07:18 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [ALERT]    (282456) : Current worker (282458) exited with code 143 (Terminated)
Oct 07 14:07:18 compute-0 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [WARNING]  (282456) : All workers exited. Exiting... (0)
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.844 2 DEBUG nova.compute.provider_tree [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:18 compute-0 sudo[288006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:18 compute-0 systemd[1]: libpod-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope: Deactivated successfully.
Oct 07 14:07:18 compute-0 sudo[288006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:18 compute-0 podman[287999]: 2025-10-07 14:07:18.852555201 +0000 UTC m=+0.068113780 container died 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:07:18 compute-0 sudo[288006]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.862 2 DEBUG nova.scheduler.client.report [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:18 compute-0 ceph-mon[74295]: pgmap v1202: 305 pgs: 305 active+clean; 209 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct 07 14:07:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/849641466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.890 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.893 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091-userdata-shm.mount: Deactivated successfully.
Oct 07 14:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee58677923a11d6eaf88f22eb26cdfd6a6c1bfcaf21246d843b12d3493d77fd3-merged.mount: Deactivated successfully.
Oct 07 14:07:18 compute-0 sudo[288063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:07:18 compute-0 sudo[288063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.938 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.938 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.955 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:07:18 compute-0 nova_compute[259550]: 2025-10-07 14:07:18.974 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:07:19 compute-0 podman[287999]: 2025-10-07 14:07:19.00514387 +0000 UTC m=+0.220702409 container cleanup 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:19 compute-0 systemd[1]: libpod-conmon-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope: Deactivated successfully.
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.061 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.063 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.064 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating image(s)
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.099 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.134 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:19 compute-0 podman[288098]: 2025-10-07 14:07:19.171538077 +0000 UTC m=+0.132972865 container remove 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.172 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8760a037-d1d7-40a4-b4cf-ec2ffcc16f6a]: (4, ('Tue Oct  7 02:07:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 (534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091)\n534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091\nTue Oct  7 02:07:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 (534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091)\n534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[864a1b9f-251c-4204-9c3b-fc1363ae10ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.184 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.184 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:19 compute-0 kernel: tap1eabd9ee-60: left promiscuous mode
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[408e545e-ac63-4e7d-8e2e-1ab0d2434934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.228 2 DEBUG nova.policy [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8977e4fa4adb42bfae4fe2be5d339769', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.227 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1621709-935b-4a58-b6e1-f4553d685d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.232 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[935ef788-bfd2-499f-92cd-def0af83c5b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37a17185-314a-401e-9049-ea058019d184]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650989, 'reachable_time': 25990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288191, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d1eabd9ee\x2d6333\x2d432b\x2db50d\x2d9679677d38f6.mount: Deactivated successfully.
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.265 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:07:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.266 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2a740434-d43b-43a0-9954-b040f2c04f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.269 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.270 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.271 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.271 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.299 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.305 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.421034775 +0000 UTC m=+0.061950566 container create 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:07:19 compute-0 systemd[1]: Started libpod-conmon-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope.
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.387480618 +0000 UTC m=+0.028396429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 167 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.556161887 +0000 UTC m=+0.197077698 container init 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.566190795 +0000 UTC m=+0.207106586 container start 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.576924032 +0000 UTC m=+0.217839823 container attach 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:07:19 compute-0 systemd[1]: libpod-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope: Deactivated successfully.
Oct 07 14:07:19 compute-0 friendly_hermann[288260]: 167 167
Oct 07 14:07:19 compute-0 conmon[288260]: conmon 70bf7384c4706cfdadb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope/container/memory.events
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.584310999 +0000 UTC m=+0.225226800 container died 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9120e654d0be78ccedec1f9237381d7035a2c7c0bc8a79682a1a4a9931add64-merged.mount: Deactivated successfully.
Oct 07 14:07:19 compute-0 podman[288225]: 2025-10-07 14:07:19.724723302 +0000 UTC m=+0.365639093 container remove 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:07:19 compute-0 systemd[1]: libpod-conmon-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope: Deactivated successfully.
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.750 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.883 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] resizing rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:07:19 compute-0 ceph-mon[74295]: pgmap v1203: 305 pgs: 305 active+clean; 167 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Oct 07 14:07:19 compute-0 podman[288317]: 2025-10-07 14:07:19.93529382 +0000 UTC m=+0.054296262 container create 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:07:19 compute-0 systemd[1]: Started libpod-conmon-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope.
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.994 2 INFO nova.virt.libvirt.driver [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del
Oct 07 14:07:19 compute-0 nova_compute[259550]: 2025-10-07 14:07:19.996 2 INFO nova.virt.libvirt.driver [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete
Oct 07 14:07:20 compute-0 podman[288317]: 2025-10-07 14:07:19.914991267 +0000 UTC m=+0.033993729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:20 compute-0 podman[288317]: 2025-10-07 14:07:20.049345318 +0000 UTC m=+0.168347760 container init 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:07:20 compute-0 podman[288317]: 2025-10-07 14:07:20.061180174 +0000 UTC m=+0.180182616 container start 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:07:20 compute-0 podman[288317]: 2025-10-07 14:07:20.066815974 +0000 UTC m=+0.185818436 container attach 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.097 2 DEBUG nova.objects.instance [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'migration_context' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.101 2 INFO nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 1.61 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.101 2 DEBUG oslo.service.loopingcall [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.102 2 DEBUG nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.102 2 DEBUG nova.network.neutron [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.112 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Ensure instance console log exists: /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.114 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.323 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Successfully created port: 78dbc059-1828-41b6-84fa-bf4ac0b31103 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.650 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.651 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.885 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]: {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     "0": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "devices": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "/dev/loop3"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             ],
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_name": "ceph_lv0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_size": "21470642176",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "name": "ceph_lv0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "tags": {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_name": "ceph",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.crush_device_class": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.encrypted": "0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_id": "0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.vdo": "0"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             },
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "vg_name": "ceph_vg0"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         }
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     ],
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     "1": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "devices": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "/dev/loop4"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             ],
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_name": "ceph_lv1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_size": "21470642176",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "name": "ceph_lv1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "tags": {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_name": "ceph",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.crush_device_class": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.encrypted": "0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_id": "1",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.vdo": "0"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             },
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "vg_name": "ceph_vg1"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         }
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     ],
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     "2": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "devices": [
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "/dev/loop5"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             ],
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_name": "ceph_lv2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_size": "21470642176",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "name": "ceph_lv2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "tags": {
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.cluster_name": "ceph",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.crush_device_class": "",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.encrypted": "0",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osd_id": "2",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:                 "ceph.vdo": "0"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             },
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "type": "block",
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:             "vg_name": "ceph_vg2"
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:         }
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]:     ]
Oct 07 14:07:20 compute-0 agitated_khayyam[288354]: }
Oct 07 14:07:20 compute-0 systemd[1]: libpod-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope: Deactivated successfully.
Oct 07 14:07:20 compute-0 conmon[288354]: conmon 569ffc8244a13310c5d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope/container/memory.events
Oct 07 14:07:20 compute-0 nova_compute[259550]: 2025-10-07 14:07:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:21 compute-0 podman[288381]: 2025-10-07 14:07:21.002293187 +0000 UTC m=+0.031864413 container died 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76-merged.mount: Deactivated successfully.
Oct 07 14:07:21 compute-0 podman[288381]: 2025-10-07 14:07:21.156906319 +0000 UTC m=+0.186477525 container remove 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:07:21 compute-0 systemd[1]: libpod-conmon-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope: Deactivated successfully.
Oct 07 14:07:21 compute-0 sudo[288063]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.237 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.238 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.238 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:07:21 compute-0 sudo[288396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:21 compute-0 sudo[288396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:21 compute-0 sudo[288396]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:21 compute-0 sudo[288421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:07:21 compute-0 sudo[288421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:21 compute-0 sudo[288421]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.370 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.371 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.382 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.382 2 INFO nova.compute.claims [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:07:21 compute-0 sudo[288446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:21 compute-0 sudo[288446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:21 compute-0 sudo[288446]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:21 compute-0 sudo[288471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:07:21 compute-0 sudo[288471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 227 op/s
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.541 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.629 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846026.6275175, 74094438-0995-4031-9943-cc85a5ef4f57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.630 2 INFO nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Stopped (Lifecycle Event)
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.666 2 DEBUG nova.compute.manager [None req-5ed7bf7d-344d-4c82-b742-2473a2894331 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.809 2 DEBUG nova.network.neutron [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.848 2 INFO nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 1.75 seconds to deallocate network for instance.
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.88467833 +0000 UTC m=+0.048867097 container create 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.913 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:21 compute-0 systemd[1]: Started libpod-conmon-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope.
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.956 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Successfully updated port: 78dbc059-1828-41b6-84fa-bf4ac0b31103 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.863606577 +0000 UTC m=+0.027795364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.978473317 +0000 UTC m=+0.142662094 container init 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.986544673 +0000 UTC m=+0.150733430 container start 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.989 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.990 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquired lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:07:21 compute-0 nova_compute[259550]: 2025-10-07 14:07:21.990 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.990308723 +0000 UTC m=+0.154497510 container attach 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:07:21 compute-0 interesting_wilson[288571]: 167 167
Oct 07 14:07:21 compute-0 systemd[1]: libpod-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope: Deactivated successfully.
Oct 07 14:07:21 compute-0 podman[288555]: 2025-10-07 14:07:21.993963531 +0000 UTC m=+0.158152318 container died 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6efeaaf303e509469dd4a97a146952643ab3f4c2b939264f679aefa221cdf9-merged.mount: Deactivated successfully.
Oct 07 14:07:22 compute-0 podman[288555]: 2025-10-07 14:07:22.038371338 +0000 UTC m=+0.202560095 container remove 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:07:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635956020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:22 compute-0 systemd[1]: libpod-conmon-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope: Deactivated successfully.
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.079 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.090 2 DEBUG nova.compute.provider_tree [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.107 2 DEBUG nova.scheduler.client.report [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.111 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.145 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.146 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.149 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.237 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.237 2 DEBUG nova.network.neutron [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.244 2 DEBUG nova.compute.manager [req-9ea3882d-560f-460a-bb08-5056628e5f74 req-816f3324-2937-4969-b2a1-2cbed01f4d7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-deleted-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.280 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:07:22 compute-0 podman[288597]: 2025-10-07 14:07:22.212866071 +0000 UTC m=+0.031710698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.323 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:07:22 compute-0 podman[288597]: 2025-10-07 14:07:22.40589063 +0000 UTC m=+0.224735227 container create d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.463 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.464 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.465 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating image(s)
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.491 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.518 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.541 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.547 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.584 2 DEBUG oslo_concurrency.processutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:22 compute-0 ceph-mon[74295]: pgmap v1204: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 227 op/s
Oct 07 14:07:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/635956020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:22 compute-0 systemd[1]: Started libpod-conmon-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope.
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:07:22
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control']
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:07:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.645 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.646 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.647 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.647 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.674 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.679 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 10c5acec-9e20-431b-a467-d54b7acbabfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:22 compute-0 podman[288597]: 2025-10-07 14:07:22.688634257 +0000 UTC m=+0.507478864 container init d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:07:22 compute-0 podman[288597]: 2025-10-07 14:07:22.698575143 +0000 UTC m=+0.517419760 container start d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:07:22 compute-0 podman[288597]: 2025-10-07 14:07:22.70671758 +0000 UTC m=+0.525562187 container attach d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.794 2 DEBUG nova.network.neutron [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:07:22 compute-0 nova_compute[259550]: 2025-10-07 14:07:22.795 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:07:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.121 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313080681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.153 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Releasing lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.153 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance network_info: |[{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.156 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start _get_guest_xml network_info=[{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.161 2 DEBUG oslo_concurrency.processutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.165 2 WARNING nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.171 2 DEBUG nova.compute.provider_tree [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.175 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.175 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.179 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.183 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.183 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.187 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.217 2 DEBUG nova.scheduler.client.report [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.237 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 10c5acec-9e20-431b-a467-d54b7acbabfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.262 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.297 2 INFO nova.scheduler.client.report [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance ddf09c33-d956-404b-a5d8-44a3727f9a3b
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.304 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] resizing rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.389 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.438 2 DEBUG nova.objects.instance [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'migration_context' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.451 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.451 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Ensure instance console log exists: /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.453 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.458 2 WARNING nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.462 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.463 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.468 2 DEBUG nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.469 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.469 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.470 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.470 2 DEBUG nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.470 2 WARNING nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state deleted and task_state None.
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.474 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.475 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.475 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.476 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.476 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.479 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.479 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.483 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 198 op/s
Oct 07 14:07:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3313080681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550266879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.773 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.793 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.798 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:23 compute-0 gallant_bartik[288670]: {
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_id": 2,
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "type": "bluestore"
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     },
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_id": 1,
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "type": "bluestore"
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     },
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_id": 0,
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:         "type": "bluestore"
Oct 07 14:07:23 compute-0 gallant_bartik[288670]:     }
Oct 07 14:07:23 compute-0 gallant_bartik[288670]: }
Oct 07 14:07:23 compute-0 systemd[1]: libpod-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Deactivated successfully.
Oct 07 14:07:23 compute-0 systemd[1]: libpod-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Consumed 1.144s CPU time.
Oct 07 14:07:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443311987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:23 compute-0 podman[288896]: 2025-10-07 14:07:23.953728729 +0000 UTC m=+0.032405088 container died d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.980 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:23 compute-0 nova_compute[259550]: 2025-10-07 14:07:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823-merged.mount: Deactivated successfully.
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.010 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.016 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:24 compute-0 podman[288896]: 2025-10-07 14:07:24.090616597 +0000 UTC m=+0.169292956 container remove d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:07:24 compute-0 systemd[1]: libpod-conmon-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Deactivated successfully.
Oct 07 14:07:24 compute-0 sudo[288471]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:07:24 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:07:24 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:24 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b82b8d9a-65c1-410d-a85d-f7c968a6d49c does not exist
Oct 07 14:07:24 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f4444ea3-cd3e-457f-8696-a0de6ce47c94 does not exist
Oct 07 14:07:24 compute-0 sudo[288970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:07:24 compute-0 sudo[288970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:24 compute-0 sudo[288970]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508793537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 sudo[288995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:07:24 compute-0 sudo[288995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.320 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:24 compute-0 sudo[288995]: pam_unix(sudo:session): session closed for user root
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG nova.compute.manager [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-changed-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG nova.compute.manager [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Refreshing instance network info cache due to event network-changed-78dbc059-1828-41b6-84fa-bf4ac0b31103. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.324 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.324 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Refreshing network info cache for port 78dbc059-1828-41b6-84fa-bf4ac0b31103 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.326 2 DEBUG nova.virt.libvirt.vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:19Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.327 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.327 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.329 2 DEBUG nova.objects.instance [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'pci_devices' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.352 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <uuid>dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</uuid>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <name>instance-00000012</name>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesNegativeTestJSON-server-926892453</nova:name>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:07:23</nova:creationTime>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:user uuid="8977e4fa4adb42bfae4fe2be5d339769">tempest-ImagesNegativeTestJSON-151850229-project-member</nova:user>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:project uuid="15d729b9c0bc4738b4f887d6b764fb5a">tempest-ImagesNegativeTestJSON-151850229</nova:project>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:port uuid="78dbc059-1828-41b6-84fa-bf4ac0b31103">
Oct 07 14:07:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="serial">dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="uuid">dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8e:a8:72"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <target dev="tap78dbc059-18"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/console.log" append="off"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:07:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:07:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.357 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Preparing to wait for external event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.358 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.358 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.359 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.360 2 DEBUG nova.virt.libvirt.vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:19Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.360 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.361 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.361 2 DEBUG os_vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78dbc059-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78dbc059-18, col_values=(('external_ids', {'iface-id': '78dbc059-1828-41b6-84fa-bf4ac0b31103', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:a8:72', 'vm-uuid': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:24 compute-0 NetworkManager[44949]: <info>  [1759846044.3699] manager: (tap78dbc059-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.377 2 INFO os_vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18')
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.437 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.438 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.438 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No VIF found with MAC fa:16:3e:8e:a8:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.439 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Using config drive
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.459 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:07:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180699488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.519 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.521 2 DEBUG nova.objects.instance [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.533 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <uuid>10c5acec-9e20-431b-a467-d54b7acbabfd</uuid>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <name>instance-00000013</name>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:name>tempest-LiveMigrationNegativeTest-server-160156078</nova:name>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:07:23</nova:creationTime>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:user uuid="027a68b305ed4ef799375643ce9e5831">tempest-LiveMigrationNegativeTest-182284186-project-member</nova:user>
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <nova:project uuid="64dfda49c33d49eb9d343ba7a8f90d6d">tempest-LiveMigrationNegativeTest-182284186</nova:project>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="serial">10c5acec-9e20-431b-a467-d54b7acbabfd</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="uuid">10c5acec-9e20-431b-a467-d54b7acbabfd</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/10c5acec-9e20-431b-a467-d54b7acbabfd_disk">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:07:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/console.log" append="off"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:07:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:07:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:07:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:07:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:07:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.582 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.583 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.584 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Using config drive
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.608 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:24 compute-0 ceph-mon[74295]: pgmap v1205: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 198 op/s
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1550266879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3443311987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1508793537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1180699488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.813 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating config drive at /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.817 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ibdkb3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.954 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ibdkb3x" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.977 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:24 compute-0 nova_compute[259550]: 2025-10-07 14:07:24.980 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.007 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating config drive at /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.012 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_uee70k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.039 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.077 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.077 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.078 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.149 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_uee70k" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.179 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.185 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 163 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.2 MiB/s wr, 238 op/s
Oct 07 14:07:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849065527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.556 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.574 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.575 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deleting local config drive /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config because it was imported into RBD.
Oct 07 14:07:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:25 compute-0 systemd-machined[214580]: New machine qemu-22-instance-00000013.
Oct 07 14:07:25 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.847 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updated VIF entry in instance network info cache for port 78dbc059-1828-41b6-84fa-bf4ac0b31103. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.849 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.882 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:07:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1849065527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.896 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.897 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.900 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.900 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.902 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:25 compute-0 nova_compute[259550]: 2025-10-07 14:07:25.903 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.058 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.060 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4300MB free_disk=59.93330383300781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.061 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.213 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.215 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deleting local config drive /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config because it was imported into RBD.
Oct 07 14:07:26 compute-0 kernel: tap78dbc059-18: entered promiscuous mode
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.2729] manager: (tap78dbc059-18): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct 07 14:07:26 compute-0 ovn_controller[151684]: 2025-10-07T14:07:26Z|00127|binding|INFO|Claiming lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 for this chassis.
Oct 07 14:07:26 compute-0 ovn_controller[151684]: 2025-10-07T14:07:26Z|00128|binding|INFO|78dbc059-1828-41b6-84fa-bf4ac0b31103: Claiming fa:16:3e:8e:a8:72 10.100.0.6
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.282 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 809b049f-447c-4cdd-b8d2-8325f6d3b576 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance dfe85d14-0395-4f7f-8bc1-f536aefe2ffc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 10c5acec-9e20-431b-a467-d54b7acbabfd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:07:26 compute-0 systemd-udevd[289233]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:07:26 compute-0 systemd-machined[214580]: New machine qemu-23-instance-00000012.
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.3168] device (tap78dbc059-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.3178] device (tap78dbc059-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:07:26 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000012.
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.355 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:26 compute-0 ovn_controller[151684]: 2025-10-07T14:07:26Z|00129|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 ovn-installed in OVS
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.399 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a8:72 10.100.0.6'], port_security=['fa:16:3e:8e:a8:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad09d09b-a958-4ea8-8f7f-2304bf5c2ff2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5e58e1-0685-45b5-97d6-a48d03b5304d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78dbc059-1828-41b6-84fa-bf4ac0b31103) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.400 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78dbc059-1828-41b6-84fa-bf4ac0b31103 in datapath 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c bound to our chassis
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.401 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c
Oct 07 14:07:26 compute-0 ovn_controller[151684]: 2025-10-07T14:07:26Z|00130|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 up in Southbound
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.418 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbe2632-593c-473a-9fe7-8578c2a3e745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.419 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f4de79f-41 in ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.421 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f4de79f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9cb575-528a-4bc9-9595-bbf31f0c0002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b606f1c-e78b-4058-ad15-889035244307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.436 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[581555b7-6774-474d-9ce1-67e5302e5ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.455 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92cc0b9b-17f0-435d-b66b-a4ec1ddaf633]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.497 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b112f4b1-a7ba-4c2e-af5a-b14f385349e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.5064] manager: (tap2f4de79f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75cbfeb1-bad6-4065-b56b-30c69e8e4ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.560 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39b4f7ec-b0b9-49b5-9bd9-b4fb75ae9800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.565 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[53789d76-bfde-48e3-904e-f44cffe4a25d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.5985] device (tap2f4de79f-40): carrier: link connected
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.608 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[89130bee-f947-4882-b964-d6fe6b14f5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2abef8a1-449c-425a-ba50-177093b50b89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4de79f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:21:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663016, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289323, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b233f16f-7ba3-4607-b8dc-8b449adde5f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:2157'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663016, 'tstamp': 663016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289325, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.665 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7878af95-9c02-4f7b-a280-81f171de5a2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4de79f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:21:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663016, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289326, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7a2cba-11c6-498f-829c-b45121fc9543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f171d96-1a04-4ed9-83b3-a5c912d09c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524300082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.822 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4de79f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.822 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.823 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4de79f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:26 compute-0 kernel: tap2f4de79f-40: entered promiscuous mode
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 NetworkManager[44949]: <info>  [1759846046.8275] manager: (tap2f4de79f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.829 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4de79f-40, col_values=(('external_ids', {'iface-id': 'e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 ovn_controller[151684]: 2025-10-07T14:07:26Z|00131|binding|INFO|Releasing lport e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2 from this chassis (sb_readonly=0)
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.848 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4252919-7ff9-4be7-9f65-52cff94157d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.850 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:07:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.851 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'env', 'PROCESS_TAG=haproxy-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.852 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.865 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:26 compute-0 ceph-mon[74295]: pgmap v1206: 305 pgs: 305 active+clean; 163 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.2 MiB/s wr, 238 op/s
Oct 07 14:07:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/524300082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.972 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846046.9668386, 10c5acec-9e20-431b-a467-d54b7acbabfd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.975 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Resumed (Lifecycle Event)
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.982 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.985 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.991 2 INFO nova.virt.libvirt.driver [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance spawned successfully.
Oct 07 14:07:26 compute-0 nova_compute[259550]: 2025-10-07 14:07:26.993 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.000 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.034 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.039 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:27 compute-0 podman[289365]: 2025-10-07 14:07:27.351362365 +0000 UTC m=+0.082754152 container create 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:27 compute-0 podman[289365]: 2025-10-07 14:07:27.298161283 +0000 UTC m=+0.029553090 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:07:27 compute-0 systemd[1]: Started libpod-conmon-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope.
Oct 07 14:07:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed952b4b685f9b30f7b598687d00937742144bbb550bb6c55e216cf5c4aede9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:07:27 compute-0 podman[289365]: 2025-10-07 14:07:27.461198982 +0000 UTC m=+0.192590769 container init 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:07:27 compute-0 podman[289365]: 2025-10-07 14:07:27.467249523 +0000 UTC m=+0.198641310 container start 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 14:07:27 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : New worker (289387) forked
Oct 07 14:07:27 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : Loading success.
Oct 07 14:07:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 181 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.581 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.581 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846046.9694026, 10c5acec-9e20-431b-a467-d54b7acbabfd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Started (Lifecycle Event)
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.593 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.594 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.596 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.597 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.598 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.598 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.623 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.624 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.651 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.656 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.686 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.686 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846047.2509806, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.687 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Started (Lifecycle Event)
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.691 2 INFO nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 5.23 seconds to spawn the instance on the hypervisor.
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.692 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.704 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.710 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846047.2510815, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Paused (Lifecycle Event)
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.740 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.744 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.758 2 INFO nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 6.41 seconds to build instance.
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:27 compute-0 nova_compute[259550]: 2025-10-07 14:07:27.779 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:27 compute-0 ceph-mon[74295]: pgmap v1207: 305 pgs: 305 active+clean; 181 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Oct 07 14:07:28 compute-0 ovn_controller[151684]: 2025-10-07T14:07:28Z|00132|binding|INFO|Releasing lport e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2 from this chassis (sb_readonly=0)
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.567 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.568 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.747 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.749 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.749 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.985 2 DEBUG nova.compute.manager [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.985 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.986 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.987 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.987 2 DEBUG nova.compute.manager [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Processing event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.988 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.992 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846048.992123, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.992 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Resumed (Lifecycle Event)
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.994 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:07:28 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.998 2 INFO nova.virt.libvirt.driver [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance spawned successfully.
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:28.999 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.015 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.031 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.040 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.041 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.042 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.042 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.043 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.044 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.112 2 INFO nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 10.05 seconds to spawn the instance on the hypervisor.
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.114 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.159 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.206 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846034.2016592, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.207 2 INFO nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Stopped (Lifecycle Event)
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.211 2 INFO nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 11.15 seconds to build instance.
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.232 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.234 2 DEBUG nova.compute.manager [None req-1582ccdb-3bc0-4a68-804e-dc0d36ffe8dc - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 190 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 268 op/s
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.800 2 DEBUG nova.objects.instance [None req-2c5560ad-cf6e-4610-9405-b8fb71491818 cd2830ea97e14a6a875d93683bcf1ddb 546157e74cec441b84dcf0074737c31f - - default default] Lazy-loading 'pci_devices' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.827 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846049.8270304, 10c5acec-9e20-431b-a467-d54b7acbabfd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.827 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Paused (Lifecycle Event)
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.850 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.855 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:07:29 compute-0 nova_compute[259550]: 2025-10-07 14:07:29.880 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:07:30 compute-0 podman[289399]: 2025-10-07 14:07:30.154862763 +0000 UTC m=+0.139047327 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 07 14:07:30 compute-0 nova_compute[259550]: 2025-10-07 14:07:30.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 07 14:07:30 compute-0 ceph-mon[74295]: pgmap v1208: 305 pgs: 305 active+clean; 190 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 268 op/s
Oct 07 14:07:30 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 07 14:07:30 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 3.739s CPU time.
Oct 07 14:07:30 compute-0 systemd-machined[214580]: Machine qemu-22-instance-00000013 terminated.
Oct 07 14:07:30 compute-0 nova_compute[259550]: 2025-10-07 14:07:30.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.012 2 DEBUG nova.compute.manager [None req-2c5560ad-cf6e-4610-9405-b8fb71491818 cd2830ea97e14a6a875d93683bcf1ddb 546157e74cec441b84dcf0074737c31f - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.072 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.075 2 INFO nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Terminating instance
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.076 2 DEBUG nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:31 compute-0 kernel: tap78dbc059-18 (unregistering): left promiscuous mode
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.188 2 DEBUG nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.190 2 DEBUG nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.190 2 WARNING nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state active and task_state deleting.
Oct 07 14:07:31 compute-0 NetworkManager[44949]: <info>  [1759846051.1914] device (tap78dbc059-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:07:31 compute-0 ovn_controller[151684]: 2025-10-07T14:07:31Z|00133|binding|INFO|Releasing lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 from this chassis (sb_readonly=0)
Oct 07 14:07:31 compute-0 ovn_controller[151684]: 2025-10-07T14:07:31Z|00134|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 down in Southbound
Oct 07 14:07:31 compute-0 ovn_controller[151684]: 2025-10-07T14:07:31Z|00135|binding|INFO|Removing iface tap78dbc059-18 ovn-installed in OVS
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 07 14:07:31 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000012.scope: Consumed 2.847s CPU time.
Oct 07 14:07:31 compute-0 systemd-machined[214580]: Machine qemu-23-instance-00000012 terminated.
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.329 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a8:72 10.100.0.6'], port_security=['fa:16:3e:8e:a8:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad09d09b-a958-4ea8-8f7f-2304bf5c2ff2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5e58e1-0685-45b5-97d6-a48d03b5304d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78dbc059-1828-41b6-84fa-bf4ac0b31103) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.331 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78dbc059-1828-41b6-84fa-bf4ac0b31103 in datapath 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c unbound from our chassis
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.332 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e25fae2b-6e81-438a-adc0-2625e6944035]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.334 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c namespace which is not needed anymore
Oct 07 14:07:31 compute-0 podman[289435]: 2025-10-07 14:07:31.405017445 +0000 UTC m=+0.078581351 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:07:31 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : haproxy version is 2.8.14-c23fe91
Oct 07 14:07:31 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : path to executable is /usr/sbin/haproxy
Oct 07 14:07:31 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [WARNING]  (289385) : Exiting Master process...
Oct 07 14:07:31 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [ALERT]    (289385) : Current worker (289387) exited with code 143 (Terminated)
Oct 07 14:07:31 compute-0 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [WARNING]  (289385) : All workers exited. Exiting... (0)
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.512 2 INFO nova.virt.libvirt.driver [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance destroyed successfully.
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.512 2 DEBUG nova.objects.instance [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'resources' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:31 compute-0 systemd[1]: libpod-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope: Deactivated successfully.
Oct 07 14:07:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 230 op/s
Oct 07 14:07:31 compute-0 podman[289472]: 2025-10-07 14:07:31.522461964 +0000 UTC m=+0.078386186 container died 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.547 2 DEBUG nova.virt.libvirt.vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:07:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:07:29Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.548 2 DEBUG nova.network.os_vif_util [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.549 2 DEBUG nova.network.os_vif_util [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.549 2 DEBUG os_vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78dbc059-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.566 2 INFO os_vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18')
Oct 07 14:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32-userdata-shm.mount: Deactivated successfully.
Oct 07 14:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-aed952b4b685f9b30f7b598687d00937742144bbb550bb6c55e216cf5c4aede9-merged.mount: Deactivated successfully.
Oct 07 14:07:31 compute-0 podman[289472]: 2025-10-07 14:07:31.637207661 +0000 UTC m=+0.193131883 container cleanup 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:07:31 compute-0 systemd[1]: libpod-conmon-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope: Deactivated successfully.
Oct 07 14:07:31 compute-0 podman[289532]: 2025-10-07 14:07:31.726146489 +0000 UTC m=+0.057042536 container remove 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[243a65fd-22d1-432e-b1ae-39f740ff2779]: (4, ('Tue Oct  7 02:07:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c (76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32)\n76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32\nTue Oct  7 02:07:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c (76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32)\n76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.738 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01448963-c800-42b4-a3e8-d65349e4d211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.740 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4de79f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:07:31 compute-0 kernel: tap2f4de79f-40: left promiscuous mode
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.747 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd595a9-b04e-4a1f-a7ea-f10cb99dac19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 nova_compute[259550]: 2025-10-07 14:07:31.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d503b9a-d71f-4c96-8faa-c6914774105b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a90c18c0-1e20-4903-908a-910b16a9f29b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.806 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a00fdd-4f7c-4b2d-ac6f-b49e5c76b262]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663005, 'reachable_time': 31956, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289548, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d2f4de79f\x2d442a\x2d4f3f\x2db7b5\x2d11fe265c4f7c.mount: Deactivated successfully.
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.809 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:07:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.809 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1a571427-564b-4bef-a79f-a54681e218f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:07:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 07 14:07:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 07 14:07:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.842801) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051842864, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1362, "num_deletes": 251, "total_data_size": 1942667, "memory_usage": 1979456, "flush_reason": "Manual Compaction"}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051860949, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1911230, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24297, "largest_seqno": 25658, "table_properties": {"data_size": 1904831, "index_size": 3538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14062, "raw_average_key_size": 20, "raw_value_size": 1891781, "raw_average_value_size": 2718, "num_data_blocks": 158, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845930, "oldest_key_time": 1759845930, "file_creation_time": 1759846051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18257 microseconds, and 6911 cpu microseconds.
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.861064) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1911230 bytes OK
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.861086) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862853) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862897) EVENT_LOG_v1 {"time_micros": 1759846051862887, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1936530, prev total WAL file size 1936530, number of live WAL files 2.
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.863981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1866KB)], [56(6890KB)]
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051864013, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8967292, "oldest_snapshot_seqno": -1}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4725 keys, 7222282 bytes, temperature: kUnknown
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051917657, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7222282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7190952, "index_size": 18420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118457, "raw_average_key_size": 25, "raw_value_size": 7105868, "raw_average_value_size": 1503, "num_data_blocks": 760, "num_entries": 4725, "num_filter_entries": 4725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.918019) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7222282 bytes
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.920101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.9 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.7 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 5243, records dropped: 518 output_compression: NoCompression
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.920144) EVENT_LOG_v1 {"time_micros": 1759846051920122, "job": 30, "event": "compaction_finished", "compaction_time_micros": 53742, "compaction_time_cpu_micros": 17967, "output_level": 6, "num_output_files": 1, "total_output_size": 7222282, "num_input_records": 5243, "num_output_records": 4725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051920802, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051922230, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.863879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.121 2 INFO nova.virt.libvirt.driver [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deleting instance files /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_del
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.122 2 INFO nova.virt.libvirt.driver [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deletion of /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_del complete
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001440965439213388 of space, bias 1.0, pg target 0.4322896317640164 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.283 2 INFO nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 1.21 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG oslo.service.loopingcall [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG nova.network.neutron [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.496 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.496 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.497 2 INFO nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Terminating instance
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.497 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.498 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquired lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.498 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:07:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:07:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:07:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:07:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:07:32 compute-0 nova_compute[259550]: 2025-10-07 14:07:32.719 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:07:32 compute-0 ceph-mon[74295]: pgmap v1209: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 230 op/s
Oct 07 14:07:32 compute-0 ceph-mon[74295]: osdmap e135: 3 total, 3 up, 3 in
Oct 07 14:07:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:07:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.004 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.108 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Releasing lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.109 2 DEBUG nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.120 2 INFO nova.virt.libvirt.driver [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance destroyed successfully.
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.121 2 DEBUG nova.objects.instance [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'resources' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.277 2 DEBUG nova.network.neutron [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.404 2 INFO nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 1.12 seconds to deallocate network for instance.
Oct 07 14:07:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 MiB/s wr, 218 op/s
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.658 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.658 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.694 2 INFO nova.virt.libvirt.driver [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deleting instance files /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd_del
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.695 2 INFO nova.virt.libvirt.driver [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deletion of /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd_del complete
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.729 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846038.7282531, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.730 2 INFO nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Stopped (Lifecycle Event)
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.837 2 DEBUG nova.compute.manager [None req-fc288ad5-eb1b-41ae-9ee1-2b4b946a7b6d - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:33 compute-0 ceph-mon[74295]: pgmap v1211: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 MiB/s wr, 218 op/s
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.880 2 INFO nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.880 2 DEBUG oslo.service.loopingcall [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.881 2 DEBUG nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.881 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.944 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.946 2 WARNING nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state deleted and task_state None.
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.947 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.947 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:07:33 compute-0 nova_compute[259550]: 2025-10-07 14:07:33.947 2 WARNING nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state deleted and task_state None.
Oct 07 14:07:34 compute-0 nova_compute[259550]: 2025-10-07 14:07:34.663 2 DEBUG oslo_concurrency.processutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:34 compute-0 nova_compute[259550]: 2025-10-07 14:07:34.713 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:07:34 compute-0 nova_compute[259550]: 2025-10-07 14:07:34.732 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:34 compute-0 nova_compute[259550]: 2025-10-07 14:07:34.749 2 INFO nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 0.87 seconds to deallocate network for instance.
Oct 07 14:07:34 compute-0 nova_compute[259550]: 2025-10-07 14:07:34.805 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849188411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.121 2 DEBUG oslo_concurrency.processutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.128 2 DEBUG nova.compute.provider_tree [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.145 2 DEBUG nova.scheduler.client.report [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3849188411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.168 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.172 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.217 2 INFO nova.scheduler.client.report [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Deleted allocations for instance dfe85d14-0395-4f7f-8bc1-f536aefe2ffc
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.274 2 DEBUG oslo_concurrency.processutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.313 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 145 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 325 op/s
Oct 07 14:07:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:07:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051418325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.739 2 DEBUG oslo_concurrency.processutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.746 2 DEBUG nova.compute.provider_tree [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.759 2 DEBUG nova.scheduler.client.report [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.782 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.807 2 INFO nova.scheduler.client.report [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Deleted allocations for instance 10c5acec-9e20-431b-a467-d54b7acbabfd
Oct 07 14:07:35 compute-0 nova_compute[259550]: 2025-10-07 14:07:35.884 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.019 2 DEBUG nova.compute.manager [req-3a64622f-0f2b-4f6a-966a-094d0bc91994 req-4561319d-52f7-46d6-85d9-4012f4495b2b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-deleted-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.146 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.149 2 INFO nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Terminating instance
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquired lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:07:36 compute-0 ceph-mon[74295]: pgmap v1212: 305 pgs: 305 active+clean; 145 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 325 op/s
Oct 07 14:07:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3051418325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.359 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.589 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.784 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Releasing lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:07:36 compute-0 nova_compute[259550]: 2025-10-07 14:07:36.785 2 DEBUG nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:07:37 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 07 14:07:37 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000011.scope: Consumed 13.721s CPU time.
Oct 07 14:07:37 compute-0 systemd-machined[214580]: Machine qemu-21-instance-00000011 terminated.
Oct 07 14:07:37 compute-0 nova_compute[259550]: 2025-10-07 14:07:37.235 2 INFO nova.virt.libvirt.driver [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance destroyed successfully.
Oct 07 14:07:37 compute-0 nova_compute[259550]: 2025-10-07 14:07:37.236 2 DEBUG nova.objects.instance [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'resources' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 121 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 329 op/s
Oct 07 14:07:38 compute-0 nova_compute[259550]: 2025-10-07 14:07:38.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:38 compute-0 ceph-mon[74295]: pgmap v1213: 305 pgs: 305 active+clean; 121 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 329 op/s
Oct 07 14:07:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 99 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 234 op/s
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.670 2 INFO nova.virt.libvirt.driver [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deleting instance files /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576_del
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.671 2 INFO nova.virt.libvirt.driver [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deletion of /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576_del complete
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.790 2 INFO nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 4.00 seconds to destroy the instance on the hypervisor.
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG oslo.service.loopingcall [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:07:40 compute-0 ceph-mon[74295]: pgmap v1214: 305 pgs: 305 active+clean; 99 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 234 op/s
Oct 07 14:07:40 compute-0 nova_compute[259550]: 2025-10-07 14:07:40.992 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.011 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.025 2 INFO nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 0.23 seconds to deallocate network for instance.
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.095 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.095 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.168 2 DEBUG oslo_concurrency.processutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 113 KiB/s wr, 202 op/s
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087273953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.681 2 DEBUG oslo_concurrency.processutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.690 2 DEBUG nova.compute.provider_tree [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.716 2 DEBUG nova.scheduler.client.report [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.734 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.758 2 INFO nova.scheduler.client.report [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Deleted allocations for instance 809b049f-447c-4cdd-b8d2-8325f6d3b576
Oct 07 14:07:41 compute-0 nova_compute[259550]: 2025-10-07 14:07:41.818 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:41 compute-0 ceph-mon[74295]: pgmap v1215: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 113 KiB/s wr, 202 op/s
Oct 07 14:07:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1087273953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 96 KiB/s wr, 173 op/s
Oct 07 14:07:44 compute-0 ceph-mon[74295]: pgmap v1216: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 96 KiB/s wr, 173 op/s
Oct 07 14:07:45 compute-0 nova_compute[259550]: 2025-10-07 14:07:45.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 95 KiB/s wr, 175 op/s
Oct 07 14:07:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.015 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846051.0126994, 10c5acec-9e20-431b-a467-d54b7acbabfd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.016 2 INFO nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Stopped (Lifecycle Event)
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.057 2 DEBUG nova.compute.manager [None req-03a2cf04-50d5-4acf-b6d7-c9d360356f63 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.510 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846051.508801, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.510 2 INFO nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Stopped (Lifecycle Event)
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.536 2 DEBUG nova.compute.manager [None req-b0912971-47bf-4c3f-b365-d27e13dc5efb - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:46 compute-0 nova_compute[259550]: 2025-10-07 14:07:46.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:46 compute-0 ceph-mon[74295]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 95 KiB/s wr, 175 op/s
Oct 07 14:07:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 48 KiB/s wr, 45 op/s
Oct 07 14:07:48 compute-0 ceph-mon[74295]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 48 KiB/s wr, 45 op/s
Oct 07 14:07:49 compute-0 podman[289659]: 2025-10-07 14:07:49.091072645 +0000 UTC m=+0.072173291 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:07:49 compute-0 podman[289658]: 2025-10-07 14:07:49.091173707 +0000 UTC m=+0.072521799 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:07:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:07:50 compute-0 nova_compute[259550]: 2025-10-07 14:07:50.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:07:50 compute-0 ceph-mon[74295]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:07:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Oct 07 14:07:51 compute-0 nova_compute[259550]: 2025-10-07 14:07:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 07 14:07:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 07 14:07:51 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 07 14:07:52 compute-0 nova_compute[259550]: 2025-10-07 14:07:52.234 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846057.2324383, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:07:52 compute-0 nova_compute[259550]: 2025-10-07 14:07:52.235 2 INFO nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Stopped (Lifecycle Event)
Oct 07 14:07:52 compute-0 nova_compute[259550]: 2025-10-07 14:07:52.267 2 DEBUG nova.compute.manager [None req-cd2281ac-64f8-4554-99e5-c585c87c9d99 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:07:52 compute-0 ceph-mon[74295]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Oct 07 14:07:52 compute-0 ceph-mon[74295]: osdmap e136: 3 total, 3 up, 3 in
Oct 07 14:07:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.1 KiB/s wr, 8 op/s
Oct 07 14:07:54 compute-0 ceph-mon[74295]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.1 KiB/s wr, 8 op/s
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.600 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.601 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.621 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.706 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.707 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.715 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.715 2 INFO nova.compute.claims [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:07:55 compute-0 nova_compute[259550]: 2025-10-07 14:07:55.821 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:56.236 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:07:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:07:56.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:07:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487533641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.352 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.360 2 DEBUG nova.compute.provider_tree [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.380 2 DEBUG nova.scheduler.client.report [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.402 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.403 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.449 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.449 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.475 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.499 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.631 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.632 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.633 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating image(s)
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.659 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.684 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.709 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.715 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.754 2 DEBUG nova.policy [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b1ccfdaee9324154bed6828c0fa32e6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63a8d182eca84056a1214aff59d1a164', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:07:56 compute-0 ceph-mon[74295]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 14:07:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1487533641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.793 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.794 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.795 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.795 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.821 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:07:56 compute-0 nova_compute[259550]: 2025-10-07 14:07:56.826 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.279 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.340 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] resizing rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.459 2 DEBUG nova.objects.instance [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.483 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.484 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Ensure instance console log exists: /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.485 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.486 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:07:57 compute-0 nova_compute[259550]: 2025-10-07 14:07:57.487 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:07:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 14:07:58 compute-0 nova_compute[259550]: 2025-10-07 14:07:58.167 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Successfully created port: d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:07:58 compute-0 ceph-mon[74295]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 14:07:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 68 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.041 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.041 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.222 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Successfully updated port: d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.254 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.255 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.255 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.483 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:00 compute-0 nova_compute[259550]: 2025-10-07 14:08:00.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 07 14:08:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 07 14:08:00 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 07 14:08:00 compute-0 ceph-mon[74295]: pgmap v1225: 305 pgs: 305 active+clean; 68 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:08:00 compute-0 ceph-mon[74295]: osdmap e137: 3 total, 3 up, 3 in
Oct 07 14:08:01 compute-0 podman[289887]: 2025-10-07 14:08:01.096556921 +0000 UTC m=+0.089682278 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 07 14:08:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Oct 07 14:08:01 compute-0 nova_compute[259550]: 2025-10-07 14:08:01.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:01 compute-0 nova_compute[259550]: 2025-10-07 14:08:01.650 2 DEBUG nova.compute.manager [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-changed-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:01 compute-0 nova_compute[259550]: 2025-10-07 14:08:01.651 2 DEBUG nova.compute.manager [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Refreshing instance network info cache due to event network-changed-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:08:01 compute-0 nova_compute[259550]: 2025-10-07 14:08:01.651 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.066 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:02 compute-0 podman[289915]: 2025-10-07 14:08:02.081043783 +0000 UTC m=+0.061380232 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.239 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.239 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance network_info: |[{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.240 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.240 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Refreshing network info cache for port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.242 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start _get_guest_xml network_info=[{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.246 2 WARNING nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.251 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.252 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.256 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.261 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.676 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.677 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523630697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.750 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.771 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.775 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:02 compute-0 ceph-mon[74295]: pgmap v1227: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Oct 07 14:08:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1523630697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:02 compute-0 nova_compute[259550]: 2025-10-07 14:08:02.878 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.103 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.104 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.113 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.113 2 INFO nova.compute.claims [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:08:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694626128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:03.239 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.251 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.253 2 DEBUG nova.virt.libvirt.vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneServerTestJSON
-961060000-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:56Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.253 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.254 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.255 2 DEBUG nova.objects.instance [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.409 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <uuid>4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</uuid>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <name>instance-00000014</name>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesOneServerTestJSON-server-370744096</nova:name>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:08:02</nova:creationTime>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:user uuid="b1ccfdaee9324154bed6828c0fa32e6d">tempest-ImagesOneServerTestJSON-961060000-project-member</nova:user>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:project uuid="63a8d182eca84056a1214aff59d1a164">tempest-ImagesOneServerTestJSON-961060000</nova:project>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <nova:port uuid="d23018fc-ec2d-4a03-8e09-88c7ecb34f8b">
Oct 07 14:08:03 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="serial">4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="uuid">4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk">
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config">
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:84:73:3d"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <target dev="tapd23018fc-ec"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/console.log" append="off"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:08:03 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:08:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:08:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:08:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:08:03 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.411 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Preparing to wait for external event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.413 2 DEBUG nova.virt.libvirt.vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneServ
erTestJSON-961060000-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:56Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.413 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.414 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.415 2 DEBUG os_vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd23018fc-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd23018fc-ec, col_values=(('external_ids', {'iface-id': 'd23018fc-ec2d-4a03-8e09-88c7ecb34f8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:73:3d', 'vm-uuid': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:03 compute-0 NetworkManager[44949]: <info>  [1759846083.4263] manager: (tapd23018fc-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.433 2 INFO os_vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec')
Oct 07 14:08:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.692 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updated VIF entry in instance network info cache for port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.693 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.698 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.699 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.699 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No VIF found with MAC fa:16:3e:84:73:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.700 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Using config drive
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.723 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.733 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:03 compute-0 nova_compute[259550]: 2025-10-07 14:08:03.762 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3694626128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253482312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.292 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.299 2 DEBUG nova.compute.provider_tree [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.333 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating config drive at /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.337 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j_sxebh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.396 2 DEBUG nova.scheduler.client.report [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.471 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j_sxebh" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.501 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.506 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.590 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.592 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.676 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.677 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deleting local config drive /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config because it was imported into RBD.
Oct 07 14:08:04 compute-0 kernel: tapd23018fc-ec: entered promiscuous mode
Oct 07 14:08:04 compute-0 NetworkManager[44949]: <info>  [1759846084.7464] manager: (tapd23018fc-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:04 compute-0 ovn_controller[151684]: 2025-10-07T14:08:04Z|00136|binding|INFO|Claiming lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for this chassis.
Oct 07 14:08:04 compute-0 ovn_controller[151684]: 2025-10-07T14:08:04Z|00137|binding|INFO|d23018fc-ec2d-4a03-8e09-88c7ecb34f8b: Claiming fa:16:3e:84:73:3d 10.100.0.3
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:04 compute-0 systemd-udevd[290092]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:08:04 compute-0 systemd-machined[214580]: New machine qemu-24-instance-00000014.
Oct 07 14:08:04 compute-0 NetworkManager[44949]: <info>  [1759846084.7963] device (tapd23018fc-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:08:04 compute-0 NetworkManager[44949]: <info>  [1759846084.7972] device (tapd23018fc-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:08:04 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000014.
Oct 07 14:08:04 compute-0 ceph-mon[74295]: pgmap v1228: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 07 14:08:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3253482312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:04 compute-0 ovn_controller[151684]: 2025-10-07T14:08:04Z|00138|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b ovn-installed in OVS
Oct 07 14:08:04 compute-0 nova_compute[259550]: 2025-10-07 14:08:04.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 ovn_controller[151684]: 2025-10-07T14:08:05Z|00139|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b up in Southbound
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.002 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:3d 10.100.0.3'], port_security=['fa:16:3e:84:73:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63a8d182eca84056a1214aff59d1a164', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee4342df-374f-4e89-b1cb-d9f5b15e7a74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70fa1751-9080-486f-a255-eba81ea8e3da, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.004 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b in datapath a90af50e-9409-4ab3-b31a-0927ca38c12d bound to our chassis
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.005 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a90af50e-9409-4ab3-b31a-0927ca38c12d
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62455790-6035-4078-a832-dd24d2fd7d1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.023 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa90af50e-91 in ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa90af50e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f405dc8-e0fd-4672-81f4-a6530edd90a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc76ec8-5938-4415-9c04-f89ee6eed11d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.042 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cde663f4-6655-4788-ac47-fccf062647c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.063 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68d85992-2a58-41ac-9327-97f4adfa11f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.103 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83e07109-d8e8-4ab4-a75e-9fbecf74ddcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 NetworkManager[44949]: <info>  [1759846085.1118] manager: (tapa90af50e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df9fc7f0-bbcc-4dc0-956b-dc1d9d7c6e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.117 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.117 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.152 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1338341f-971d-403b-98fb-cfe04f4f4f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.157 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[47e39742-5fe4-4c11-86b8-dffbbf0945e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 NetworkManager[44949]: <info>  [1759846085.1838] device (tapa90af50e-90): carrier: link connected
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.193 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b90283-687c-412a-bfc5-0afa411dce62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a581a84-4f46-4909-9109-049c00836e02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa90af50e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:b8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666874, 'reachable_time': 15680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290125, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[808db8a1-9f8d-4937-a879-d54bbf3969f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:b8ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666874, 'tstamp': 666874}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290126, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0446172e-4bb7-44e2-aef4-d1dfa9930348]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa90af50e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:b8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 110, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 110, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666874, 'reachable_time': 15680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290128, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.299 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37dcc860-14a8-4ad8-acf7-0cb1a9678f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.313 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61d58030-9c80-49c2-87d0-8129cf4bf583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa90af50e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa90af50e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:05 compute-0 kernel: tapa90af50e-90: entered promiscuous mode
Oct 07 14:08:05 compute-0 NetworkManager[44949]: <info>  [1759846085.3885] manager: (tapa90af50e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.391 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa90af50e-90, col_values=(('external_ids', {'iface-id': '9191c21b-68d0-487b-8ce1-9baafe080d13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 ovn_controller[151684]: 2025-10-07T14:08:05Z|00140|binding|INFO|Releasing lport 9191c21b-68d0-487b-8ce1-9baafe080d13 from this chassis (sb_readonly=0)
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.409 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.410 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e13f88-42ce-4532-8bae-1fc1e3725a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.411 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-a90af50e-9409-4ab3-b31a-0927ca38c12d
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID a90af50e-9409-4ab3-b31a-0927ca38c12d
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:08:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.412 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'env', 'PROCESS_TAG=haproxy-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a90af50e-9409-4ab3-b31a-0927ca38c12d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.447 2 DEBUG nova.policy [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.634 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:08:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846085.8576436, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.860 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Started (Lifecycle Event)
Oct 07 14:08:05 compute-0 podman[290202]: 2025-10-07 14:08:05.901070278 +0000 UTC m=+0.106918269 container create 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:08:05 compute-0 podman[290202]: 2025-10-07 14:08:05.821658095 +0000 UTC m=+0.027506106 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:08:05 compute-0 ceph-mon[74295]: pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.959 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.968 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846085.8589928, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Paused (Lifecycle Event)
Oct 07 14:08:05 compute-0 systemd[1]: Started libpod-conmon-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope.
Oct 07 14:08:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.994 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e91291afc6a80e340c9e03107adfd8c1875ed746c7741b6e4d627197ed67bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:05 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.997 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:05.999 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating image(s)
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.026 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:06 compute-0 podman[290202]: 2025-10-07 14:08:06.045834407 +0000 UTC m=+0.251682418 container init 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:08:06 compute-0 podman[290202]: 2025-10-07 14:08:06.052854165 +0000 UTC m=+0.258702156 container start 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.058 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:06 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : New worker (290274) forked
Oct 07 14:08:06 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : Loading success.
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.090 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.096 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.129 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.135 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.169 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.181 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.182 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.183 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.183 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.210 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.216 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.648 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.717 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.826 2 DEBUG nova.objects.instance [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.907 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.907 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Ensure instance console log exists: /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:06 compute-0 nova_compute[259550]: 2025-10-07 14:08:06.909 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.682 2 DEBUG nova.compute.manager [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG nova.compute.manager [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Processing event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.684 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.690 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846087.6902676, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.691 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Resumed (Lifecycle Event)
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.696 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.701 2 INFO nova.virt.libvirt.driver [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance spawned successfully.
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.702 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.724 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Successfully created port: 95414c38-8a03-41fc-a460-bb80d2febc10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.915 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.928 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:07 compute-0 nova_compute[259550]: 2025-10-07 14:08:07.928 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.266 2 INFO nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 11.63 seconds to spawn the instance on the hypervisor.
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.267 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.434 2 INFO nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 12.76 seconds to build instance.
Oct 07 14:08:08 compute-0 nova_compute[259550]: 2025-10-07 14:08:08.501 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:08 compute-0 ceph-mon[74295]: pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 07 14:08:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 124 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:09 compute-0 nova_compute[259550]: 2025-10-07 14:08:09.807 2 WARNING nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received unexpected event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with vm_state active and task_state None.
Oct 07 14:08:10 compute-0 nova_compute[259550]: 2025-10-07 14:08:10.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:10 compute-0 ceph-mon[74295]: pgmap v1231: 305 pgs: 305 active+clean; 124 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 07 14:08:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:10 compute-0 nova_compute[259550]: 2025-10-07 14:08:10.841 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Successfully updated port: 95414c38-8a03-41fc-a460-bb80d2febc10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:08:10 compute-0 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:10 compute-0 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:10 compute-0 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:08:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Oct 07 14:08:11 compute-0 nova_compute[259550]: 2025-10-07 14:08:11.704 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:11 compute-0 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG nova.compute.manager [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-changed-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:11 compute-0 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG nova.compute.manager [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Refreshing instance network info cache due to event network-changed-95414c38-8a03-41fc-a460-bb80d2febc10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:08:11 compute-0 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:12 compute-0 ceph-mon[74295]: pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Oct 07 14:08:12 compute-0 nova_compute[259550]: 2025-10-07 14:08:12.891 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:13 compute-0 nova_compute[259550]: 2025-10-07 14:08:13.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 07 14:08:14 compute-0 ceph-mon[74295]: pgmap v1233: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 07 14:08:15 compute-0 nova_compute[259550]: 2025-10-07 14:08:15.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:08:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.290 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance network_info: |[{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Refreshing network info cache for port 95414c38-8a03-41fc-a460-bb80d2febc10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.294 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start _get_guest_xml network_info=[{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.299 2 WARNING nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.310 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.311 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.328 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.329 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.329 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.330 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.333 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.333 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.334 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.334 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.342 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.378 2 DEBUG nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.467 2 INFO nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] instance snapshotting
Oct 07 14:08:16 compute-0 ceph-mon[74295]: pgmap v1234: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:08:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466957009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.818 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.843 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:16 compute-0 nova_compute[259550]: 2025-10-07 14:08:16.849 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.061 2 INFO nova.virt.libvirt.driver [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Beginning live snapshot process
Oct 07 14:08:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628064836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.306 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.309 2 DEBUG nova.virt.libvirt.vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,t
ask_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:05Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.310 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.311 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.312 2 DEBUG nova.objects.instance [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.405 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <uuid>cd77c7c3-e287-4a6a-b2b6-61655f604ec2</uuid>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <name>instance-00000015</name>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-1004114679</nova:name>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:08:16</nova:creationTime>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <nova:port uuid="95414c38-8a03-41fc-a460-bb80d2febc10">
Oct 07 14:08:17 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="serial">cd77c7c3-e287-4a6a-b2b6-61655f604ec2</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="uuid">cd77c7c3-e287-4a6a-b2b6-61655f604ec2</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk">
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config">
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:34:1c:1f"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <target dev="tap95414c38-8a"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/console.log" append="off"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:08:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:08:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:08:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:08:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:08:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.412 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Preparing to wait for external event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.413 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.414 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.414 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.415 2 DEBUG nova.virt.libvirt.vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:05Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.415 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.416 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.417 2 DEBUG os_vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95414c38-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95414c38-8a, col_values=(('external_ids', {'iface-id': '95414c38-8a03-41fc-a460-bb80d2febc10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:1c:1f', 'vm-uuid': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:17 compute-0 NetworkManager[44949]: <info>  [1759846097.4284] manager: (tap95414c38-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.437 2 INFO os_vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a')
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.512 2 DEBUG nova.virt.libvirt.imagebackend [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:08:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.658 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.659 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.659 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:34:1c:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.660 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Using config drive
Oct 07 14:08:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3466957009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/628064836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:17 compute-0 nova_compute[259550]: 2025-10-07 14:08:17.698 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:18 compute-0 nova_compute[259550]: 2025-10-07 14:08:18.627 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(17284dfc40ab47889b7e2dd00785c4c6) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 07 14:08:18 compute-0 ceph-mon[74295]: pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 07 14:08:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 07 14:08:18 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 07 14:08:18 compute-0 nova_compute[259550]: 2025-10-07 14:08:18.764 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] cloning vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk@17284dfc40ab47889b7e2dd00785c4c6 to images/939a30ca-ae15-4718-a461-e26825722fa1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:08:18 compute-0 nova_compute[259550]: 2025-10-07 14:08:18.905 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] flattening images/939a30ca-ae15-4718-a461-e26825722fa1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:08:18 compute-0 nova_compute[259550]: 2025-10-07 14:08:18.949 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating config drive at /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config
Oct 07 14:08:18 compute-0 nova_compute[259550]: 2025-10-07 14:08:18.956 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qpj939t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.093 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qpj939t" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.127 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.131 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.263 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] removing snapshot(17284dfc40ab47889b7e2dd00785c4c6) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.334 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.334 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deleting local config drive /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config because it was imported into RBD.
Oct 07 14:08:19 compute-0 kernel: tap95414c38-8a: entered promiscuous mode
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 ovn_controller[151684]: 2025-10-07T14:08:19Z|00141|binding|INFO|Claiming lport 95414c38-8a03-41fc-a460-bb80d2febc10 for this chassis.
Oct 07 14:08:19 compute-0 ovn_controller[151684]: 2025-10-07T14:08:19Z|00142|binding|INFO|95414c38-8a03-41fc-a460-bb80d2febc10: Claiming fa:16:3e:34:1c:1f 10.100.0.11
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.3939] manager: (tap95414c38-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 systemd-machined[214580]: New machine qemu-25-instance-00000015.
Oct 07 14:08:19 compute-0 systemd-udevd[290679]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.4777] device (tap95414c38-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.4792] device (tap95414c38-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:08:19 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000015.
Oct 07 14:08:19 compute-0 ovn_controller[151684]: 2025-10-07T14:08:19Z|00143|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 ovn-installed in OVS
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 podman[290658]: 2025-10-07 14:08:19.506049833 +0000 UTC m=+0.080336469 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:08:19 compute-0 ovn_controller[151684]: 2025-10-07T14:08:19Z|00144|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 up in Southbound
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.520 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:1c:1f 10.100.0.11'], port_security=['fa:16:3e:34:1c:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=95414c38-8a03-41fc-a460-bb80d2febc10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.521 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 95414c38-8a03-41fc-a460-bb80d2febc10 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.523 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 145 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 922 KiB/s wr, 60 op/s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.538 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f4efd6-39fe-4cd2-88e9-a8a857be7942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.540 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:08:19 compute-0 podman[290657]: 2025-10-07 14:08:19.542660361 +0000 UTC m=+0.119766412 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.542 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d265f8d-850b-48f7-9627-8b76b32adb85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d901a9f2-f090-44c8-947c-54bb76a4c67c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.559 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb3f793-6775-4eff-8821-b2e64d3276df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1722bbc5-65ee-44b9-ac89-b69059d8b89f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3172a22b-5193-403a-ae5b-866b36e0cc99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.626 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d727be-3daf-4777-8d4b-0f44c066206c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.6279] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.676 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ecac6b58-4ed0-4494-83ae-e5d6cbceb49d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.679 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fd097ff1-6e7c-4111-b047-468ea41c4e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 07 14:08:19 compute-0 ceph-mon[74295]: osdmap e138: 3 total, 3 up, 3 in
Oct 07 14:08:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 07 14:08:19 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.7160] device (tap9f80456d-d0): carrier: link connected
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.725 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[da0279d8-291b-4798-b5f5-60f6b246efa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6b6c79-0dd1-4a35-8ccf-5e3aa4ab6936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668328, 'reachable_time': 34850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290737, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.753 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(snap) on rbd image(939a30ca-ae15-4718-a461-e26825722fa1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d8489-c5e0-4f92-8e52-710772998988]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668328, 'tstamp': 668328}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290747, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.818 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6739ea-c19b-463d-bfc6-cedea18c40a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668328, 'reachable_time': 34850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290765, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c67761-80ae-4938-92ac-54ff46e4429a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.949 2 DEBUG nova.compute.manager [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.950 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.952 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.952 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.953 2 DEBUG nova.compute.manager [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Processing event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.952 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70005f73-7c0a-4c00-b674-e1b5ab36566d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.955 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.955 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 NetworkManager[44949]: <info>  [1759846099.9592] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 07 14:08:19 compute-0 kernel: tap9f80456d-d0: entered promiscuous mode
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.962 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 ovn_controller[151684]: 2025-10-07T14:08:19Z|00145|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.983 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:08:19 compute-0 nova_compute[259550]: 2025-10-07 14:08:19.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98b45210-5c0f-45b4-9c10-ba450b97437d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.987 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:08:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.988 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.015 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updated VIF entry in instance network info cache for port 95414c38-8a03-41fc-a460-bb80d2febc10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.015 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.064 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:20 compute-0 podman[290823]: 2025-10-07 14:08:20.443259791 +0000 UTC m=+0.061614078 container create 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:08:20 compute-0 systemd[1]: Started libpod-conmon-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope.
Oct 07 14:08:20 compute-0 podman[290823]: 2025-10-07 14:08:20.410337451 +0000 UTC m=+0.028691688 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6de4f7c968bc67535eb773c8b2db1d245a72f57212d42fe527c2c5ef20c1fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.558 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.5575027, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.558 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Started (Lifecycle Event)
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.561 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:08:20 compute-0 podman[290823]: 2025-10-07 14:08:20.564860671 +0000 UTC m=+0.183214908 container init 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.569 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.574 2 INFO nova.virt.libvirt.driver [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance spawned successfully.
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.574 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:08:20 compute-0 podman[290823]: 2025-10-07 14:08:20.576112821 +0000 UTC m=+0.194467028 container start 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:08:20 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : New worker (290844) forked
Oct 07 14:08:20 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : Loading success.
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.631 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.636 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.648 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.648 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.650 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.710 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.711 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.5576746, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Paused (Lifecycle Event)
Oct 07 14:08:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 07 14:08:20 compute-0 ceph-mon[74295]: pgmap v1237: 305 pgs: 305 active+clean; 145 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 922 KiB/s wr, 60 op/s
Oct 07 14:08:20 compute-0 ceph-mon[74295]: osdmap e139: 3 total, 3 up, 3 in
Oct 07 14:08:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 07 14:08:20 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.761 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.764 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.56942, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Resumed (Lifecycle Event)
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.791 2 INFO nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 14.80 seconds to spawn the instance on the hypervisor.
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.792 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.817 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.822 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:20 compute-0 nova_compute[259550]: 2025-10-07 14:08:20.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:21 compute-0 nova_compute[259550]: 2025-10-07 14:08:21.015 2 INFO nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 17.94 seconds to build instance.
Oct 07 14:08:21 compute-0 nova_compute[259550]: 2025-10-07 14:08:21.093 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:21 compute-0 ovn_controller[151684]: 2025-10-07T14:08:21Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:73:3d 10.100.0.3
Oct 07 14:08:21 compute-0 ovn_controller[151684]: 2025-10-07T14:08:21Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:73:3d 10.100.0.3
Oct 07 14:08:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct 07 14:08:21 compute-0 ceph-mon[74295]: osdmap e140: 3 total, 3 up, 3 in
Oct 07 14:08:21 compute-0 nova_compute[259550]: 2025-10-07 14:08:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.126 2 DEBUG nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.126 2 WARNING nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received unexpected event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with vm_state active and task_state None.
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:08:22
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:22 compute-0 ceph-mon[74295]: pgmap v1240: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:08:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.909 2 INFO nova.virt.libvirt.driver [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Snapshot image upload complete
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.910 2 INFO nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 6.44 seconds to snapshot the instance on the hypervisor.
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:22 compute-0 nova_compute[259550]: 2025-10-07 14:08:22.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:08:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct 07 14:08:23 compute-0 nova_compute[259550]: 2025-10-07 14:08:23.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:23 compute-0 nova_compute[259550]: 2025-10-07 14:08:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:23 compute-0 nova_compute[259550]: 2025-10-07 14:08:23.996 2 INFO nova.compute.manager [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Pausing
Oct 07 14:08:23 compute-0 nova_compute[259550]: 2025-10-07 14:08:23.997 2 DEBUG nova.objects.instance [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'flavor' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.097 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846104.096981, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.097 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Paused (Lifecycle Event)
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.099 2 DEBUG nova.compute.manager [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:24 compute-0 ceph-mon[74295]: pgmap v1241: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.160 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.164 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:24 compute-0 nova_compute[259550]: 2025-10-07 14:08:24.296 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 07 14:08:24 compute-0 sudo[290853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:24 compute-0 sudo[290853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:24 compute-0 sudo[290853]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:24 compute-0 sudo[290878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:08:24 compute-0 sudo[290878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:24 compute-0 sudo[290878]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:24 compute-0 sudo[290903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:24 compute-0 sudo[290903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:24 compute-0 sudo[290903]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:24 compute-0 sudo[290928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:08:24 compute-0 sudo[290928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:25 compute-0 sudo[290928]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:25 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d8c1e408-c0e4-45c7-a799-d15171f1ce7c does not exist
Oct 07 14:08:25 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 38f95525-453d-462d-9dd5-8431b6bb10af does not exist
Oct 07 14:08:25 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cd22e9d2-d90e-4e5c-b15e-326d1b8132e0 does not exist
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:08:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:08:25 compute-0 nova_compute[259550]: 2025-10-07 14:08:25.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.9 MiB/s wr, 365 op/s
Oct 07 14:08:25 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:08:25 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:08:25 compute-0 sudo[290984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:25 compute-0 sudo[290984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:25 compute-0 sudo[290984]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:25 compute-0 sudo[291009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:08:25 compute-0 sudo[291009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:25 compute-0 sudo[291009]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:25 compute-0 sudo[291034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:25 compute-0 sudo[291034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:25 compute-0 sudo[291034]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:25 compute-0 sudo[291059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:08:25 compute-0 sudo[291059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:25 compute-0 nova_compute[259550]: 2025-10-07 14:08:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.024 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.024 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.108029111 +0000 UTC m=+0.024365152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.263129306 +0000 UTC m=+0.179465367 container create d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:08:26 compute-0 systemd[1]: Started libpod-conmon-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope.
Oct 07 14:08:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.413054663 +0000 UTC m=+0.329390704 container init d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.424393146 +0000 UTC m=+0.340729167 container start d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:08:26 compute-0 sharp_nobel[291160]: 167 167
Oct 07 14:08:26 compute-0 systemd[1]: libpod-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope: Deactivated successfully.
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.442772817 +0000 UTC m=+0.359108858 container attach d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.444301658 +0000 UTC m=+0.360637679 container died d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:08:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641932959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.481 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 07 14:08:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:08:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:08:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:08:26 compute-0 ceph-mon[74295]: pgmap v1242: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.9 MiB/s wr, 365 op/s
Oct 07 14:08:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/641932959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9288babebde297eabeb9931ae59c183df85c8a19ad217975be28220c1dd501c6-merged.mount: Deactivated successfully.
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.622 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.626 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.627 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:08:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.649 2 DEBUG nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:26 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.699 2 INFO nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] instance snapshotting
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.699 2 WARNING nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct 07 14:08:26 compute-0 podman[291125]: 2025-10-07 14:08:26.74146528 +0000 UTC m=+0.657801291 container remove d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:08:26 compute-0 systemd[1]: libpod-conmon-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope: Deactivated successfully.
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.835 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.837 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4141MB free_disk=59.93803405761719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.838 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.840 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:26 compute-0 podman[291190]: 2025-10-07 14:08:26.952573202 +0000 UTC m=+0.061887755 container create 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.970 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cd77c7c3-e287-4a6a-b2b6-61655f604ec2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:08:26 compute-0 nova_compute[259550]: 2025-10-07 14:08:26.992 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.000 2 INFO nova.virt.libvirt.driver [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Beginning live snapshot process
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.014 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.014 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:08:27 compute-0 podman[291190]: 2025-10-07 14:08:26.920540236 +0000 UTC m=+0.029854809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:27 compute-0 systemd[1]: Started libpod-conmon-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope.
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.031 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:08:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.059 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.107 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:27 compute-0 podman[291190]: 2025-10-07 14:08:27.126434889 +0000 UTC m=+0.235749452 container init 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:08:27 compute-0 podman[291190]: 2025-10-07 14:08:27.137214617 +0000 UTC m=+0.246529170 container start 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:08:27 compute-0 podman[291190]: 2025-10-07 14:08:27.15639138 +0000 UTC m=+0.265705943 container attach 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.268 2 DEBUG nova.virt.libvirt.imagebackend [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.516 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(5153d815f021464faf1a5737fa1826eb) on rbd image(cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 293 op/s
Oct 07 14:08:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651055100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.590 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.595 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 07 14:08:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 07 14:08:27 compute-0 ceph-mon[74295]: osdmap e141: 3 total, 3 up, 3 in
Oct 07 14:08:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2651055100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:27 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.667 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.727 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk@5153d815f021464faf1a5737fa1826eb to images/fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.758 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.759 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:27 compute-0 nova_compute[259550]: 2025-10-07 14:08:27.837 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:08:28 compute-0 nova_compute[259550]: 2025-10-07 14:08:28.109 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(5153d815f021464faf1a5737fa1826eb) on rbd image(cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:08:28 compute-0 fervent_ptolemy[291206]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:08:28 compute-0 fervent_ptolemy[291206]: --> relative data size: 1.0
Oct 07 14:08:28 compute-0 fervent_ptolemy[291206]: --> All data devices are unavailable
Oct 07 14:08:28 compute-0 systemd[1]: libpod-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Deactivated successfully.
Oct 07 14:08:28 compute-0 systemd[1]: libpod-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Consumed 1.212s CPU time.
Oct 07 14:08:28 compute-0 podman[291190]: 2025-10-07 14:08:28.435788063 +0000 UTC m=+1.545102606 container died 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d-merged.mount: Deactivated successfully.
Oct 07 14:08:28 compute-0 podman[291190]: 2025-10-07 14:08:28.498329605 +0000 UTC m=+1.607644158 container remove 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:08:28 compute-0 systemd[1]: libpod-conmon-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Deactivated successfully.
Oct 07 14:08:28 compute-0 sudo[291059]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:28 compute-0 sudo[291394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:28 compute-0 sudo[291394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:28 compute-0 sudo[291394]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 07 14:08:28 compute-0 ceph-mon[74295]: pgmap v1244: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 293 op/s
Oct 07 14:08:28 compute-0 ceph-mon[74295]: osdmap e142: 3 total, 3 up, 3 in
Oct 07 14:08:28 compute-0 sudo[291419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:08:28 compute-0 sudo[291419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:28 compute-0 sudo[291419]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 07 14:08:28 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 07 14:08:28 compute-0 sudo[291444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:28 compute-0 sudo[291444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:28 compute-0 sudo[291444]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:28 compute-0 nova_compute[259550]: 2025-10-07 14:08:28.760 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:28 compute-0 nova_compute[259550]: 2025-10-07 14:08:28.761 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:08:28 compute-0 nova_compute[259550]: 2025-10-07 14:08:28.761 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:08:28 compute-0 nova_compute[259550]: 2025-10-07 14:08:28.793 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:28 compute-0 sudo[291469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:08:28 compute-0 sudo[291469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.209981865 +0000 UTC m=+0.047573273 container create bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:08:29 compute-0 systemd[1]: Started libpod-conmon-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope.
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.188227934 +0000 UTC m=+0.025819362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.324587848 +0000 UTC m=+0.162179266 container init bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.334329008 +0000 UTC m=+0.171920406 container start bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:08:29 compute-0 suspicious_thompson[291569]: 167 167
Oct 07 14:08:29 compute-0 systemd[1]: libpod-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope: Deactivated successfully.
Oct 07 14:08:29 compute-0 conmon[291569]: conmon bf5b33c7b7ac278c1ff3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope/container/memory.events
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.348683141 +0000 UTC m=+0.186274539 container attach bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.350341497 +0000 UTC m=+0.187932895 container died bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59a84d4d9d70ad77c684f39e4a1821b190827d8d355afb53ba053feb92265bf-merged.mount: Deactivated successfully.
Oct 07 14:08:29 compute-0 podman[291552]: 2025-10-07 14:08:29.446740463 +0000 UTC m=+0.284331861 container remove bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:08:29 compute-0 systemd[1]: libpod-conmon-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope: Deactivated successfully.
Oct 07 14:08:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 445 op/s
Oct 07 14:08:29 compute-0 podman[291595]: 2025-10-07 14:08:29.655290637 +0000 UTC m=+0.046613357 container create d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:08:29 compute-0 systemd[1]: Started libpod-conmon-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope.
Oct 07 14:08:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 07 14:08:29 compute-0 ceph-mon[74295]: osdmap e143: 3 total, 3 up, 3 in
Oct 07 14:08:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 07 14:08:29 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 07 14:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:29 compute-0 podman[291595]: 2025-10-07 14:08:29.633609538 +0000 UTC m=+0.024932308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:29 compute-0 podman[291595]: 2025-10-07 14:08:29.741210204 +0000 UTC m=+0.132532944 container init d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:08:29 compute-0 podman[291595]: 2025-10-07 14:08:29.750836621 +0000 UTC m=+0.142159341 container start d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:08:29 compute-0 podman[291595]: 2025-10-07 14:08:29.757884319 +0000 UTC m=+0.149207229 container attach d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:08:29 compute-0 nova_compute[259550]: 2025-10-07 14:08:29.848 2 DEBUG nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:29 compute-0 nova_compute[259550]: 2025-10-07 14:08:29.954 2 INFO nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] instance snapshotting
Oct 07 14:08:30 compute-0 nova_compute[259550]: 2025-10-07 14:08:30.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:30 compute-0 practical_rosalind[291613]: {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     "0": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "devices": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "/dev/loop3"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             ],
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_name": "ceph_lv0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_size": "21470642176",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "name": "ceph_lv0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "tags": {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_name": "ceph",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.crush_device_class": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.encrypted": "0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_id": "0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.vdo": "0"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             },
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "vg_name": "ceph_vg0"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         }
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     ],
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     "1": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "devices": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "/dev/loop4"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             ],
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_name": "ceph_lv1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_size": "21470642176",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "name": "ceph_lv1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "tags": {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_name": "ceph",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.crush_device_class": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.encrypted": "0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_id": "1",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.vdo": "0"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             },
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "vg_name": "ceph_vg1"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         }
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     ],
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     "2": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "devices": [
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "/dev/loop5"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             ],
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_name": "ceph_lv2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_size": "21470642176",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "name": "ceph_lv2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "tags": {
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.cluster_name": "ceph",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.crush_device_class": "",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.encrypted": "0",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osd_id": "2",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:                 "ceph.vdo": "0"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             },
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "type": "block",
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:             "vg_name": "ceph_vg2"
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:         }
Oct 07 14:08:30 compute-0 practical_rosalind[291613]:     ]
Oct 07 14:08:30 compute-0 practical_rosalind[291613]: }
Oct 07 14:08:30 compute-0 systemd[1]: libpod-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope: Deactivated successfully.
Oct 07 14:08:30 compute-0 conmon[291613]: conmon d33438c0f458b9673dad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope/container/memory.events
Oct 07 14:08:30 compute-0 podman[291595]: 2025-10-07 14:08:30.608220995 +0000 UTC m=+0.999543705 container died d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd-merged.mount: Deactivated successfully.
Oct 07 14:08:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 07 14:08:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 07 14:08:30 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 07 14:08:30 compute-0 podman[291595]: 2025-10-07 14:08:30.676702716 +0000 UTC m=+1.068025436 container remove d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:08:30 compute-0 systemd[1]: libpod-conmon-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope: Deactivated successfully.
Oct 07 14:08:30 compute-0 sudo[291469]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:30 compute-0 ceph-mon[74295]: pgmap v1247: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 445 op/s
Oct 07 14:08:30 compute-0 ceph-mon[74295]: osdmap e144: 3 total, 3 up, 3 in
Oct 07 14:08:30 compute-0 ceph-mon[74295]: osdmap e145: 3 total, 3 up, 3 in
Oct 07 14:08:30 compute-0 sudo[291632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:30 compute-0 sudo[291632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:30 compute-0 sudo[291632]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:30 compute-0 sudo[291657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:08:30 compute-0 sudo[291657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:30 compute-0 sudo[291657]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:30 compute-0 sudo[291682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:30 compute-0 sudo[291682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:30 compute-0 sudo[291682]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:31 compute-0 sudo[291707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:08:31 compute-0 sudo[291707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.161 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.163 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.163 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.335784111 +0000 UTC m=+0.041283325 container create f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:08:31 compute-0 systemd[1]: Started libpod-conmon-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope.
Oct 07 14:08:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.401262421 +0000 UTC m=+0.106761665 container init f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.41167947 +0000 UTC m=+0.117178684 container start f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.318601771 +0000 UTC m=+0.024101005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.416113318 +0000 UTC m=+0.121612622 container attach f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:08:31 compute-0 condescending_bouman[291789]: 167 167
Oct 07 14:08:31 compute-0 systemd[1]: libpod-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope: Deactivated successfully.
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.421916353 +0000 UTC m=+0.127415567 container died f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2ac87a5301fa650f3e662ee86d66f4030fe749abe0dacb6a8e0137993663a3-merged.mount: Deactivated successfully.
Oct 07 14:08:31 compute-0 podman[291772]: 2025-10-07 14:08:31.474736435 +0000 UTC m=+0.180235639 container remove f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:08:31 compute-0 systemd[1]: libpod-conmon-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope: Deactivated successfully.
Oct 07 14:08:31 compute-0 podman[291786]: 2025-10-07 14:08:31.501294245 +0000 UTC m=+0.124937200 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 07 14:08:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.580 2 INFO nova.virt.libvirt.driver [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Beginning live snapshot process
Oct 07 14:08:31 compute-0 podman[291838]: 2025-10-07 14:08:31.677778641 +0000 UTC m=+0.041442079 container create 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 07 14:08:31 compute-0 systemd[1]: Started libpod-conmon-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope.
Oct 07 14:08:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:31 compute-0 podman[291838]: 2025-10-07 14:08:31.658535447 +0000 UTC m=+0.022198905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:31 compute-0 podman[291838]: 2025-10-07 14:08:31.771520367 +0000 UTC m=+0.135183825 container init 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:08:31 compute-0 podman[291838]: 2025-10-07 14:08:31.779061598 +0000 UTC m=+0.142725026 container start 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:08:31 compute-0 podman[291838]: 2025-10-07 14:08:31.783338852 +0000 UTC m=+0.147002290 container attach 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:08:31 compute-0 nova_compute[259550]: 2025-10-07 14:08:31.812 2 DEBUG nova.virt.libvirt.imagebackend [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011066690882008206 of space, bias 1.0, pg target 0.3320007264602462 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:08:32 compute-0 nova_compute[259550]: 2025-10-07 14:08:32.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:32 compute-0 nova_compute[259550]: 2025-10-07 14:08:32.483 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(1788116cc84544c885e6c284acc207f7) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:08:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:08:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 07 14:08:32 compute-0 ceph-mon[74295]: pgmap v1250: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Oct 07 14:08:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:08:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 07 14:08:32 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]: {
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_id": 2,
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "type": "bluestore"
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     },
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_id": 1,
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "type": "bluestore"
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     },
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_id": 0,
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:         "type": "bluestore"
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]:     }
Oct 07 14:08:32 compute-0 elastic_dijkstra[291855]: }
Oct 07 14:08:32 compute-0 systemd[1]: libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Deactivated successfully.
Oct 07 14:08:32 compute-0 conmon[291855]: conmon 4e12269f81215b56eb81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope/container/memory.events
Oct 07 14:08:32 compute-0 systemd[1]: libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Consumed 1.081s CPU time.
Oct 07 14:08:32 compute-0 nova_compute[259550]: 2025-10-07 14:08:32.906 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] cloning vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk@1788116cc84544c885e6c284acc207f7 to images/9856ae22-305c-4ebf-9385-686fe57711a8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:08:32 compute-0 podman[291940]: 2025-10-07 14:08:32.923152186 +0000 UTC m=+0.028583395 container died 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9-merged.mount: Deactivated successfully.
Oct 07 14:08:33 compute-0 podman[291940]: 2025-10-07 14:08:33.031859991 +0000 UTC m=+0.137291170 container remove 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:08:33 compute-0 systemd[1]: libpod-conmon-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Deactivated successfully.
Oct 07 14:08:33 compute-0 podman[291939]: 2025-10-07 14:08:33.069749964 +0000 UTC m=+0.159375191 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:08:33 compute-0 sudo[291707]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.074 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] flattening images/9856ae22-305c-4ebf-9385-686fe57711a8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:08:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:08:33 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:08:33 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:33 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7fe3a083-c568-4b8a-b055-72a47c03b0a2 does not exist
Oct 07 14:08:33 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8c9c984a-f166-4703-84c6-ffb08641d9b8 does not exist
Oct 07 14:08:33 compute-0 sudo[292027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:08:33 compute-0 sudo[292027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:33 compute-0 sudo[292027]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:33 compute-0 sudo[292052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:08:33 compute-0 sudo[292052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:08:33 compute-0 sudo[292052]: pam_unix(sudo:session): session closed for user root
Oct 07 14:08:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 190 op/s
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.695 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.750 2 INFO nova.virt.libvirt.driver [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Snapshot image upload complete
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.751 2 INFO nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 7.05 seconds to snapshot the instance on the hypervisor.
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.755 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.755 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:08:33 compute-0 ceph-mon[74295]: osdmap e146: 3 total, 3 up, 3 in
Oct 07 14:08:33 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:33 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:08:33 compute-0 nova_compute[259550]: 2025-10-07 14:08:33.974 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] removing snapshot(1788116cc84544c885e6c284acc207f7) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:08:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 07 14:08:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 07 14:08:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 07 14:08:35 compute-0 ceph-mon[74295]: pgmap v1252: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 190 op/s
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.436 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(snap) on rbd image(9856ae22-305c-4ebf-9385-686fe57711a8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.491 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.492 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 8.4 MiB/s wr, 202 op/s
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.577 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:08:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 07 14:08:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 07 14:08:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.715 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.725 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:08:35 compute-0 nova_compute[259550]: 2025-10-07 14:08:35.726 2 INFO nova.compute.claims [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.028 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.112 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.113 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.223 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:08:36 compute-0 ceph-mon[74295]: osdmap e147: 3 total, 3 up, 3 in
Oct 07 14:08:36 compute-0 ceph-mon[74295]: pgmap v1254: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 8.4 MiB/s wr, 202 op/s
Oct 07 14:08:36 compute-0 ceph-mon[74295]: osdmap e148: 3 total, 3 up, 3 in
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.328 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.506 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.515 2 DEBUG nova.compute.provider_tree [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.610 2 DEBUG nova.scheduler.client.report [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.756 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.758 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.761 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.767 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.767 2 INFO nova.compute.claims [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.965 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:08:36 compute-0 nova_compute[259550]: 2025-10-07 14:08:36.966 2 DEBUG nova.network.neutron [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.048 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.154 2 INFO nova.virt.libvirt.driver [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Snapshot image upload complete
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.155 2 INFO nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 7.20 seconds to snapshot the instance on the hypervisor.
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.177 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.210 2 DEBUG nova.network.neutron [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.211 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.213 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:08:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 07 14:08:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/840478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 07 14:08:37 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.8 MiB/s wr, 201 op/s
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.635 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:08:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791531714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.638 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.639 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating image(s)
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.665 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.695 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.719 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.723 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.749 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.756 2 DEBUG nova.compute.provider_tree [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.788 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.788 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.789 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.789 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.810 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.814 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 31263139-61ff-4691-a1a9-a8d53fd7b388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:37 compute-0 nova_compute[259550]: 2025-10-07 14:08:37.843 2 DEBUG nova.scheduler.client.report [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.033 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.034 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.112 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.112 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.157 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.163 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 31263139-61ff-4691-a1a9-a8d53fd7b388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.192 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.225 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] resizing rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.321 2 DEBUG nova.objects.instance [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'migration_context' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.342 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.344 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.344 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating image(s)
Oct 07 14:08:38 compute-0 ceph-mon[74295]: osdmap e149: 3 total, 3 up, 3 in
Oct 07 14:08:38 compute-0 ceph-mon[74295]: pgmap v1257: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.8 MiB/s wr, 201 op/s
Oct 07 14:08:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1791531714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.371 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.396 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.416 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.419 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.450 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.451 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Ensure instance console log exists: /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.454 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.458 2 WARNING nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.463 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.463 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.471 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.511 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.512 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.512 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.513 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.534 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.538 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.566 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.567 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.567 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.568 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.568 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.570 2 INFO nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Terminating instance
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.571 2 DEBUG nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.576 2 DEBUG nova.policy [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '157a909ec18a483a901ec32a0a867038', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b03b40fd15b945118fde82b6454dbced', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:08:38 compute-0 kernel: tap95414c38-8a (unregistering): left promiscuous mode
Oct 07 14:08:38 compute-0 NetworkManager[44949]: <info>  [1759846118.6191] device (tap95414c38-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:08:38 compute-0 ovn_controller[151684]: 2025-10-07T14:08:38Z|00146|binding|INFO|Releasing lport 95414c38-8a03-41fc-a460-bb80d2febc10 from this chassis (sb_readonly=0)
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:38 compute-0 ovn_controller[151684]: 2025-10-07T14:08:38Z|00147|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 down in Southbound
Oct 07 14:08:38 compute-0 ovn_controller[151684]: 2025-10-07T14:08:38Z|00148|binding|INFO|Removing iface tap95414c38-8a ovn-installed in OVS
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.638 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:1c:1f 10.100.0.11'], port_security=['fa:16:3e:34:1c:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=95414c38-8a03-41fc-a460-bb80d2febc10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 95414c38-8a03-41fc-a460-bb80d2febc10 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:08:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:08:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40922bba-1b0c-4b08-bf80-f9c777c7abcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:38 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 07 14:08:38 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000015.scope: Consumed 4.398s CPU time.
Oct 07 14:08:38 compute-0 systemd-machined[214580]: Machine qemu-25-instance-00000015 terminated.
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.823 2 INFO nova.virt.libvirt.driver [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance destroyed successfully.
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.824 2 DEBUG nova.objects.instance [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:38 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : haproxy version is 2.8.14-c23fe91
Oct 07 14:08:38 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : path to executable is /usr/sbin/haproxy
Oct 07 14:08:38 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [ALERT]    (290842) : Current worker (290844) exited with code 143 (Terminated)
Oct 07 14:08:38 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [WARNING]  (290842) : All workers exited. Exiting... (0)
Oct 07 14:08:38 compute-0 systemd[1]: libpod-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope: Deactivated successfully.
Oct 07 14:08:38 compute-0 podman[292461]: 2025-10-07 14:08:38.841289958 +0000 UTC m=+0.071709068 container died 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.864 2 DEBUG nova.virt.libvirt.vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:08:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.865 2 DEBUG nova.network.os_vif_util [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.866 2 DEBUG nova.network.os_vif_util [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.867 2 DEBUG os_vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95414c38-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.875 2 INFO os_vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a')
Oct 07 14:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28-userdata-shm.mount: Deactivated successfully.
Oct 07 14:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e6de4f7c968bc67535eb773c8b2db1d245a72f57212d42fe527c2c5ef20c1fd-merged.mount: Deactivated successfully.
Oct 07 14:08:38 compute-0 podman[292461]: 2025-10-07 14:08:38.912765608 +0000 UTC m=+0.143184708 container cleanup 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:08:38 compute-0 systemd[1]: libpod-conmon-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope: Deactivated successfully.
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.936 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.977 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562838237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:38 compute-0 nova_compute[259550]: 2025-10-07 14:08:38.979 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:08:39 compute-0 podman[292520]: 2025-10-07 14:08:39.00298185 +0000 UTC m=+0.064199677 container remove 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[acf37255-5854-40c7-8c70-74a716835dc5]: (4, ('Tue Oct  7 02:08:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28)\n3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28\nTue Oct  7 02:08:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28)\n3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51a4e97a-37db-43ca-a79b-1b63260e5eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.016 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:39 compute-0 kernel: tap9f80456d-d0: left promiscuous mode
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f915b27-002f-45ad-b78a-2029747a2439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.028 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.045 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d43a481-291e-420f-bcfb-bf5416cc513c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.047 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba12cf3e-ebd0-42d1-8d11-1e1b0b1f3ac8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.065 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.070 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.069 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8fbd7f-f3e4-45d6-9ff9-855976d8d556]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668317, 'reachable_time': 24712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292586, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.075 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:08:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.075 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7098b7d0-9a66-48c1-8895-6415ef79080c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.113 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] resizing rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.239 2 DEBUG nova.objects.instance [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'migration_context' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.265 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.266 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Ensure instance console log exists: /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.267 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.267 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.268 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.358 2 INFO nova.virt.libvirt.driver [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deleting instance files /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_del
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.359 2 INFO nova.virt.libvirt.driver [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deletion of /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_del complete
Oct 07 14:08:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2562838237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.414 2 INFO nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.414 2 DEBUG oslo.service.loopingcall [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.415 2 DEBUG nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.415 2 DEBUG nova.network.neutron [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:08:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876715671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 299 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 282 op/s
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.550 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.551 2 DEBUG nova.objects.instance [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'pci_devices' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.566 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <uuid>31263139-61ff-4691-a1a9-a8d53fd7b388</uuid>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <name>instance-00000016</name>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiagnosticsTest-server-1281679422</nova:name>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:08:38</nova:creationTime>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:user uuid="2014e38add1e412a8c2e1a1b678f687f">tempest-ServerDiagnosticsTest-6855210-project-member</nova:user>
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <nova:project uuid="664831261352424ba813b5a956c752cd">tempest-ServerDiagnosticsTest-6855210</nova:project>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="serial">31263139-61ff-4691-a1a9-a8d53fd7b388</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="uuid">31263139-61ff-4691-a1a9-a8d53fd7b388</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/31263139-61ff-4691-a1a9-a8d53fd7b388_disk">
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config">
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/console.log" append="off"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:08:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:08:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:08:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:08:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:08:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.623 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.623 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.624 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Using config drive
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.645 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.820 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating config drive at /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.824 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfxn1pc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.857 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Successfully created port: c10a9a85-702e-4bd4-92d9-474eb88ec422 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.961 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfxn1pc0" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.989 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:39 compute-0 nova_compute[259550]: 2025-10-07 14:08:39.994 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.154 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.154 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deleting local config drive /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config because it was imported into RBD.
Oct 07 14:08:40 compute-0 systemd-machined[214580]: New machine qemu-26-instance-00000016.
Oct 07 14:08:40 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000016.
Oct 07 14:08:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.394 2 DEBUG nova.network.neutron [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 07 14:08:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3876715671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:40 compute-0 ceph-mon[74295]: pgmap v1258: 305 pgs: 305 active+clean; 299 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 282 op/s
Oct 07 14:08:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.425 2 INFO nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 1.01 seconds to deallocate network for instance.
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.497 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.498 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:40 compute-0 nova_compute[259550]: 2025-10-07 14:08:40.605 2 DEBUG oslo_concurrency.processutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 07 14:08:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 07 14:08:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.003 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Successfully updated port: c10a9a85-702e-4bd4-92d9-474eb88ec422 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.022 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.022 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.023 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.124 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846121.12446, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Resumed (Lifecycle Event)
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.127 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.128 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.132 2 INFO nova.virt.libvirt.driver [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance spawned successfully.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.132 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:08:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283864756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.162 2 DEBUG oslo_concurrency.processutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.169 2 DEBUG nova.compute.provider_tree [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.177 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.183 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.183 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.184 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.184 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.185 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.185 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.190 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.199 2 DEBUG nova.scheduler.client.report [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.222 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.225 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.225 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846121.1253805, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.226 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Started (Lifecycle Event)
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.256 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.263 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.302 2 INFO nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 3.67 seconds to spawn the instance on the hypervisor.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.302 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.305 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.306 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.307 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.307 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.308 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.308 2 WARNING nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received unexpected event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with vm_state deleted and task_state None.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.308 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-deleted-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.316 2 INFO nova.scheduler.client.report [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance cd77c7c3-e287-4a6a-b2b6-61655f604ec2
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.362 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.362 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.394 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.397 2 INFO nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 5.71 seconds to build instance.
Oct 07 14:08:41 compute-0 ceph-mon[74295]: osdmap e150: 3 total, 3 up, 3 in
Oct 07 14:08:41 compute-0 ceph-mon[74295]: osdmap e151: 3 total, 3 up, 3 in
Oct 07 14:08:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2283864756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.408 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.439 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.482 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.483 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.489 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.489 2 INFO nova.compute.claims [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:08:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 7.2 MiB/s wr, 223 op/s
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.562 2 DEBUG nova.compute.manager [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.563 2 DEBUG nova.compute.manager [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.563 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.634 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.691 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.692 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.692 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.693 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.693 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.699 2 INFO nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Terminating instance
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.701 2 DEBUG nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:08:41 compute-0 kernel: tapd23018fc-ec (unregistering): left promiscuous mode
Oct 07 14:08:41 compute-0 NetworkManager[44949]: <info>  [1759846121.7490] device (tapd23018fc-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:08:41 compute-0 ovn_controller[151684]: 2025-10-07T14:08:41Z|00149|binding|INFO|Releasing lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b from this chassis (sb_readonly=0)
Oct 07 14:08:41 compute-0 ovn_controller[151684]: 2025-10-07T14:08:41Z|00150|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b down in Southbound
Oct 07 14:08:41 compute-0 ovn_controller[151684]: 2025-10-07T14:08:41Z|00151|binding|INFO|Removing iface tapd23018fc-ec ovn-installed in OVS
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.766 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:3d 10.100.0.3'], port_security=['fa:16:3e:84:73:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63a8d182eca84056a1214aff59d1a164', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee4342df-374f-4e89-b1cb-d9f5b15e7a74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70fa1751-9080-486f-a255-eba81ea8e3da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.768 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b in datapath a90af50e-9409-4ab3-b31a-0927ca38c12d unbound from our chassis
Oct 07 14:08:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.769 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a90af50e-9409-4ab3-b31a-0927ca38c12d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:08:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[acad5f09-6021-40a2-93e4-2f3c407fb101]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.770 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d namespace which is not needed anymore
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:41 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 07 14:08:41 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000014.scope: Consumed 14.699s CPU time.
Oct 07 14:08:41 compute-0 systemd-machined[214580]: Machine qemu-24-instance-00000014 terminated.
Oct 07 14:08:41 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : haproxy version is 2.8.14-c23fe91
Oct 07 14:08:41 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : path to executable is /usr/sbin/haproxy
Oct 07 14:08:41 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [WARNING]  (290256) : Exiting Master process...
Oct 07 14:08:41 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [ALERT]    (290256) : Current worker (290274) exited with code 143 (Terminated)
Oct 07 14:08:41 compute-0 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [WARNING]  (290256) : All workers exited. Exiting... (0)
Oct 07 14:08:41 compute-0 systemd[1]: libpod-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope: Deactivated successfully.
Oct 07 14:08:41 compute-0 conmon[290218]: conmon 9e947329e49cc86ec51f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope/container/memory.events
Oct 07 14:08:41 compute-0 podman[292829]: 2025-10-07 14:08:41.924222444 +0000 UTC m=+0.053411258 container died 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.937 2 INFO nova.virt.libvirt.driver [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance destroyed successfully.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.938 2 DEBUG nova.objects.instance [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'resources' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.952 2 DEBUG nova.virt.libvirt.vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneServerTestJSON-961060000-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:08:37Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.953 2 DEBUG nova.network.os_vif_util [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.953 2 DEBUG nova.network.os_vif_util [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.954 2 DEBUG os_vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:08:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:08:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-62e91291afc6a80e340c9e03107adfd8c1875ed746c7741b6e4d627197ed67bb-merged.mount: Deactivated successfully.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd23018fc-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:41 compute-0 podman[292829]: 2025-10-07 14:08:41.970413119 +0000 UTC m=+0.099601933 container cleanup 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.971 2 INFO os_vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec')
Oct 07 14:08:41 compute-0 systemd[1]: libpod-conmon-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope: Deactivated successfully.
Oct 07 14:08:41 compute-0 nova_compute[259550]: 2025-10-07 14:08:41.994 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.013 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.013 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance network_info: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.014 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.014 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.017 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start _get_guest_xml network_info=[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.026 2 WARNING nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:42 compute-0 podman[292876]: 2025-10-07 14:08:42.040130252 +0000 UTC m=+0.045913018 container remove 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.043 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.045 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.050 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.049 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[310652b1-ac24-4d38-8429-50ceef7602d3]: (4, ('Tue Oct  7 02:08:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d (9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e)\n9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e\nTue Oct  7 02:08:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d (9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e)\n9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.051 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[500f7f8f-0487-40d3-957c-75d57cff7b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.052 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa90af50e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.053 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.053 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.054 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.054 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:08:42 compute-0 kernel: tapa90af50e-90: left promiscuous mode
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.058 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10c196d-7e4d-4ffe-a611-21ce3605f450]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.066 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.066 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.077 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e5668d-3a03-4798-a67e-0da21175190d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.094 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9be5ba74-f01e-4b92-9c79-ab42b72e5cac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.116 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5745ff6-c189-4c33-be89-df0456423acf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666866, 'reachable_time': 28830, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292904, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 systemd[1]: run-netns-ovnmeta\x2da90af50e\x2d9409\x2d4ab3\x2db31a\x2d0927ca38c12d.mount: Deactivated successfully.
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.121 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:08:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.121 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[144687fa-c6ef-402b-858e-2e95bf2685fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738107768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.168 2 DEBUG nova.compute.manager [None req-3c58f946-0c2e-417c-8c22-962adfc5d572 9a04c82aa6a043f4aa7ab4f2b5ac9df5 b46bb2637a044462b276ef40de0ce5c6 - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.172 2 INFO nova.compute.manager [None req-3c58f946-0c2e-417c-8c22-962adfc5d572 9a04c82aa6a043f4aa7ab4f2b5ac9df5 b46bb2637a044462b276ef40de0ce5c6 - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Retrieving diagnostics
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.190 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.196 2 DEBUG nova.compute.provider_tree [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.210 2 DEBUG nova.scheduler.client.report [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.236 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.237 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.362 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.363 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:08:42 compute-0 ceph-mon[74295]: pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 7.2 MiB/s wr, 223 op/s
Oct 07 14:08:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3738107768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.422 2 INFO nova.virt.libvirt.driver [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deleting instance files /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_del
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.423 2 INFO nova.virt.libvirt.driver [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deletion of /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_del complete
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.445 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.565 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:08:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784667935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.590 2 INFO nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG oslo.service.loopingcall [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG nova.network.neutron [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.598 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.624 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.631 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.659 2 DEBUG nova.policy [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.665 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.666 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.666 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.667 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.667 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.668 2 INFO nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Terminating instance
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquired lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.758 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.759 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.759 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating image(s)
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.776 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.793 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.811 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.815 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.877 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.878 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.878 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.879 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.898 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.902 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:42 compute-0 nova_compute[259550]: 2025-10-07 14:08:42.998 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790973850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.095 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.097 2 DEBUG nova.virt.libvirt.vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.097 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.098 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.099 2 DEBUG nova.objects.instance [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'pci_devices' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.171 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <uuid>5fc2c826-a57b-4c9a-910a-48b72ec2ab75</uuid>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <name>instance-00000017</name>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-822129567</nova:name>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:08:42</nova:creationTime>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:user uuid="157a909ec18a483a901ec32a0a867038">tempest-AttachInterfacesUnderV243Test-169595365-project-member</nova:user>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:project uuid="b03b40fd15b945118fde82b6454dbced">tempest-AttachInterfacesUnderV243Test-169595365</nova:project>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <nova:port uuid="c10a9a85-702e-4bd4-92d9-474eb88ec422">
Oct 07 14:08:43 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <system>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="serial">5fc2c826-a57b-4c9a-910a-48b72ec2ab75</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="uuid">5fc2c826-a57b-4c9a-910a-48b72ec2ab75</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </system>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <os>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </os>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <features>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </features>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk">
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config">
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:45:8d:83"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <target dev="tapc10a9a85-70"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/console.log" append="off"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <video>
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </video>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:08:43 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:08:43 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:08:43 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:08:43 compute-0 nova_compute[259550]: </domain>
Oct 07 14:08:43 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Preparing to wait for external event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.181 2 DEBUG nova.virt.libvirt.vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.182 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.182 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.183 2 DEBUG os_vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc10a9a85-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc10a9a85-70, col_values=(('external_ids', {'iface-id': 'c10a9a85-702e-4bd4-92d9-474eb88ec422', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:8d:83', 'vm-uuid': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.193 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:43 compute-0 NetworkManager[44949]: <info>  [1759846123.1954] manager: (tapc10a9a85-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.225 2 INFO os_vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70')
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.256 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.346 2 DEBUG nova.objects.instance [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/784667935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/790973850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.534 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.535 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Ensure instance console log exists: /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.535 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.536 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.536 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.540 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.541 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.541 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No VIF found with MAC fa:16:3e:45:8d:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.542 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Using config drive
Oct 07 14:08:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.563 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.616 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.636 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Releasing lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.637 2 DEBUG nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.684 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.685 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.685 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.688 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.688 2 WARNING nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received unexpected event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with vm_state active and task_state deleting.
Oct 07 14:08:43 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 07 14:08:43 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Consumed 3.411s CPU time.
Oct 07 14:08:43 compute-0 systemd-machined[214580]: Machine qemu-26-instance-00000016 terminated.
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.863 2 INFO nova.virt.libvirt.driver [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance destroyed successfully.
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.864 2 DEBUG nova.objects.instance [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'resources' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.909 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.910 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.958 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.992 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating config drive at /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config
Oct 07 14:08:43 compute-0 nova_compute[259550]: 2025-10-07 14:08:43.998 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7y9vi0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.104 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Successfully created port: 29476070-b1b0-4d1c-a313-d0fbd1793130 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.134 2 DEBUG nova.network.neutron [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.150 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7y9vi0d" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.176 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.181 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.217 2 DEBUG nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-deleted-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.218 2 INFO nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Neutron deleted interface d23018fc-ec2d-4a03-8e09-88c7ecb34f8b; detaching it from the instance and deleting it from the info cache
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.218 2 DEBUG nova.network.neutron [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.247 2 INFO nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 1.65 seconds to deallocate network for instance.
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.295 2 DEBUG nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Detach interface failed, port_id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b, reason: Instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.359 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.360 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deleting local config drive /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config because it was imported into RBD.
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.397 2 INFO nova.virt.libvirt.driver [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deleting instance files /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388_del
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.398 2 INFO nova.virt.libvirt.driver [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deletion of /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388_del complete
Oct 07 14:08:44 compute-0 kernel: tapc10a9a85-70: entered promiscuous mode
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.4165] manager: (tapc10a9a85-70): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct 07 14:08:44 compute-0 systemd-udevd[292792]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ovn_controller[151684]: 2025-10-07T14:08:44Z|00152|binding|INFO|Claiming lport c10a9a85-702e-4bd4-92d9-474eb88ec422 for this chassis.
Oct 07 14:08:44 compute-0 ovn_controller[151684]: 2025-10-07T14:08:44Z|00153|binding|INFO|c10a9a85-702e-4bd4-92d9-474eb88ec422: Claiming fa:16:3e:45:8d:83 10.100.0.13
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ceph-mon[74295]: pgmap v1262: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.4280] device (tapc10a9a85-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.4300] device (tapc10a9a85-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:08:44 compute-0 systemd-machined[214580]: New machine qemu-27-instance-00000017.
Oct 07 14:08:44 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000017.
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ovn_controller[151684]: 2025-10-07T14:08:44Z|00154|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 ovn-installed in OVS
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.557 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.557 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:44 compute-0 ovn_controller[151684]: 2025-10-07T14:08:44Z|00155|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 up in Southbound
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.563 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:8d:83 10.100.0.13'], port_security=['fa:16:3e:45:8d:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b03b40fd15b945118fde82b6454dbced', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33a99878-f5f8-4125-a239-2333c49ae6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82ef4b2f-7b03-44f3-86f3-9cfeec6eece1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c10a9a85-702e-4bd4-92d9-474eb88ec422) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.565 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c10a9a85-702e-4bd4-92d9-474eb88ec422 in datapath 9ef175ad-b29a-40a5-8bff-1ae744434f58 bound to our chassis
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.566 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ef175ad-b29a-40a5-8bff-1ae744434f58
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbf9af8-5f92-4136-b350-8054ab26e3f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.580 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ef175ad-b1 in ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.582 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ef175ad-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.582 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bed8eeca-a39a-4cc4-8fa4-989a658d648e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98b41d01-92a5-4680-899e-c52335d05ebb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.600 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[511e52fc-3a65-4f7b-94ed-9b42e8a7172e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e15e46d8-77c3-4bbb-bcf9-7db9ba5d4d97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.653 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdebcd3-78fc-4eef-9186-ab3e0359eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.653 2 DEBUG oslo_concurrency.processutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.6706] manager: (tap9ef175ad-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31cc0707-829a-4efe-9d24-3d5a8f06afc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.706 2 INFO nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 1.07 seconds to destroy the instance on the hypervisor.
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG oslo.service.loopingcall [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.726 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3d29a0-b08d-44ee-9974-ea75b628f64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.730 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6653ae5b-334d-4029-a575-a6b51d40bff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.7558] device (tap9ef175ad-b0): carrier: link connected
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0527de4c-8692-4235-8b22-95c034193767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.784 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5076f0c7-5a34-460b-8699-9819e40c47bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ef175ad-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:ed:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670832, 'reachable_time': 34892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293264, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.800 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a705b256-2775-4e74-820d-f16de7b4850c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:ed5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670832, 'tstamp': 670832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293283, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.826 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd872aca-7970-4c29-8d17-22552cf22fcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ef175ad-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:ed:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670832, 'reachable_time': 34892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293284, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43a1f25c-1d92-4961-a0e2-2a859d3ca4b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.899 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.939 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8f909c-bd10-4556-aa5c-916c39cdb4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.941 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef175ad-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.941 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.942 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ef175ad-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 NetworkManager[44949]: <info>  [1759846124.9445] manager: (tap9ef175ad-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 07 14:08:44 compute-0 kernel: tap9ef175ad-b0: entered promiscuous mode
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ef175ad-b0, col_values=(('external_ids', {'iface-id': 'ee8146fb-210f-43d1-b971-6942dccdfe6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ovn_controller[151684]: 2025-10-07T14:08:44Z|00156|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.953 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.955 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12dd2fab-616b-4060-a737-b333bd00ce8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.956 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9ef175ad-b29a-40a5-8bff-1ae744434f58
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9ef175ad-b29a-40a5-8bff-1ae744434f58
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:08:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.957 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'env', 'PROCESS_TAG=haproxy-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ef175ad-b29a-40a5-8bff-1ae744434f58.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:08:44 compute-0 nova_compute[259550]: 2025-10-07 14:08:44.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474795087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:45 compute-0 nova_compute[259550]: 2025-10-07 14:08:45.102 2 DEBUG oslo_concurrency.processutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:45 compute-0 nova_compute[259550]: 2025-10-07 14:08:45.110 2 DEBUG nova.compute.provider_tree [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:45 compute-0 nova_compute[259550]: 2025-10-07 14:08:45.120 2 INFO nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 0.41 seconds to deallocate network for instance.
Oct 07 14:08:45 compute-0 podman[293318]: 2025-10-07 14:08:45.35776778 +0000 UTC m=+0.054708273 container create ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:08:45 compute-0 systemd[1]: Started libpod-conmon-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope.
Oct 07 14:08:45 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:45 compute-0 podman[293318]: 2025-10-07 14:08:45.328689764 +0000 UTC m=+0.025630277 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d889d5dab38b3fea3ecfaefc7e9381c5ac5e4cbaa2ea3617b44dc00bb8f71d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3474795087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:45 compute-0 podman[293318]: 2025-10-07 14:08:45.440034629 +0000 UTC m=+0.136975152 container init ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 07 14:08:45 compute-0 podman[293318]: 2025-10-07 14:08:45.451959699 +0000 UTC m=+0.148900192 container start ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:08:45 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : New worker (293339) forked
Oct 07 14:08:45 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : Loading success.
Oct 07 14:08:45 compute-0 nova_compute[259550]: 2025-10-07 14:08:45.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.0 MiB/s wr, 436 op/s
Oct 07 14:08:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 07 14:08:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 07 14:08:45 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.083 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846126.0824468, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.084 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Started (Lifecycle Event)
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.498 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Successfully updated port: 29476070-b1b0-4d1c-a313-d0fbd1793130 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.521 2 DEBUG nova.scheduler.client.report [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:46 compute-0 ceph-mon[74295]: pgmap v1263: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.0 MiB/s wr, 436 op/s
Oct 07 14:08:46 compute-0 ceph-mon[74295]: osdmap e152: 3 total, 3 up, 3 in
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.736 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.737 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.737 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.752 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.758 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846126.0833654, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.758 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Paused (Lifecycle Event)
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.916 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.944 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.945 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.950 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:46 compute-0 nova_compute[259550]: 2025-10-07 14:08:46.961 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.012 2 INFO nova.scheduler.client.report [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Deleted allocations for instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.027 2 DEBUG oslo_concurrency.processutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.056 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.309 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:08:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2173230607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.511 2 DEBUG oslo_concurrency.processutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.517 2 DEBUG nova.compute.provider_tree [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:08:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 385 op/s
Oct 07 14:08:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2173230607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.703 2 DEBUG nova.scheduler.client.report [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.731 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.751 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.804 2 INFO nova.scheduler.client.report [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Deleted allocations for instance 31263139-61ff-4691-a1a9-a8d53fd7b388
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.880 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.880 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Processing event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.883 2 WARNING nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state building and task_state spawning.
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.883 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.889 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846127.8895493, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.890 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Resumed (Lifecycle Event)
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.893 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.898 2 INFO nova.virt.libvirt.driver [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance spawned successfully.
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.899 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.919 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.941 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.941 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:47 compute-0 nova_compute[259550]: 2025-10-07 14:08:47.943 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.000 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.070 2 INFO nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 9.73 seconds to spawn the instance on the hypervisor.
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.071 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.238 2 INFO nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 11.93 seconds to build instance.
Oct 07 14:08:48 compute-0 nova_compute[259550]: 2025-10-07 14:08:48.356 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:48 compute-0 ceph-mon[74295]: pgmap v1265: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 385 op/s
Oct 07 14:08:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 304 op/s
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.841 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.900 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.901 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance network_info: |[{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.903 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start _get_guest_xml network_info=[{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.910 2 WARNING nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.916 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.917 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.924 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.931 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.991 2 DEBUG nova.compute.manager [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-changed-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.992 2 DEBUG nova.compute.manager [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Refreshing instance network info cache due to event network-changed-29476070-b1b0-4d1c-a313-d0fbd1793130. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.992 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.993 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:49 compute-0 nova_compute[259550]: 2025-10-07 14:08:49.993 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Refreshing network info cache for port 29476070-b1b0-4d1c-a313-d0fbd1793130 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:08:50 compute-0 podman[293415]: 2025-10-07 14:08:50.077125673 +0000 UTC m=+0.061462574 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:08:50 compute-0 podman[293414]: 2025-10-07 14:08:50.089142565 +0000 UTC m=+0.072950292 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:08:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216984279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:50 compute-0 nova_compute[259550]: 2025-10-07 14:08:50.404 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:50 compute-0 nova_compute[259550]: 2025-10-07 14:08:50.433 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:50 compute-0 nova_compute[259550]: 2025-10-07 14:08:50.438 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:50 compute-0 nova_compute[259550]: 2025-10-07 14:08:50.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 07 14:08:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 07 14:08:50 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 07 14:08:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:08:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171010429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.000 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.002 2 DEBUG nova.virt.libvirt.vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.003 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.004 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.005 2 DEBUG nova.objects.instance [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.024 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <uuid>46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</uuid>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <name>instance-00000018</name>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-771633438</nova:name>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:08:49</nova:creationTime>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <nova:port uuid="29476070-b1b0-4d1c-a313-d0fbd1793130">
Oct 07 14:08:51 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <system>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="serial">46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="uuid">46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </system>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <os>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </os>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <features>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </features>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk">
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config">
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:08:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:2d:c7:a1"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <target dev="tap29476070-b1"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/console.log" append="off"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <video>
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </video>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:08:51 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:08:51 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:08:51 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:08:51 compute-0 nova_compute[259550]: </domain>
Oct 07 14:08:51 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.026 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Preparing to wait for external event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.028 2 DEBUG nova.virt.libvirt.vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.029 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.029 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.030 2 DEBUG os_vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29476070-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29476070-b1, col_values=(('external_ids', {'iface-id': '29476070-b1b0-4d1c-a313-d0fbd1793130', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:c7:a1', 'vm-uuid': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:51 compute-0 NetworkManager[44949]: <info>  [1759846131.0403] manager: (tap29476070-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 07 14:08:51 compute-0 ceph-mon[74295]: pgmap v1266: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 304 op/s
Oct 07 14:08:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1216984279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.050 2 INFO os_vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1')
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.310 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.311 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.312 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:2d:c7:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.313 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Using config drive
Oct 07 14:08:51 compute-0 ovn_controller[151684]: 2025-10-07T14:08:51Z|00157|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct 07 14:08:51 compute-0 NetworkManager[44949]: <info>  [1759846131.4206] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 07 14:08:51 compute-0 NetworkManager[44949]: <info>  [1759846131.4217] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 07 14:08:51 compute-0 ovn_controller[151684]: 2025-10-07T14:08:51Z|00158|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.508 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 395 op/s
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.919 2 DEBUG nova.compute.manager [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG nova.compute.manager [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:08:51 compute-0 nova_compute[259550]: 2025-10-07 14:08:51.921 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.013 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating config drive at /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.020 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvks_smpy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.158 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvks_smpy" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.187 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.193 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:08:52 compute-0 ceph-mon[74295]: osdmap e153: 3 total, 3 up, 3 in
Oct 07 14:08:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2171010429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:08:52 compute-0 ceph-mon[74295]: pgmap v1268: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 395 op/s
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.926 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updated VIF entry in instance network info cache for port 29476070-b1b0-4d1c-a313-d0fbd1793130. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.927 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:52 compute-0 nova_compute[259550]: 2025-10-07 14:08:52.961 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 123 op/s
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.741 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.743 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deleting local config drive /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config because it was imported into RBD.
Oct 07 14:08:53 compute-0 kernel: tap29476070-b1: entered promiscuous mode
Oct 07 14:08:53 compute-0 NetworkManager[44949]: <info>  [1759846133.8141] manager: (tap29476070-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Oct 07 14:08:53 compute-0 ovn_controller[151684]: 2025-10-07T14:08:53Z|00159|binding|INFO|Claiming lport 29476070-b1b0-4d1c-a313-d0fbd1793130 for this chassis.
Oct 07 14:08:53 compute-0 ovn_controller[151684]: 2025-10-07T14:08:53Z|00160|binding|INFO|29476070-b1b0-4d1c-a313-d0fbd1793130: Claiming fa:16:3e:2d:c7:a1 10.100.0.14
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.821 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846118.817205, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.822 2 INFO nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Stopped (Lifecycle Event)
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:53 compute-0 ovn_controller[151684]: 2025-10-07T14:08:53Z|00161|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 ovn-installed in OVS
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:53 compute-0 systemd-udevd[293588]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:08:53 compute-0 systemd-machined[214580]: New machine qemu-28-instance-00000018.
Oct 07 14:08:53 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000018.
Oct 07 14:08:53 compute-0 nova_compute[259550]: 2025-10-07 14:08:53.875 2 DEBUG nova.compute.manager [None req-67c33f92-1b14-4047-bf61-fca087f7c3e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:53 compute-0 NetworkManager[44949]: <info>  [1759846133.8825] device (tap29476070-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:08:53 compute-0 NetworkManager[44949]: <info>  [1759846133.8840] device (tap29476070-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:08:53 compute-0 ovn_controller[151684]: 2025-10-07T14:08:53Z|00162|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 up in Southbound
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.901 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:c7:a1 10.100.0.14'], port_security=['fa:16:3e:2d:c7:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=29476070-b1b0-4d1c-a313-d0fbd1793130) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.902 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 29476070-b1b0-4d1c-a313-d0fbd1793130 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.904 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa043a59-b7cc-482c-b7ae-28516cc0a24e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.923 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.926 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a94e69b0-4207-4622-b6c9-22d747ec6ee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.928 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84d19632-441e-459c-b4d8-73c476b41461]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.945 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fbce4366-1c07-4a81-9cb7-1f71843edff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.974 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9188444e-57d0-4c66-bc3b-bd90b63dee7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.014 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a87927a3-bce5-4129-a5ae-8f4bb4c504b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a06268ca-5c93-44b4-af4c-8fbd3fa2e3b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 NetworkManager[44949]: <info>  [1759846134.0227] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Oct 07 14:08:54 compute-0 systemd-udevd[293590]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.069 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[86eb53bc-ac8d-4d6f-bdaa-68e9f9d78ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.072 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[49c0753c-41f0-432f-859b-59d3e18998f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 NetworkManager[44949]: <info>  [1759846134.0980] device (tap9f80456d-d0): carrier: link connected
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.107 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3c8540-77e1-494e-8b3b-0d8ad8e41ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.125 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa40c67-c4c7-423c-bd2d-3a7e0e3140fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671766, 'reachable_time': 29193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293621, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_controller[151684]: 2025-10-07T14:08:54Z|00163|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[390e1fde-3b73-4368-9d98-35718c881ef6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671766, 'tstamp': 671766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293622, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44e580d6-7476-43d6-8c88-efc52bf5f608]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671766, 'reachable_time': 29193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293623, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49934b64-711a-494c-bc03-e442bc3726c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a66fea7e-b83f-4b24-8735-cbd526f67be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:54 compute-0 kernel: tap9f80456d-d0: entered promiscuous mode
Oct 07 14:08:54 compute-0 NetworkManager[44949]: <info>  [1759846134.2882] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:54 compute-0 ovn_controller[151684]: 2025-10-07T14:08:54Z|00164|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.307 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6b3882-236f-4efc-a56f-59da3e634129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.309 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:08:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.310 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.333 2 DEBUG nova.compute.manager [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.336 2 DEBUG nova.compute.manager [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Processing event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.345 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.346 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.408 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:08:54 compute-0 ceph-mon[74295]: pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 123 op/s
Oct 07 14:08:54 compute-0 podman[293698]: 2025-10-07 14:08:54.699343869 +0000 UTC m=+0.062349207 container create 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:08:54 compute-0 systemd[1]: Started libpod-conmon-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope.
Oct 07 14:08:54 compute-0 podman[293698]: 2025-10-07 14:08:54.661544248 +0000 UTC m=+0.024549616 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:08:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2587149f0b517fdb58df03030dad6ce5b5e92bc9471fce4b0fce2cbca3a69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:08:54 compute-0 podman[293698]: 2025-10-07 14:08:54.784087844 +0000 UTC m=+0.147093202 container init 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:08:54 compute-0 podman[293698]: 2025-10-07 14:08:54.790775562 +0000 UTC m=+0.153780900 container start 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:08:54 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : New worker (293720) forked
Oct 07 14:08:54 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : Loading success.
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8572397, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.859 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Started (Lifecycle Event)
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.860 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.864 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.868 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance spawned successfully.
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.869 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.907 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.908 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.909 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.910 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.910 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.911 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.918 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.922 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.983 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.984 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8579829, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:54 compute-0 nova_compute[259550]: 2025-10-07 14:08:54.984 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Paused (Lifecycle Event)
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.019 2 INFO nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 12.26 seconds to spawn the instance on the hypervisor.
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.020 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.023 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.029 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8633146, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Resumed (Lifecycle Event)
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.069 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.073 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.103 2 INFO nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 13.65 seconds to build instance.
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.123 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 111 op/s
Oct 07 14:08:55 compute-0 nova_compute[259550]: 2025-10-07 14:08:55.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:08:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:56.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:08:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:56.272 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.581 2 DEBUG nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.583 2 DEBUG nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.583 2 WARNING nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state active and task_state None.
Oct 07 14:08:56 compute-0 ceph-mon[74295]: pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 111 op/s
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.936 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846121.9351785, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.937 2 INFO nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Stopped (Lifecycle Event)
Oct 07 14:08:56 compute-0 nova_compute[259550]: 2025-10-07 14:08:56.953 2 DEBUG nova.compute.manager [None req-8ed40e0e-590a-4635-bf91-7c43be7e3fba - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.128 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.130 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.130 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.136 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.137 2 DEBUG nova.objects.instance [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'flavor' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:08:57 compute-0 nova_compute[259550]: 2025-10-07 14:08:57.163 2 DEBUG nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:08:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:08:57.274 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:08:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 110 op/s
Oct 07 14:08:58 compute-0 ceph-mon[74295]: pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 110 op/s
Oct 07 14:08:58 compute-0 nova_compute[259550]: 2025-10-07 14:08:58.861 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846123.8593931, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:08:58 compute-0 nova_compute[259550]: 2025-10-07 14:08:58.861 2 INFO nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Stopped (Lifecycle Event)
Oct 07 14:08:58 compute-0 nova_compute[259550]: 2025-10-07 14:08:58.898 2 DEBUG nova.compute.manager [None req-42d52f3f-6a37-44e0-8332-1391f52b419e - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:08:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Oct 07 14:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.043 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.385 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.386 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.413 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.558 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.559 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.565 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.565 2 INFO nova.compute.claims [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:00 compute-0 nova_compute[259550]: 2025-10-07 14:09:00.698 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:00 compute-0 ceph-mon[74295]: pgmap v1272: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158927846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.180 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.188 2 DEBUG nova.compute.provider_tree [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.202 2 DEBUG nova.scheduler.client.report [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.224 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.224 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.267 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.268 2 DEBUG nova.network.neutron [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.287 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.302 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.524 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.525 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.526 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating image(s)
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.543 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 910 KiB/s wr, 105 op/s
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.562 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.581 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.585 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.614 2 DEBUG nova.network.neutron [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.614 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.652 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.653 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.654 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.654 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.676 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:01 compute-0 nova_compute[259550]: 2025-10-07 14:09:01.680 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f611962-32b5-4b21-b23b-303bbf54564d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/158927846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:01 compute-0 ceph-mon[74295]: pgmap v1273: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 910 KiB/s wr, 105 op/s
Oct 07 14:09:02 compute-0 podman[293846]: 2025-10-07 14:09:02.148477508 +0000 UTC m=+0.133582961 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 07 14:09:02 compute-0 ovn_controller[151684]: 2025-10-07T14:09:02Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:8d:83 10.100.0.13
Oct 07 14:09:02 compute-0 ovn_controller[151684]: 2025-10-07T14:09:02Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:8d:83 10.100.0.13
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.201 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f611962-32b5-4b21-b23b-303bbf54564d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.268 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] resizing rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.467 2 DEBUG nova.objects.instance [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.519 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Ensure instance console log exists: /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.521 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.522 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.528 2 WARNING nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.535 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.536 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.544 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.544 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.548 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:02 compute-0 nova_compute[259550]: 2025-10-07 14:09:02.550 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294341034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.040 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.066 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.072 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3294341034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 811 KiB/s wr, 94 op/s
Oct 07 14:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770756175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.680 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.682 2 DEBUG nova.objects.instance [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.699 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <uuid>2f611962-32b5-4b21-b23b-303bbf54564d</uuid>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <name>instance-00000019</name>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerExternalEventsTest-server-1051337348</nova:name>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:02</nova:creationTime>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:user uuid="3dcaa1bf2963492faed5df9583344ef6">tempest-ServerExternalEventsTest-2003492389-project-member</nova:user>
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <nova:project uuid="937a05b30aa24e7ea8e94bd7364ff355">tempest-ServerExternalEventsTest-2003492389</nova:project>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="serial">2f611962-32b5-4b21-b23b-303bbf54564d</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="uuid">2f611962-32b5-4b21-b23b-303bbf54564d</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2f611962-32b5-4b21-b23b-303bbf54564d_disk">
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2f611962-32b5-4b21-b23b-303bbf54564d_disk.config">
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/console.log" append="off"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:03 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:03 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.762 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.763 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.763 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Using config drive
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.786 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:03 compute-0 podman[294006]: 2025-10-07 14:09:03.792950049 +0000 UTC m=+0.052136074 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.931 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating config drive at /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config
Oct 07 14:09:03 compute-0 nova_compute[259550]: 2025-10-07 14:09:03.957 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tkmmqlc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:04 compute-0 ceph-mon[74295]: pgmap v1274: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 811 KiB/s wr, 94 op/s
Oct 07 14:09:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/770756175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:04 compute-0 nova_compute[259550]: 2025-10-07 14:09:04.116 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tkmmqlc" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:04 compute-0 nova_compute[259550]: 2025-10-07 14:09:04.144 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:04 compute-0 nova_compute[259550]: 2025-10-07 14:09:04.148 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:04 compute-0 nova_compute[259550]: 2025-10-07 14:09:04.336 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:04 compute-0 nova_compute[259550]: 2025-10-07 14:09:04.338 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deleting local config drive /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config because it was imported into RBD.
Oct 07 14:09:04 compute-0 systemd-machined[214580]: New machine qemu-29-instance-00000019.
Oct 07 14:09:04 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.340 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846145.3395026, 2f611962-32b5-4b21-b23b-303bbf54564d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.341 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Resumed (Lifecycle Event)
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.345 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.346 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.351 2 INFO nova.virt.libvirt.driver [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance spawned successfully.
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.352 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.448 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.452 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.515 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.516 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.517 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.517 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.518 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.518 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.653 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.654 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846145.3436983, 2f611962-32b5-4b21-b23b-303bbf54564d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.654 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Started (Lifecycle Event)
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.697 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.701 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.817 2 INFO nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 4.29 seconds to spawn the instance on the hypervisor.
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.818 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:05 compute-0 nova_compute[259550]: 2025-10-07 14:09:05.996 2 INFO nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 5.46 seconds to build instance.
Oct 07 14:09:06 compute-0 nova_compute[259550]: 2025-10-07 14:09:06.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:06 compute-0 nova_compute[259550]: 2025-10-07 14:09:06.164 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:06 compute-0 ceph-mon[74295]: pgmap v1275: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 07 14:09:07 compute-0 nova_compute[259550]: 2025-10-07 14:09:07.208 2 DEBUG nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:09:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct 07 14:09:07 compute-0 ovn_controller[151684]: 2025-10-07T14:09:07Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:c7:a1 10.100.0.14
Oct 07 14:09:07 compute-0 ovn_controller[151684]: 2025-10-07T14:09:07Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:c7:a1 10.100.0.14
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.154 2 DEBUG nova.compute.manager [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.155 2 DEBUG nova.compute.manager [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.155 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Acquiring lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.156 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Acquired lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.156 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.408 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.424 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.426 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.427 2 INFO nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Terminating instance
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.428 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.665 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:08 compute-0 ceph-mon[74295]: pgmap v1276: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.769 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Releasing lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.771 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquired lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.771 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:08 compute-0 nova_compute[259550]: 2025-10-07 14:09:08.946 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.175 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.189 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Releasing lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.190 2 DEBUG nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.260 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.261 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:09 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 07 14:09:09 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 4.731s CPU time.
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.294 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:09 compute-0 systemd-machined[214580]: Machine qemu-29-instance-00000019 terminated.
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.371 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.373 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.379 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.380 2 INFO nova.compute.claims [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.395 2 DEBUG nova.objects.instance [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'flavor' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.413 2 INFO nova.virt.libvirt.driver [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance destroyed successfully.
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.415 2 DEBUG nova.objects.instance [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'resources' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.422 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.422 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.526 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 234 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 258 op/s
Oct 07 14:09:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279651159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.981 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:09 compute-0 nova_compute[259550]: 2025-10-07 14:09:09.991 2 DEBUG nova.compute.provider_tree [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.013 2 DEBUG nova.scheduler.client.report [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.044 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.045 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.092 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.093 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.118 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.139 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.238 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.239 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.240 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating image(s)
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.262 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.286 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.315 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.321 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.353 2 DEBUG nova.policy [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.393 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.393 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.394 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.394 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.416 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.420 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.541 2 INFO nova.virt.libvirt.driver [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deleting instance files /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d_del
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.542 2 INFO nova.virt.libvirt.driver [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deletion of /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d_del complete
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.599 2 INFO nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG oslo.service.loopingcall [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.708 2 DEBUG nova.network.neutron [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.831 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.865 2 DEBUG nova.compute.manager [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.865 2 DEBUG nova.compute.manager [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.866 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.889 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:10 compute-0 ceph-mon[74295]: pgmap v1277: 305 pgs: 305 active+clean; 234 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 258 op/s
Oct 07 14:09:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/279651159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:10 compute-0 nova_compute[259550]: 2025-10-07 14:09:10.943 2 INFO nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 0.34 seconds to deallocate network for instance.
Oct 07 14:09:11 compute-0 kernel: tap29476070-b1 (unregistering): left promiscuous mode
Oct 07 14:09:11 compute-0 NetworkManager[44949]: <info>  [1759846151.0645] device (tap29476070-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:09:11 compute-0 ovn_controller[151684]: 2025-10-07T14:09:11Z|00165|binding|INFO|Releasing lport 29476070-b1b0-4d1c-a313-d0fbd1793130 from this chassis (sb_readonly=0)
Oct 07 14:09:11 compute-0 ovn_controller[151684]: 2025-10-07T14:09:11Z|00166|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 down in Southbound
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.079 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:11 compute-0 ovn_controller[151684]: 2025-10-07T14:09:11Z|00167|binding|INFO|Removing iface tap29476070-b1 ovn-installed in OVS
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.104 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:c7:a1 10.100.0.14'], port_security=['fa:16:3e:2d:c7:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=29476070-b1b0-4d1c-a313-d0fbd1793130) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.106 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 29476070-b1b0-4d1c-a313-d0fbd1793130 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.108 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a60c3d3-2f08-4f10-83e3-82baccea7818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.111 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.117 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.118 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:11 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 07 14:09:11 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Consumed 13.169s CPU time.
Oct 07 14:09:11 compute-0 systemd-machined[214580]: Machine qemu-28-instance-00000018 terminated.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.167 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:11 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : haproxy version is 2.8.14-c23fe91
Oct 07 14:09:11 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : path to executable is /usr/sbin/haproxy
Oct 07 14:09:11 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [WARNING]  (293718) : Exiting Master process...
Oct 07 14:09:11 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [ALERT]    (293718) : Current worker (293720) exited with code 143 (Terminated)
Oct 07 14:09:11 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [WARNING]  (293718) : All workers exited. Exiting... (0)
Oct 07 14:09:11 compute-0 systemd[1]: libpod-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope: Deactivated successfully.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.281 2 DEBUG oslo_concurrency.processutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:11 compute-0 podman[294354]: 2025-10-07 14:09:11.2840006 +0000 UTC m=+0.060224710 container died 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.324 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Successfully created port: 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.332 2 INFO nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance shutdown successfully after 14 seconds.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.347 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance destroyed successfully.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.347 2 DEBUG nova.objects.instance [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'numa_topology' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188-userdata-shm.mount: Deactivated successfully.
Oct 07 14:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be2587149f0b517fdb58df03030dad6ce5b5e92bc9471fce4b0fce2cbca3a69-merged.mount: Deactivated successfully.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.394 2 DEBUG nova.objects.instance [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:11 compute-0 podman[294354]: 2025-10-07 14:09:11.405644232 +0000 UTC m=+0.181868332 container cleanup 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:09:11 compute-0 systemd[1]: libpod-conmon-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope: Deactivated successfully.
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.432 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.463 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Ensure instance console log exists: /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.505 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:11 compute-0 podman[294414]: 2025-10-07 14:09:11.539032816 +0000 UTC m=+0.104954586 container remove 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad46ff0-9ee7-432e-af9d-15e7d79c70f1]: (4, ('Tue Oct  7 02:09:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188)\n8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188\nTue Oct  7 02:09:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188)\n8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc983e4d-0783-4b0b-a5be-161cc21159f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.549 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:11 compute-0 kernel: tap9f80456d-d0: left promiscuous mode
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98f64a5a-3372-425f-9820-9683e5575b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.625 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82033271-2276-4970-8c72-85e01185106d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e76810b8-814d-4b8f-9bfc-7de62bb977ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.646 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[86dd1058-5035-435d-9749-e625db0adba3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671756, 'reachable_time': 34662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294451, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.652 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:09:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.652 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a79218d5-d75e-4a2f-8732-9552faabe7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215526621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.758 2 DEBUG oslo_concurrency.processutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.764 2 DEBUG nova.compute.provider_tree [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.817 2 DEBUG nova.scheduler.client.report [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.852 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:11 compute-0 nova_compute[259550]: 2025-10-07 14:09:11.908 2 INFO nova.scheduler.client.report [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Deleted allocations for instance 2f611962-32b5-4b21-b23b-303bbf54564d
Oct 07 14:09:11 compute-0 ceph-mon[74295]: pgmap v1278: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Oct 07 14:09:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1215526621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.019 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.531 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Successfully updated port: 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.536 2 DEBUG nova.network.neutron [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.567 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.568 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.568 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG nova.compute.manager [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG nova.compute.manager [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] network_info to inject: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.627 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.627 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.739 2 DEBUG nova.compute.manager [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-changed-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.739 2 DEBUG nova.compute.manager [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Refreshing instance network info cache due to event network-changed-1a36fc68-b90f-4e28-a866-7dfb6f218d37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.740 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:12 compute-0 nova_compute[259550]: 2025-10-07 14:09:12.865 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 224 op/s
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.971 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.973 2 WARNING nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state stopped and task_state None.
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:13 compute-0 nova_compute[259550]: 2025-10-07 14:09:13.974 2 WARNING nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state stopped and task_state None.
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.379 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.493 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.493 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance network_info: |[{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.494 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.494 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Refreshing network info cache for port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.496 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start _get_guest_xml network_info=[{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.500 2 WARNING nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.504 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.505 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.509 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.510 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.510 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.514 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.514 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.518 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:14 compute-0 ceph-mon[74295]: pgmap v1279: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 224 op/s
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.742 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.743 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.804 2 DEBUG nova.objects.instance [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'flavor' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.828 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831562224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.988 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.988 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:14 compute-0 nova_compute[259550]: 2025-10-07 14:09:14.994 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.014 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.019 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.051 2 DEBUG nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.252 2 INFO nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] instance snapshotting
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.253 2 WARNING nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 07 14:09:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576184467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.464 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.466 2 DEBUG nova.virt.libvirt.vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name
='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:10Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.466 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.467 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.468 2 DEBUG nova.objects.instance [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.478 2 INFO nova.virt.libvirt.driver [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Beginning cold snapshot process
Oct 07 14:09:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.1 MiB/s wr, 266 op/s
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.641 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <uuid>7ee2904f-492a-4ffe-bdc2-6f4ec3285851</uuid>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <name>instance-0000001a</name>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-527928282</nova:name>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:14</nova:creationTime>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <nova:port uuid="1a36fc68-b90f-4e28-a866-7dfb6f218d37">
Oct 07 14:09:15 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="serial">7ee2904f-492a-4ffe-bdc2-6f4ec3285851</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="uuid">7ee2904f-492a-4ffe-bdc2-6f4ec3285851</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk">
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config">
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:30:ce:2c"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <target dev="tap1a36fc68-b9"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/console.log" append="off"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:15 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:15 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:15 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:15 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:15 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.642 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Preparing to wait for external event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.644 2 DEBUG nova.virt.libvirt.vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:10Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.644 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.645 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.646 2 DEBUG os_vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a36fc68-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a36fc68-b9, col_values=(('external_ids', {'iface-id': '1a36fc68-b90f-4e28-a866-7dfb6f218d37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ce:2c', 'vm-uuid': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1831562224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3576184467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:15 compute-0 NetworkManager[44949]: <info>  [1759846155.6554] manager: (tap1a36fc68-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.666 2 INFO os_vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9')
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.750 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updated VIF entry in instance network info cache for port 1a36fc68-b90f-4e28-a866-7dfb6f218d37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:15 compute-0 nova_compute[259550]: 2025-10-07 14:09:15.750 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.050 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.326 2 DEBUG nova.network.neutron [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.336 2 DEBUG nova.virt.libvirt.imagebackend [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:30:ce:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.341 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Using config drive
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.368 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.554 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(9cf4cd9b69b64d4f87be7f48d49ed96d) on rbd image(46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 07 14:09:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 07 14:09:16 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.692 2 DEBUG nova.compute.manager [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.693 2 DEBUG nova.compute.manager [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.693 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:16 compute-0 ceph-mon[74295]: pgmap v1280: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.1 MiB/s wr, 266 op/s
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.815 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk@9cf4cd9b69b64d4f87be7f48d49ed96d to images/e0e0ef2a-e43a-42e6-af8b-792832119b67 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:09:16 compute-0 nova_compute[259550]: 2025-10-07 14:09:16.966 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e0e0ef2a-e43a-42e6-af8b-792832119b67 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.020 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating config drive at /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.029 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpxvzsg0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.182 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpxvzsg0" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.214 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.219 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.736 2 DEBUG nova.network.neutron [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.825 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.826 2 DEBUG nova.compute.manager [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.826 2 DEBUG nova.compute.manager [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] network_info to inject: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.831 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:17 compute-0 nova_compute[259550]: 2025-10-07 14:09:17.831 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:17 compute-0 ceph-mon[74295]: osdmap e154: 3 total, 3 up, 3 in
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.010 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.010 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deleting local config drive /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config because it was imported into RBD.
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.016 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(9cf4cd9b69b64d4f87be7f48d49ed96d) on rbd image(46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:09:18 compute-0 kernel: tap1a36fc68-b9: entered promiscuous mode
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.0701] manager: (tap1a36fc68-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00168|binding|INFO|Claiming lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 for this chassis.
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00169|binding|INFO|1a36fc68-b90f-4e28-a866-7dfb6f218d37: Claiming fa:16:3e:30:ce:2c 10.100.0.14
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.094 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ce:2c 10.100.0.14'], port_security=['fa:16:3e:30:ce:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a36fc68-b90f-4e28-a866-7dfb6f218d37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.096 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.097 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:18 compute-0 systemd-udevd[294713]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00170|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 ovn-installed in OVS
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00171|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 up in Southbound
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1dd006-c447-44a2-a337-5950105e2e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.110 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.112 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.112 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61554bf8-7da3-44e4-ac0b-05d2f4d8ebf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.113 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b70849db-0cc0-4d8d-92bb-2f7d9393f062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.1208] device (tap1a36fc68-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:09:18 compute-0 systemd-machined[214580]: New machine qemu-30-instance-0000001a.
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.1215] device (tap1a36fc68-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.126 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dc54b3d8-8549-47a6-b6ba-def5498ada35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.156 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[249f9d67-1bf8-4488-901f-0b87e9129aa5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.186 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[66de09c9-db01-4d38-8356-0dcdf6c24d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.192 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13b06e25-ffb8-4201-babb-6f58f5da6761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.1931] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.223 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[569e0d33-0d69-4711-b0be-57d3a933cc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.226 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[477c1bf0-3708-4a6b-885e-fe962d285fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.2550] device (tap0a5c95d4-10): carrier: link connected
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.259 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e0403-1bd9-4c94-9fe8-97405729e04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc890a6e-68d8-4295-95d9-70751865352c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674181, 'reachable_time': 39165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294753, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[daf6e8e4-26f7-42b9-867e-9478680ef080]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674181, 'tstamp': 674181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294754, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4757ef9c-2774-4410-bfe2-c8fa5f950bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674181, 'reachable_time': 39165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294755, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.362 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6052cfd3-06a4-46c4-bf8f-7d77eeb7ecca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10b59ab-3999-48df-b715-32c5af23b7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.4326] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.438 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00172|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.464 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.465 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0301b-f29c-4d11-8bbf-116a55cb6530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.466 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.467 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.716 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.718 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.718 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.719 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.719 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.720 2 INFO nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Terminating instance
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.721 2 DEBUG nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:18 compute-0 kernel: tapc10a9a85-70 (unregistering): left promiscuous mode
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.7823] device (tapc10a9a85-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00173|binding|INFO|Releasing lport c10a9a85-702e-4bd4-92d9-474eb88ec422 from this chassis (sb_readonly=0)
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00174|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 down in Southbound
Oct 07 14:09:18 compute-0 ovn_controller[151684]: 2025-10-07T14:09:18Z|00175|binding|INFO|Removing iface tapc10a9a85-70 ovn-installed in OVS
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:18 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 07 14:09:18 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Consumed 14.388s CPU time.
Oct 07 14:09:18 compute-0 systemd-machined[214580]: Machine qemu-27-instance-00000017 terminated.
Oct 07 14:09:18 compute-0 podman[294834]: 2025-10-07 14:09:18.862496158 +0000 UTC m=+0.069273603 container create 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:09:18 compute-0 ceph-mon[74295]: pgmap v1282: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Oct 07 14:09:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 07 14:09:18 compute-0 systemd[1]: Started libpod-conmon-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope.
Oct 07 14:09:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 07 14:09:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.911 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:8d:83 10.100.0.13'], port_security=['fa:16:3e:45:8d:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b03b40fd15b945118fde82b6454dbced', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33a99878-f5f8-4125-a239-2333c49ae6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82ef4b2f-7b03-44f3-86f3-9cfeec6eece1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c10a9a85-702e-4bd4-92d9-474eb88ec422) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:18 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 07 14:09:18 compute-0 podman[294834]: 2025-10-07 14:09:18.8210503 +0000 UTC m=+0.027827795 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:09:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:18 compute-0 NetworkManager[44949]: <info>  [1759846158.9447] manager: (tapc10a9a85-70): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 07 14:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc2a49701af0e481b1ad6d0cc40f5f1da2f9ada3657154d7d31efc1fe61be3f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:18 compute-0 nova_compute[259550]: 2025-10-07 14:09:18.961 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e0e0ef2a-e43a-42e6-af8b-792832119b67) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:18 compute-0 podman[294834]: 2025-10-07 14:09:18.969184239 +0000 UTC m=+0.175961694 container init 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:09:18 compute-0 podman[294834]: 2025-10-07 14:09:18.97631718 +0000 UTC m=+0.183094605 container start 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.002 2 INFO nova.virt.libvirt.driver [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance destroyed successfully.
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.003 2 DEBUG nova.objects.instance [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'resources' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : New worker (294884) forked
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : Loading success.
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.047 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c10a9a85-702e-4bd4-92d9-474eb88ec422 in datapath 9ef175ad-b29a-40a5-8bff-1ae744434f58 unbound from our chassis
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.048 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ef175ad-b29a-40a5-8bff-1ae744434f58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.049 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32c721-2f79-4c67-b5b7-ac41438827f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.050 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 namespace which is not needed anymore
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.059 2 DEBUG nova.virt.libvirt.vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.060 2 DEBUG nova.network.os_vif_util [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.061 2 DEBUG nova.network.os_vif_util [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.061 2 DEBUG os_vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc10a9a85-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.076 2 INFO os_vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70')
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.104 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846159.104299, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.105 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Started (Lifecycle Event)
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : haproxy version is 2.8.14-c23fe91
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : path to executable is /usr/sbin/haproxy
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : Exiting Master process...
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : Exiting Master process...
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [ALERT]    (293337) : Current worker (293339) exited with code 143 (Terminated)
Oct 07 14:09:19 compute-0 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : All workers exited. Exiting... (0)
Oct 07 14:09:19 compute-0 systemd[1]: libpod-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope: Deactivated successfully.
Oct 07 14:09:19 compute-0 podman[294929]: 2025-10-07 14:09:19.205840384 +0000 UTC m=+0.054890958 container died ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77-userdata-shm.mount: Deactivated successfully.
Oct 07 14:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-11d889d5dab38b3fea3ecfaefc7e9381c5ac5e4cbaa2ea3617b44dc00bb8f71d-merged.mount: Deactivated successfully.
Oct 07 14:09:19 compute-0 podman[294929]: 2025-10-07 14:09:19.303290539 +0000 UTC m=+0.152341103 container cleanup ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.311 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:19 compute-0 systemd[1]: libpod-conmon-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope: Deactivated successfully.
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.317 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846159.1043916, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.318 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Paused (Lifecycle Event)
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.375 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:19 compute-0 podman[294961]: 2025-10-07 14:09:19.394503687 +0000 UTC m=+0.067267079 container remove ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0567717d-5e26-41c6-8c0c-49a120b1c88d]: (4, ('Tue Oct  7 02:09:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 (ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77)\necad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77\nTue Oct  7 02:09:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 (ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77)\necad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.404 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dfca3945-1a5b-4a75-8377-df8f633cb48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.405 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef175ad-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:19 compute-0 kernel: tap9ef175ad-b0: left promiscuous mode
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.431 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c99be2ad-784c-4eed-b881-fe762da7888b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.447 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.465 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e63d2a4b-4770-4ffc-8e16-bd6f6686a1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.467 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab214ab1-eb9c-4a85-aa87-d123fb915c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.482 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[176719dc-9ce0-4d90-b89f-ba3d47c7c9a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670821, 'reachable_time': 16180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294980, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.486 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:09:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ef175ad\x2db29a\x2d40a5\x2d8bff\x2d1ae744434f58.mount: Deactivated successfully.
Oct 07 14:09:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.486 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[71d39aa2-9d30-4cef-88a2-38dce4669faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 305 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 163 op/s
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.689 2 INFO nova.virt.libvirt.driver [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deleting instance files /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_del
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.690 2 INFO nova.virt.libvirt.driver [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deletion of /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_del complete
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.859 2 INFO nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.859 2 DEBUG oslo.service.loopingcall [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.860 2 DEBUG nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.860 2 DEBUG nova.network.neutron [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 07 14:09:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 07 14:09:19 compute-0 ceph-mon[74295]: osdmap e155: 3 total, 3 up, 3 in
Oct 07 14:09:19 compute-0 ceph-mon[74295]: pgmap v1284: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 305 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 163 op/s
Oct 07 14:09:19 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.982 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:19 compute-0 nova_compute[259550]: 2025-10-07 14:09:19.982 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.068 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.117 2 DEBUG nova.compute.manager [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.117 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.118 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.118 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.119 2 DEBUG nova.compute.manager [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Processing event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.120 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.125 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846160.1238267, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Resumed (Lifecycle Event)
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.128 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.132 2 INFO nova.virt.libvirt.driver [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance spawned successfully.
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.132 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.194 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.199 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.200 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.200 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.201 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.201 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.202 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.206 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.332 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.447 2 INFO nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 10.21 seconds to spawn the instance on the hypervisor.
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.447 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.675 2 INFO nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 11.33 seconds to build instance.
Oct 07 14:09:20 compute-0 nova_compute[259550]: 2025-10-07 14:09:20.763 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:20 compute-0 ceph-mon[74295]: osdmap e156: 3 total, 3 up, 3 in
Oct 07 14:09:21 compute-0 podman[294983]: 2025-10-07 14:09:21.087045353 +0000 UTC m=+0.070867295 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:09:21 compute-0 podman[294982]: 2025-10-07 14:09:21.114211109 +0000 UTC m=+0.100575669 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.480 2 DEBUG nova.network.neutron [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.502 2 INFO nova.virt.libvirt.driver [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Snapshot image upload complete
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.503 2 INFO nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 6.25 seconds to snapshot the instance on the hypervisor.
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.564 2 INFO nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 1.70 seconds to deallocate network for instance.
Oct 07 14:09:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.853 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.853 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:21 compute-0 nova_compute[259550]: 2025-10-07 14:09:21.963 2 DEBUG oslo_concurrency.processutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:21 compute-0 ceph-mon[74295]: pgmap v1286: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.382 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.384 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received unexpected event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with vm_state active and task_state None.
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.385 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state deleted and task_state None.
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.386 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state deleted and task_state None.
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-deleted-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260568890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.530 2 DEBUG oslo_concurrency.processutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.535 2 DEBUG nova.compute.provider_tree [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:09:22
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data']
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.787 2 DEBUG nova.scheduler.client.report [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:09:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:09:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 07 14:09:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:22 compute-0 nova_compute[259550]: 2025-10-07 14:09:22.988 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:09:22 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 07 14:09:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2260568890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:23 compute-0 nova_compute[259550]: 2025-10-07 14:09:23.256 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:23 compute-0 nova_compute[259550]: 2025-10-07 14:09:23.337 2 INFO nova.scheduler.client.report [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Deleted allocations for instance 5fc2c826-a57b-4c9a-910a-48b72ec2ab75
Oct 07 14:09:23 compute-0 nova_compute[259550]: 2025-10-07 14:09:23.534 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 07 14:09:23 compute-0 nova_compute[259550]: 2025-10-07 14:09:23.987 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:24 compute-0 ceph-mon[74295]: osdmap e157: 3 total, 3 up, 3 in
Oct 07 14:09:24 compute-0 ceph-mon[74295]: pgmap v1288: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.175 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.176 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.176 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.177 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.177 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.179 2 INFO nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Terminating instance
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.180 2 DEBUG nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.189 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance destroyed successfully.
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.189 2 DEBUG nova.objects.instance [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.274 2 DEBUG nova.virt.libvirt.vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:21Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.275 2 DEBUG nova.network.os_vif_util [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.275 2 DEBUG nova.network.os_vif_util [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.276 2 DEBUG os_vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29476070-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.282 2 INFO os_vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1')
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.411 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846149.4102364, 2f611962-32b5-4b21-b23b-303bbf54564d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.412 2 INFO nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Stopped (Lifecycle Event)
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.501 2 DEBUG nova.compute.manager [None req-750bf0e5-3011-4dfc-9764-da75552238f9 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.846 2 INFO nova.virt.libvirt.driver [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deleting instance files /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_del
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.847 2 INFO nova.virt.libvirt.driver [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deletion of /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_del complete
Oct 07 14:09:24 compute-0 nova_compute[259550]: 2025-10-07 14:09:24.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.070 2 INFO nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.071 2 DEBUG oslo.service.loopingcall [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.071 2 DEBUG nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.072 2 DEBUG nova.network.neutron [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 4.9 MiB/s wr, 300 op/s
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 07 14:09:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 07 14:09:25 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:25 compute-0 nova_compute[259550]: 2025-10-07 14:09:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.091 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.093 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.330 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846151.318896, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.331 2 INFO nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Stopped (Lifecycle Event)
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.387 2 DEBUG nova.compute.manager [None req-6aeab903-6f3f-435a-aa38-e010bd5a8358 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917930297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:26 compute-0 nova_compute[259550]: 2025-10-07 14:09:26.619 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:26 compute-0 ceph-mon[74295]: pgmap v1289: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 4.9 MiB/s wr, 300 op/s
Oct 07 14:09:26 compute-0 ceph-mon[74295]: osdmap e158: 3 total, 3 up, 3 in
Oct 07 14:09:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/917930297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.023 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.024 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.196 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.197 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4306MB free_disk=59.930442810058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.244 2 DEBUG nova.network.neutron [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.353 2 DEBUG nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-deleted-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.354 2 INFO nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Neutron deleted interface 29476070-b1b0-4d1c-a313-d0fbd1793130; detaching it from the instance and deleting it from the info cache
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.354 2 DEBUG nova.network.neutron [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.459 2 INFO nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 2.39 seconds to deallocate network for instance.
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.464 2 DEBUG nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Detach interface failed, port_id=29476070-b1b0-4d1c-a313-d0fbd1793130, reason: Instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.509 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.510 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:09:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.609 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.650 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.651 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.689 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.713 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:27 compute-0 nova_compute[259550]: 2025-10-07 14:09:27.772 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.020 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.021 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.050 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822598896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.135 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.151 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.157 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.179 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.210 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.210 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.211 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.306 2 DEBUG oslo_concurrency.processutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863809496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.771 2 DEBUG oslo_concurrency.processutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.778 2 DEBUG nova.compute.provider_tree [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:28 compute-0 ceph-mon[74295]: pgmap v1291: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Oct 07 14:09:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2822598896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1863809496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:28 compute-0 nova_compute[259550]: 2025-10-07 14:09:28.911 2 DEBUG nova.scheduler.client.report [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.070 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.073 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.090 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.091 2 INFO nova.compute.claims [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.211 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.248 2 INFO nova.scheduler.client.report [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.281 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 KiB/s wr, 219 op/s
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:29 compute-0 nova_compute[259550]: 2025-10-07 14:09:29.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:09:30 compute-0 nova_compute[259550]: 2025-10-07 14:09:30.090 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:09:30 compute-0 nova_compute[259550]: 2025-10-07 14:09:30.090 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:30 compute-0 nova_compute[259550]: 2025-10-07 14:09:30.194 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:30 compute-0 nova_compute[259550]: 2025-10-07 14:09:30.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:30 compute-0 ceph-mon[74295]: pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 KiB/s wr, 219 op/s
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.017 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.441 2 DEBUG nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627574911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.507 2 INFO nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] instance snapshotting
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.519 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.529 2 DEBUG nova.compute.provider_tree [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.6 KiB/s wr, 204 op/s
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.769 2 INFO nova.virt.libvirt.driver [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Beginning live snapshot process
Oct 07 14:09:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/627574911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.836 2 DEBUG nova.scheduler.client.report [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.953 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.954 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.956 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 3.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.965 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:31 compute-0 nova_compute[259550]: 2025-10-07 14:09:31.966 2 INFO nova.compute.claims [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.094 2 DEBUG nova.virt.libvirt.imagebackend [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.214 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.215 2 DEBUG nova.network.neutron [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.274 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.308 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.337 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(7c558376131f4286b2374ce28a43ede9) on rbd image(7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.391 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.431 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.433 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.434 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating image(s)
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.461 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.491 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.525 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.529 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.599 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.601 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.602 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.602 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.626 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.631 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:09:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:09:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.786 2 DEBUG nova.network.neutron [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.788 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 07 14:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 07 14:09:32 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 07 14:09:32 compute-0 ceph-mon[74295]: pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.6 KiB/s wr, 204 op/s
Oct 07 14:09:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:09:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354197765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:32 compute-0 podman[295317]: 2025-10-07 14:09:32.941203903 +0000 UTC m=+0.114891271 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.955 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.965 2 DEBUG nova.compute.provider_tree [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:32 compute-0 nova_compute[259550]: 2025-10-07 14:09:32.973 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.010 2 DEBUG nova.scheduler.client.report [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.054 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.055 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.062 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] cloning vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk@7c558376131f4286b2374ce28a43ede9 to images/b8bca106-0814-4401-a772-c94d18dd013b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:09:33 compute-0 ovn_controller[151684]: 2025-10-07T14:09:33Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ce:2c 10.100.0.14
Oct 07 14:09:33 compute-0 ovn_controller[151684]: 2025-10-07T14:09:33Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ce:2c 10.100.0.14
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.102 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] resizing rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.162 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.162 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.185 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.232 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.242 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] flattening images/b8bca106-0814-4401-a772-c94d18dd013b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.297 2 DEBUG nova.objects.instance [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'migration_context' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.314 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.314 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Ensure instance console log exists: /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.317 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:33 compute-0 sudo[295452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.323 2 WARNING nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:33 compute-0 sudo[295452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:33 compute-0 sudo[295452]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.342 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.344 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.351 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.355 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.355 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.358 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:33 compute-0 ovn_controller[151684]: 2025-10-07T14:09:33Z|00176|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:33 compute-0 sudo[295494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.409 2 DEBUG nova.policy [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:09:33 compute-0 sudo[295494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:33 compute-0 sudo[295494]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.415 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.417 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.417 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating image(s)
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.454 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:33 compute-0 sudo[295520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:33 compute-0 sudo[295520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:33 compute-0 sudo[295520]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.507 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.537 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.546 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:33 compute-0 sudo[295579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:09:33 compute-0 ovn_controller[151684]: 2025-10-07T14:09:33Z|00177|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:09:33 compute-0 sudo[295579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.627 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.627 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.628 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.629 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.653 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.656 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f707575c-3219-44e4-9655-ccc194e7385d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253893063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:33 compute-0 nova_compute[259550]: 2025-10-07 14:09:33.953 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:33 compute-0 ceph-mon[74295]: osdmap e159: 3 total, 3 up, 3 in
Oct 07 14:09:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1354197765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 sudo[295579]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:34 compute-0 podman[295716]: 2025-10-07 14:09:34.102134071 +0000 UTC m=+0.089962905 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.157 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.162 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d8f6b55-9ccc-40a7-a7cd-343cfb324b8b does not exist
Oct 07 14:09:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1992dcd0-1526-47e6-92ad-a05567ec8796 does not exist
Oct 07 14:09:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 237c41b0-e65f-486d-9b22-187ce9c355d0 does not exist
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.194 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.195 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846158.959316, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.195 2 INFO nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Stopped (Lifecycle Event)
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.225 2 DEBUG nova.compute.manager [None req-4d914caf-5b44-4783-82a4-d16fc69e0019 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:34 compute-0 sudo[295754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:34 compute-0 sudo[295754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:34 compute-0 sudo[295754]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:34 compute-0 sudo[295782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:09:34 compute-0 sudo[295782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:34 compute-0 sudo[295782]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:34 compute-0 sudo[295844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:34 compute-0 sudo[295844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:34 compute-0 sudo[295844]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.479 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(7c558376131f4286b2374ce28a43ede9) on rbd image(7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:09:34 compute-0 sudo[295869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:09:34 compute-0 sudo[295869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.536 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f707575c-3219-44e4-9655-ccc194e7385d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235483505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.654 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.700 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.702 2 DEBUG nova.objects.instance [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.735 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <uuid>b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</uuid>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <name>instance-0000001b</name>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1302379366</nova:name>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:33</nova:creationTime>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:user uuid="61181e933b9841959e5a4a707bc48adf">tempest-ServersAdminNegativeTestJSON-484006704-project-member</nova:user>
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <nova:project uuid="5fb6da28aeda44c5b898c384c6853b38">tempest-ServersAdminNegativeTestJSON-484006704</nova:project>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="serial">b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="uuid">b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk">
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config">
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:34 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/console.log" append="off"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:34 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:34 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:34 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:34 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:34 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.843 2 DEBUG nova.objects.instance [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.887 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Successfully created port: 63d67e4f-8bad-4402-b835-b12010348a29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.891 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.891 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.892 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Using config drive
Oct 07 14:09:34 compute-0 podman[296006]: 2025-10-07 14:09:34.899108461 +0000 UTC m=+0.054339382 container create 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.919 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Ensure instance console log exists: /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.929 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:34 compute-0 nova_compute[259550]: 2025-10-07 14:09:34.929 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:34 compute-0 systemd[1]: Started libpod-conmon-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope.
Oct 07 14:09:34 compute-0 ceph-mon[74295]: pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4253893063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:09:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/235483505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:34 compute-0 podman[296006]: 2025-10-07 14:09:34.869860149 +0000 UTC m=+0.025091100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:35 compute-0 podman[296006]: 2025-10-07 14:09:35.018603216 +0000 UTC m=+0.173834167 container init 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 07 14:09:35 compute-0 podman[296006]: 2025-10-07 14:09:35.028015886 +0000 UTC m=+0.183246807 container start 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:35 compute-0 podman[296006]: 2025-10-07 14:09:35.032961999 +0000 UTC m=+0.188192930 container attach 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:09:35 compute-0 systemd[1]: libpod-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope: Deactivated successfully.
Oct 07 14:09:35 compute-0 great_diffie[296040]: 167 167
Oct 07 14:09:35 compute-0 conmon[296040]: conmon 7edb8644f1a03e83ae51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope/container/memory.events
Oct 07 14:09:35 compute-0 podman[296006]: 2025-10-07 14:09:35.038579269 +0000 UTC m=+0.193810190 container died 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab84501ef79dcb94318cfde252df2251fd49742ffaf2cab7a512129d2d7dc86-merged.mount: Deactivated successfully.
Oct 07 14:09:35 compute-0 podman[296006]: 2025-10-07 14:09:35.112608418 +0000 UTC m=+0.267839329 container remove 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.132 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating config drive at /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.138 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapqavg5y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:35 compute-0 systemd[1]: libpod-conmon-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope: Deactivated successfully.
Oct 07 14:09:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 07 14:09:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 07 14:09:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.223 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(snap) on rbd image(b8bca106-0814-4401-a772-c94d18dd013b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.291 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapqavg5y" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.318 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.322 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:35 compute-0 podman[296083]: 2025-10-07 14:09:35.326083923 +0000 UTC m=+0.050428639 container create 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:35 compute-0 systemd[1]: Started libpod-conmon-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope.
Oct 07 14:09:35 compute-0 podman[296083]: 2025-10-07 14:09:35.305177944 +0000 UTC m=+0.029522680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:35 compute-0 podman[296083]: 2025-10-07 14:09:35.444361544 +0000 UTC m=+0.168706260 container init 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:09:35 compute-0 podman[296083]: 2025-10-07 14:09:35.454499765 +0000 UTC m=+0.178844481 container start 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:09:35 compute-0 podman[296083]: 2025-10-07 14:09:35.459799507 +0000 UTC m=+0.184144233 container attach 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.519 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.520 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deleting local config drive /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config because it was imported into RBD.
Oct 07 14:09:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 280 op/s
Oct 07 14:09:35 compute-0 nova_compute[259550]: 2025-10-07 14:09:35.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:35 compute-0 systemd-machined[214580]: New machine qemu-31-instance-0000001b.
Oct 07 14:09:35 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Oct 07 14:09:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.023 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Successfully updated port: 63d67e4f-8bad-4402-b835-b12010348a29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.058 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.059 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.059 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.132 2 DEBUG nova.compute.manager [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-changed-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.133 2 DEBUG nova.compute.manager [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Refreshing instance network info cache due to event network-changed-63d67e4f-8bad-4402-b835-b12010348a29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.133 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 07 14:09:36 compute-0 ceph-mon[74295]: osdmap e160: 3 total, 3 up, 3 in
Oct 07 14:09:36 compute-0 ceph-mon[74295]: pgmap v1297: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 280 op/s
Oct 07 14:09:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 07 14:09:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.294 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b8bca106-0814-4401-a772-c94d18dd013b
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:09:36 compute-0 nova_compute[259550]: 2025-10-07 14:09:36.522 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(snap) on rbd image(b8bca106-0814-4401-a772-c94d18dd013b) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:09:36 compute-0 lucid_herschel[296119]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:09:36 compute-0 lucid_herschel[296119]: --> relative data size: 1.0
Oct 07 14:09:36 compute-0 lucid_herschel[296119]: --> All data devices are unavailable
Oct 07 14:09:36 compute-0 systemd[1]: libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Deactivated successfully.
Oct 07 14:09:36 compute-0 systemd[1]: libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Consumed 1.170s CPU time.
Oct 07 14:09:36 compute-0 conmon[296119]: conmon 6fdcd26d13ccf20c58f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope/container/memory.events
Oct 07 14:09:36 compute-0 podman[296083]: 2025-10-07 14:09:36.690314915 +0000 UTC m=+1.414659621 container died 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f-merged.mount: Deactivated successfully.
Oct 07 14:09:36 compute-0 podman[296083]: 2025-10-07 14:09:36.755087415 +0000 UTC m=+1.479432131 container remove 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:09:36 compute-0 systemd[1]: libpod-conmon-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Deactivated successfully.
Oct 07 14:09:36 compute-0 sudo[295869]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:36 compute-0 sudo[296211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:36 compute-0 sudo[296211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:36 compute-0 sudo[296211]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:36 compute-0 sudo[296269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:09:36 compute-0 sudo[296269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:36 compute-0 sudo[296269]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:37 compute-0 sudo[296301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:37 compute-0 sudo[296301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:37 compute-0 sudo[296301]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:37 compute-0 sudo[296328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:09:37 compute-0 sudo[296328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 07 14:09:37 compute-0 ceph-mon[74295]: osdmap e161: 3 total, 3 up, 3 in
Oct 07 14:09:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 07 14:09:37 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.406 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.488 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846177.4881973, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Resumed (Lifecycle Event)
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.494 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance network_info: |[{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.496 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.496 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Refreshing network info cache for port 63d67e4f-8bad-4402-b835-b12010348a29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.499 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start _get_guest_xml network_info=[{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.507 2 INFO nova.virt.libvirt.driver [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance spawned successfully.
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.508 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.512 2 WARNING nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.515 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.520 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.520 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.538322068 +0000 UTC m=+0.069259801 container create 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:09:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 20 MiB/s wr, 417 op/s
Oct 07 14:09:37 compute-0 systemd[1]: Started libpod-conmon-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope.
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.505136642 +0000 UTC m=+0.036074395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.648107703 +0000 UTC m=+0.179045446 container init 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.65920606 +0000 UTC m=+0.190143783 container start 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:09:37 compute-0 friendly_morse[296429]: 167 167
Oct 07 14:09:37 compute-0 systemd[1]: libpod-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope: Deactivated successfully.
Oct 07 14:09:37 compute-0 conmon[296429]: conmon 800c2d36c7cdc4df3330 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope/container/memory.events
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.664224703 +0000 UTC m=+0.195162426 container attach 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.673526582 +0000 UTC m=+0.204464305 container died 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5a6dd4675455ae89680f8f4ceb13283ce24670b3a7008b60008c2589bdb1350-merged.mount: Deactivated successfully.
Oct 07 14:09:37 compute-0 podman[296411]: 2025-10-07 14:09:37.72805739 +0000 UTC m=+0.258995103 container remove 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:09:37 compute-0 systemd[1]: libpod-conmon-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope: Deactivated successfully.
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.826 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.827 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.834 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.878 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.880 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.881 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.881 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.882 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 nova_compute[259550]: 2025-10-07 14:09:37.882 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:37 compute-0 podman[296455]: 2025-10-07 14:09:37.940212579 +0000 UTC m=+0.052947455 container create f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:09:37 compute-0 systemd[1]: Started libpod-conmon-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope.
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.008 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.009 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846177.4894216, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.010 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Started (Lifecycle Event)
Oct 07 14:09:38 compute-0 podman[296455]: 2025-10-07 14:09:37.914236736 +0000 UTC m=+0.026971632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:38 compute-0 podman[296455]: 2025-10-07 14:09:38.041958279 +0000 UTC m=+0.154693175 container init f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:09:38 compute-0 podman[296455]: 2025-10-07 14:09:38.050290742 +0000 UTC m=+0.163025618 container start f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:09:38 compute-0 podman[296455]: 2025-10-07 14:09:38.055558173 +0000 UTC m=+0.168293049 container attach f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.068 2 WARNING nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Image not found during snapshot: nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.094 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:38 compute-0 ceph-mon[74295]: osdmap e162: 3 total, 3 up, 3 in
Oct 07 14:09:38 compute-0 ceph-mon[74295]: pgmap v1300: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 20 MiB/s wr, 417 op/s
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.259 2 INFO nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 5.83 seconds to spawn the instance on the hypervisor.
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.261 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.320 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075208629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.349 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.380 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.392 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.449 2 INFO nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 10.70 seconds to build instance.
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.555 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257779255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.857 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.862 2 DEBUG nova.virt.libvirt.vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.863 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.864 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.866 2 DEBUG nova.objects.instance [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.900 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <uuid>f707575c-3219-44e4-9655-ccc194e7385d</uuid>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <name>instance-0000001c</name>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-725999350</nova:name>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:37</nova:creationTime>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <nova:port uuid="63d67e4f-8bad-4402-b835-b12010348a29">
Oct 07 14:09:38 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="serial">f707575c-3219-44e4-9655-ccc194e7385d</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="uuid">f707575c-3219-44e4-9655-ccc194e7385d</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f707575c-3219-44e4-9655-ccc194e7385d_disk">
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f707575c-3219-44e4-9655-ccc194e7385d_disk.config">
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:38 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:20:81:bb"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <target dev="tap63d67e4f-8b"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/console.log" append="off"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:38 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:38 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:38 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:38 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:38 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.906 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Preparing to wait for external event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.907 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.908 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.908 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.909 2 DEBUG nova.virt.libvirt.vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.910 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.911 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.911 2 DEBUG os_vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.921 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.922 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.922 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.923 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.923 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.924 2 INFO nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Terminating instance
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.925 2 DEBUG nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:38 compute-0 eager_elgamal[296485]: {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     "0": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "devices": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "/dev/loop3"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             ],
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_name": "ceph_lv0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_size": "21470642176",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "name": "ceph_lv0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "tags": {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_name": "ceph",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.crush_device_class": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.encrypted": "0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_id": "0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.vdo": "0"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             },
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "vg_name": "ceph_vg0"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         }
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     ],
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     "1": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "devices": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "/dev/loop4"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             ],
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_name": "ceph_lv1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_size": "21470642176",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "name": "ceph_lv1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "tags": {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_name": "ceph",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.crush_device_class": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.encrypted": "0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_id": "1",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.vdo": "0"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             },
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "vg_name": "ceph_vg1"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         }
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     ],
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     "2": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "devices": [
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "/dev/loop5"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             ],
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_name": "ceph_lv2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_size": "21470642176",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "name": "ceph_lv2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "tags": {
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.cluster_name": "ceph",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.crush_device_class": "",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.encrypted": "0",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osd_id": "2",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:                 "ceph.vdo": "0"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             },
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "type": "block",
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:             "vg_name": "ceph_vg2"
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:         }
Oct 07 14:09:38 compute-0 eager_elgamal[296485]:     ]
Oct 07 14:09:38 compute-0 eager_elgamal[296485]: }
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63d67e4f-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63d67e4f-8b, col_values=(('external_ids', {'iface-id': '63d67e4f-8bad-4402-b835-b12010348a29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:81:bb', 'vm-uuid': 'f707575c-3219-44e4-9655-ccc194e7385d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:38 compute-0 NetworkManager[44949]: <info>  [1759846178.9397] manager: (tap63d67e4f-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:38 compute-0 nova_compute[259550]: 2025-10-07 14:09:38.953 2 INFO os_vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b')
Oct 07 14:09:38 compute-0 systemd[1]: libpod-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope: Deactivated successfully.
Oct 07 14:09:38 compute-0 podman[296455]: 2025-10-07 14:09:38.982668191 +0000 UTC m=+1.095403067 container died f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.079 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.080 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.080 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:20:81:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.081 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Using config drive
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.104 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125-merged.mount: Deactivated successfully.
Oct 07 14:09:39 compute-0 kernel: tap1a36fc68-b9 (unregistering): left promiscuous mode
Oct 07 14:09:39 compute-0 NetworkManager[44949]: <info>  [1759846179.1768] device (tap1a36fc68-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.175 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updated VIF entry in instance network info cache for port 63d67e4f-8bad-4402-b835-b12010348a29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.175 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 ovn_controller[151684]: 2025-10-07T14:09:39Z|00178|binding|INFO|Releasing lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 from this chassis (sb_readonly=0)
Oct 07 14:09:39 compute-0 ovn_controller[151684]: 2025-10-07T14:09:39Z|00179|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 down in Southbound
Oct 07 14:09:39 compute-0 ovn_controller[151684]: 2025-10-07T14:09:39Z|00180|binding|INFO|Removing iface tap1a36fc68-b9 ovn-installed in OVS
Oct 07 14:09:39 compute-0 podman[296455]: 2025-10-07 14:09:39.203958175 +0000 UTC m=+1.316693051 container remove f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 systemd[1]: libpod-conmon-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope: Deactivated successfully.
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1075208629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1257779255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:39 compute-0 sudo[296328]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:39 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 07 14:09:39 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 13.783s CPU time.
Oct 07 14:09:39 compute-0 systemd-machined[214580]: Machine qemu-30-instance-0000001a terminated.
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.333 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ce:2c 10.100.0.14'], port_security=['fa:16:3e:30:ce:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a36fc68-b90f-4e28-a866-7dfb6f218d37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.336 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.337 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.341 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d12efc0-c213-4b23-8b62-dfffa5691462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.342 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore
Oct 07 14:09:39 compute-0 sudo[296580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:39 compute-0 sudo[296580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.353 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:39 compute-0 sudo[296580]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.408 2 INFO nova.virt.libvirt.driver [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance destroyed successfully.
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.409 2 DEBUG nova.objects.instance [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:39 compute-0 sudo[296611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:09:39 compute-0 sudo[296611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:39 compute-0 sudo[296611]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.460 2 DEBUG nova.virt.libvirt.vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:09:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:38Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.463 2 DEBUG nova.network.os_vif_util [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.466 2 DEBUG nova.network.os_vif_util [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.467 2 DEBUG os_vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a36fc68-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.484 2 INFO os_vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9')
Oct 07 14:09:39 compute-0 sudo[296656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:39 compute-0 sudo[296656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:39 compute-0 sudo[296656]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:39 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : haproxy version is 2.8.14-c23fe91
Oct 07 14:09:39 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : path to executable is /usr/sbin/haproxy
Oct 07 14:09:39 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [WARNING]  (294879) : Exiting Master process...
Oct 07 14:09:39 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [ALERT]    (294879) : Current worker (294884) exited with code 143 (Terminated)
Oct 07 14:09:39 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [WARNING]  (294879) : All workers exited. Exiting... (0)
Oct 07 14:09:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 225 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 19 MiB/s wr, 558 op/s
Oct 07 14:09:39 compute-0 systemd[1]: libpod-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope: Deactivated successfully.
Oct 07 14:09:39 compute-0 podman[296678]: 2025-10-07 14:09:39.583487269 +0000 UTC m=+0.066395825 container died 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 07 14:09:39 compute-0 sudo[296713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0-userdata-shm.mount: Deactivated successfully.
Oct 07 14:09:39 compute-0 sudo[296713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc2a49701af0e481b1ad6d0cc40f5f1da2f9ada3657154d7d31efc1fe61be3f5-merged.mount: Deactivated successfully.
Oct 07 14:09:39 compute-0 podman[296678]: 2025-10-07 14:09:39.646313608 +0000 UTC m=+0.129222164 container cleanup 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:09:39 compute-0 systemd[1]: libpod-conmon-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope: Deactivated successfully.
Oct 07 14:09:39 compute-0 podman[296755]: 2025-10-07 14:09:39.764379373 +0000 UTC m=+0.079579818 container remove 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.773 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[29985c4d-c466-41d7-b09d-4c4a0195c234]: (4, ('Tue Oct  7 02:09:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0)\n757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0\nTue Oct  7 02:09:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0)\n757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[867aa254-ab72-4c4d-ae53-8e63b54a35ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 kernel: tap0a5c95d4-10: left promiscuous mode
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.800 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating config drive at /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.812 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9526cc21-07d9-4225-9c0e-bd770822cf0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.816 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrocjkop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.843 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7403b7-bd48-4a1a-ae8d-c1a6c6af7df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7991d7c5-eab8-496e-bda8-04acd728e7c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.867 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfcc14e-54ff-4083-be29-c61be0e61b50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674174, 'reachable_time': 37190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296791, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.879 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:09:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.879 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[770a260a-69cb-4f2d-ba52-5d592688bd33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:39 compute-0 nova_compute[259550]: 2025-10-07 14:09:39.971 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrocjkop" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.005 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.010 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config f707575c-3219-44e4-9655-ccc194e7385d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.079011492 +0000 UTC m=+0.060984481 container create 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:09:40 compute-0 systemd[1]: Started libpod-conmon-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope.
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.052674568 +0000 UTC m=+0.034647577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.19976753 +0000 UTC m=+0.181740539 container init 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.210030224 +0000 UTC m=+0.192003213 container start 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:09:40 compute-0 crazy_ramanujan[296865]: 167 167
Oct 07 14:09:40 compute-0 systemd[1]: libpod-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope: Deactivated successfully.
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.224687666 +0000 UTC m=+0.206660685 container attach 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.226458493 +0000 UTC m=+0.208431512 container died 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:09:40 compute-0 ceph-mon[74295]: pgmap v1301: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 225 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 19 MiB/s wr, 558 op/s
Oct 07 14:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb189ee410f28ddd3eb718b05419a4ec278b0268cae55153b0fde6af9c12a598-merged.mount: Deactivated successfully.
Oct 07 14:09:40 compute-0 podman[296831]: 2025-10-07 14:09:40.334150471 +0000 UTC m=+0.316123500 container remove 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:09:40 compute-0 systemd[1]: libpod-conmon-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope: Deactivated successfully.
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.366 2 INFO nova.virt.libvirt.driver [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deleting instance files /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_del
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.368 2 INFO nova.virt.libvirt.driver [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deletion of /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_del complete
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.372 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config f707575c-3219-44e4-9655-ccc194e7385d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.373 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deleting local config drive /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config because it was imported into RBD.
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.438 2 INFO nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 1.51 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.440 2 DEBUG oslo.service.loopingcall [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.441 2 DEBUG nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.441 2 DEBUG nova.network.neutron [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:40 compute-0 kernel: tap63d67e4f-8b: entered promiscuous mode
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.4529] manager: (tap63d67e4f-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_controller[151684]: 2025-10-07T14:09:40Z|00181|binding|INFO|Claiming lport 63d67e4f-8bad-4402-b835-b12010348a29 for this chassis.
Oct 07 14:09:40 compute-0 ovn_controller[151684]: 2025-10-07T14:09:40Z|00182|binding|INFO|63d67e4f-8bad-4402-b835-b12010348a29: Claiming fa:16:3e:20:81:bb 10.100.0.11
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.469 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:81:bb 10.100.0.11'], port_security=['fa:16:3e:20:81:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f707575c-3219-44e4-9655-ccc194e7385d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=63d67e4f-8bad-4402-b835-b12010348a29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.471 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 63d67e4f-8bad-4402-b835-b12010348a29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.472 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.489 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[828517a6-4aae-4e51-a7e8-b00a1eb2a440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.490 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.494 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2312bd5d-0c00-479e-8178-4ebcd49646db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e69f9f-6f35-45a0-a82a-1d93b8e5e6ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 systemd-udevd[296911]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.511 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[15543cfa-0137-493d-8900-61a854a0bfae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.5207] device (tap63d67e4f-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:09:40 compute-0 systemd-machined[214580]: New machine qemu-32-instance-0000001c.
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.5224] device (tap63d67e4f-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[734770dc-2a7d-4df6-8b30-7bc305204e80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 podman[296901]: 2025-10-07 14:09:40.543143397 +0000 UTC m=+0.058834754 container create d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_controller[151684]: 2025-10-07T14:09:40Z|00183|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 ovn-installed in OVS
Oct 07 14:09:40 compute-0 ovn_controller[151684]: 2025-10-07T14:09:40Z|00184|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 up in Southbound
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.584 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f43dcc-c9be-45a2-b520-94a8fbdbdc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 podman[296901]: 2025-10-07 14:09:40.520233535 +0000 UTC m=+0.035924872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.6165] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.615 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfa4437-70b3-49ac-83f8-47e1c57f0b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 systemd[1]: Started libpod-conmon-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope.
Oct 07 14:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.669 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6de18-a3e8-4324-b924-2130a112195b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.678 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31119c8b-3679-45cd-b7fa-063666857100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 podman[296901]: 2025-10-07 14:09:40.6932763 +0000 UTC m=+0.208967637 container init d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:09:40 compute-0 podman[296901]: 2025-10-07 14:09:40.7044994 +0000 UTC m=+0.220190717 container start d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.7091] device (tap9f80456d-d0): carrier: link connected
Oct 07 14:09:40 compute-0 podman[296901]: 2025-10-07 14:09:40.710627093 +0000 UTC m=+0.226318420 container attach d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.722 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f46042db-bbfb-4e41-9085-15e2c021d224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.748 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.749 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c32c43c8-6fce-4cc1-92b3-9b99fb07731c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676427, 'reachable_time': 19367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296957, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.773 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3299fe-90d3-41d3-9bc2-4b6490bb8b2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676427, 'tstamp': 676427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296965, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.809 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b7d0e0-84b1-47db-bddd-0935e8401347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676427, 'reachable_time': 19367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296974, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.852 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.853 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.857 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b59c88f2-0ac3-42c5-8d43-22e2365f6016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.862 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.863 2 INFO nova.compute.claims [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.917 2 DEBUG nova.compute.manager [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG nova.compute.manager [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Processing event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[910235ff-7362-408e-99fc-fc7c48810d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.939 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 kernel: tap9f80456d-d0: entered promiscuous mode
Oct 07 14:09:40 compute-0 NetworkManager[44949]: <info>  [1759846180.9437] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.947 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_controller[151684]: 2025-10-07T14:09:40Z|00185|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:09:40 compute-0 nova_compute[259550]: 2025-10-07 14:09:40.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.967 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.969 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bff38459-6902-4fb4-8898-5a4a2e70c625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.970 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:09:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.972 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.070 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:41 compute-0 podman[297051]: 2025-10-07 14:09:41.422633563 +0000 UTC m=+0.076231679 container create d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.455 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.4548469, f707575c-3219-44e4-9655-ccc194e7385d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.456 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Started (Lifecycle Event)
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.459 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.466 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:09:41 compute-0 podman[297051]: 2025-10-07 14:09:41.380168488 +0000 UTC m=+0.033766634 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.479 2 INFO nova.virt.libvirt.driver [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance spawned successfully.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.480 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:09:41 compute-0 systemd[1]: Started libpod-conmon-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.497 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c453bc14a40343f7d04b0c2d60dac70d117a7f556b8366c9ea1b23c523770b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:41 compute-0 podman[297051]: 2025-10-07 14:09:41.544710066 +0000 UTC m=+0.198308222 container init d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:09:41 compute-0 podman[297051]: 2025-10-07 14:09:41.550609343 +0000 UTC m=+0.204207459 container start d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:09:41 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : New worker (297077) forked
Oct 07 14:09:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.7 MiB/s wr, 486 op/s
Oct 07 14:09:41 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : Loading success.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.584 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.4551175, f707575c-3219-44e4-9655-ccc194e7385d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.585 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Paused (Lifecycle Event)
Oct 07 14:09:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979242475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.590 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.592 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.592 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.623 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.633 2 DEBUG nova.compute.provider_tree [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3979242475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.651 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.660 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.464237, f707575c-3219-44e4-9655-ccc194e7385d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.660 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Resumed (Lifecycle Event)
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.678 2 DEBUG nova.scheduler.client.report [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.699 2 INFO nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 8.28 seconds to spawn the instance on the hypervisor.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.699 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.710 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.714 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.721 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.722 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.775 2 INFO nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 13.66 seconds to build instance.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.797 2 DEBUG nova.network.neutron [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.803 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.810 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.810 2 DEBUG nova.network.neutron [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.816 2 INFO nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 1.37 seconds to deallocate network for instance.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.837 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.862 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.863 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.864 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]: {
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_id": 2,
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "type": "bluestore"
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     },
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_id": 1,
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "type": "bluestore"
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     },
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_id": 0,
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:         "type": "bluestore"
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]:     }
Oct 07 14:09:41 compute-0 affectionate_dubinsky[296935]: }
Oct 07 14:09:41 compute-0 systemd[1]: libpod-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Deactivated successfully.
Oct 07 14:09:41 compute-0 systemd[1]: libpod-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Consumed 1.163s CPU time.
Oct 07 14:09:41 compute-0 podman[296901]: 2025-10-07 14:09:41.908118738 +0000 UTC m=+1.423810055 container died d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7-merged.mount: Deactivated successfully.
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.955 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.958 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.958 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating image(s)
Oct 07 14:09:41 compute-0 podman[296901]: 2025-10-07 14:09:41.9856108 +0000 UTC m=+1.501302117 container remove d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:09:41 compute-0 nova_compute[259550]: 2025-10-07 14:09:41.996 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:42 compute-0 sudo[296713]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:42 compute-0 systemd[1]: libpod-conmon-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Deactivated successfully.
Oct 07 14:09:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.053 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:09:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1e25390b-c3cf-4537-9cc8-c3d5877ea27f does not exist
Oct 07 14:09:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9a406c8e-ad54-4b25-8242-ad473bd1b6ff does not exist
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.092 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.097 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.129 2 DEBUG nova.network.neutron [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.130 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.131 2 DEBUG oslo_concurrency.processutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:42 compute-0 sudo[297172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:09:42 compute-0 sudo[297172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:42 compute-0 sudo[297172]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.172 2 DEBUG nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.172 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.173 2 WARNING nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received unexpected event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with vm_state deleted and task_state None.
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.179 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.179 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.180 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.180 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.206 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.216 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:42 compute-0 sudo[297202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:09:42 compute-0 sudo[297202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:09:42 compute-0 sudo[297202]: pam_unix(sudo:session): session closed for user root
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.603 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049852527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:42 compute-0 ceph-mon[74295]: pgmap v1302: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.7 MiB/s wr, 486 op/s
Oct 07 14:09:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:09:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1049852527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.679 2 DEBUG oslo_concurrency.processutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.684 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] resizing rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.742 2 DEBUG nova.compute.provider_tree [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.814 2 DEBUG nova.scheduler.client.report [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.822 2 DEBUG nova.objects.instance [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'migration_context' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.899 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.918 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.919 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Ensure instance console log exists: /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.919 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.920 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.920 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.921 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.930 2 WARNING nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.936 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.937 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.941 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.942 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.942 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.945 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.945 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:42 compute-0 nova_compute[259550]: 2025-10-07 14:09:42.948 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.000 2 INFO nova.scheduler.client.report [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 7ee2904f-492a-4ffe-bdc2-6f4ec3285851
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.105 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.246 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 WARNING nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state active and task_state None.
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-deleted-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913645023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.463 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.496 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.503 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 219 op/s
Oct 07 14:09:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/913645023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.752 2 DEBUG nova.objects.instance [None req-dc0aa1dd-d6fc-498d-a6d7-1380c1d1c3bf a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.808 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846183.8070326, f707575c-3219-44e4-9655-ccc194e7385d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.808 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Paused (Lifecycle Event)
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.858 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:43 compute-0 nova_compute[259550]: 2025-10-07 14:09:43.943 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:09:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200452805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.058 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.060 2 DEBUG nova.objects.instance [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'pci_devices' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.108 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <uuid>643096ea-a4a9-4b3f-b8c1-130423a4248e</uuid>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <name>instance-0000001d</name>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-127110333</nova:name>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:42</nova:creationTime>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:user uuid="61181e933b9841959e5a4a707bc48adf">tempest-ServersAdminNegativeTestJSON-484006704-project-member</nova:user>
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <nova:project uuid="5fb6da28aeda44c5b898c384c6853b38">tempest-ServersAdminNegativeTestJSON-484006704</nova:project>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="serial">643096ea-a4a9-4b3f-b8c1-130423a4248e</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="uuid">643096ea-a4a9-4b3f-b8c1-130423a4248e</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/643096ea-a4a9-4b3f-b8c1-130423a4248e_disk">
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config">
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/console.log" append="off"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:44 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:44 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:44 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:44 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:44 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:44 compute-0 kernel: tap63d67e4f-8b (unregistering): left promiscuous mode
Oct 07 14:09:44 compute-0 NetworkManager[44949]: <info>  [1759846184.3016] device (tap63d67e4f-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 ovn_controller[151684]: 2025-10-07T14:09:44Z|00186|binding|INFO|Releasing lport 63d67e4f-8bad-4402-b835-b12010348a29 from this chassis (sb_readonly=0)
Oct 07 14:09:44 compute-0 ovn_controller[151684]: 2025-10-07T14:09:44Z|00187|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 down in Southbound
Oct 07 14:09:44 compute-0 ovn_controller[151684]: 2025-10-07T14:09:44Z|00188|binding|INFO|Removing iface tap63d67e4f-8b ovn-installed in OVS
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:81:bb 10.100.0.11'], port_security=['fa:16:3e:20:81:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f707575c-3219-44e4-9655-ccc194e7385d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=63d67e4f-8bad-4402-b835-b12010348a29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 63d67e4f-8bad-4402-b835-b12010348a29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.322 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b945dd7-f151-4ed1-8b3d-9bcc4d744303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.341 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.341 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.341 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Using config drive
Oct 07 14:09:44 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 07 14:09:44 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 3.210s CPU time.
Oct 07 14:09:44 compute-0 systemd-machined[214580]: Machine qemu-32-instance-0000001c terminated.
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.370 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : haproxy version is 2.8.14-c23fe91
Oct 07 14:09:44 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : path to executable is /usr/sbin/haproxy
Oct 07 14:09:44 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [WARNING]  (297074) : Exiting Master process...
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [ALERT]    (297074) : Current worker (297077) exited with code 143 (Terminated)
Oct 07 14:09:44 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [WARNING]  (297074) : All workers exited. Exiting... (0)
Oct 07 14:09:44 compute-0 systemd[1]: libpod-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope: Deactivated successfully.
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.533 2 DEBUG nova.compute.manager [None req-dc0aa1dd-d6fc-498d-a6d7-1380c1d1c3bf a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:44 compute-0 podman[297467]: 2025-10-07 14:09:44.536197047 +0000 UTC m=+0.103140517 container died d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82-userdata-shm.mount: Deactivated successfully.
Oct 07 14:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6c453bc14a40343f7d04b0c2d60dac70d117a7f556b8366c9ea1b23c523770b-merged.mount: Deactivated successfully.
Oct 07 14:09:44 compute-0 podman[297467]: 2025-10-07 14:09:44.594924317 +0000 UTC m=+0.161867787 container cleanup d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:09:44 compute-0 systemd[1]: libpod-conmon-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope: Deactivated successfully.
Oct 07 14:09:44 compute-0 podman[297501]: 2025-10-07 14:09:44.675023057 +0000 UTC m=+0.052408241 container remove d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:09:44 compute-0 ceph-mon[74295]: pgmap v1303: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 219 op/s
Oct 07 14:09:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3200452805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6e75d6-f1e2-4446-b678-decd39795a1a]: (4, ('Tue Oct  7 02:09:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82)\nd8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82\nTue Oct  7 02:09:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82)\nd8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71df4dd0-1fd4-4553-87af-10c724db0a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.689 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 kernel: tap9f80456d-d0: left promiscuous mode
Oct 07 14:09:44 compute-0 nova_compute[259550]: 2025-10-07 14:09:44.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7abf02d7-9155-4c3c-94f0-24621b316f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67444a6c-4ecb-4f8d-a7f8-ba9312b9a085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0ace23-918d-4bb3-bb55-ad4ee64702cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efebdc28-84c4-430b-842b-0a1ec1a3f08e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676414, 'reachable_time': 34167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297520, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.787 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:09:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.787 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e80ec2bd-b7aa-4626-9f36-9a60729bc2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.024 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating config drive at /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.029 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplayu_dmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.167 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplayu_dmu" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.201 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.205 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.384 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.385 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deleting local config drive /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config because it was imported into RBD.
Oct 07 14:09:45 compute-0 systemd-machined[214580]: New machine qemu-33-instance-0000001d.
Oct 07 14:09:45 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Oct 07 14:09:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 325 op/s
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 07 14:09:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 07 14:09:45 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.901 2 DEBUG nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:45 compute-0 nova_compute[259550]: 2025-10-07 14:09:45.903 2 WARNING nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state suspended and task_state None.
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.763 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.764 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.764 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846186.7629943, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Resumed (Lifecycle Event)
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.770 2 INFO nova.virt.libvirt.driver [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance spawned successfully.
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.770 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:09:46 compute-0 ceph-mon[74295]: pgmap v1304: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 325 op/s
Oct 07 14:09:46 compute-0 ceph-mon[74295]: osdmap e163: 3 total, 3 up, 3 in
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.837 2 DEBUG nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.990 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.991 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.991 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:09:46 compute-0 nova_compute[259550]: 2025-10-07 14:09:46.999 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.055 2 INFO nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] instance snapshotting
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.055 2 WARNING nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.159 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.160 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846186.7641366, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.160 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Started (Lifecycle Event)
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.241 2 INFO nova.virt.libvirt.driver [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Beginning cold snapshot process
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.250 2 INFO nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 5.29 seconds to spawn the instance on the hypervisor.
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.251 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.252 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.261 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.414 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.469 2 INFO nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 6.65 seconds to build instance.
Oct 07 14:09:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.610 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.682 2 DEBUG nova.virt.libvirt.imagebackend [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:09:47 compute-0 nova_compute[259550]: 2025-10-07 14:09:47.869 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(38b3ebd818224332bf23562c9421071f) on rbd image(f707575c-3219-44e4-9655-ccc194e7385d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.170 2 DEBUG nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.172 2 WARNING nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state suspended and task_state image_uploading.
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.535 2 DEBUG nova.objects.instance [None req-f5eba65c-e67e-43ad-8942-fe521d19dd61 f7c51012ac1149aa954786ee7320bd34 856d6220bca94fb080cb16cc9aed720e - - default default] Lazy-loading 'pci_devices' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.559 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846188.559075, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.560 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Paused (Lifecycle Event)
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.581 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:48 compute-0 nova_compute[259550]: 2025-10-07 14:09:48.604 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:09:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 07 14:09:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 07 14:09:49 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 07 14:09:49 compute-0 ceph-mon[74295]: pgmap v1306: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 07 14:09:49 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 07 14:09:49 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 3.161s CPU time.
Oct 07 14:09:49 compute-0 systemd-machined[214580]: Machine qemu-33-instance-0000001d terminated.
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.150 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/f707575c-3219-44e4-9655-ccc194e7385d_disk@38b3ebd818224332bf23562c9421071f to images/e10d002e-132f-4664-81e6-2c97f55f74ab clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.195 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.196 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.221 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.240 2 DEBUG nova.compute.manager [None req-f5eba65c-e67e-43ad-8942-fe521d19dd61 f7c51012ac1149aa954786ee7320bd34 856d6220bca94fb080cb16cc9aed720e - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.317 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.318 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.327 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.328 2 INFO nova.compute.claims [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.444 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e10d002e-132f-4664-81e6-2c97f55f74ab flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.527 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 181 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 235 op/s
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.784 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(38b3ebd818224332bf23562c9421071f) on rbd image(f707575c-3219-44e4-9655-ccc194e7385d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:09:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415437241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.977 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:49 compute-0 nova_compute[259550]: 2025-10-07 14:09:49.984 2 DEBUG nova.compute.provider_tree [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.006 2 DEBUG nova.scheduler.client.report [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.035 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.036 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 07 14:09:50 compute-0 ceph-mon[74295]: osdmap e164: 3 total, 3 up, 3 in
Oct 07 14:09:50 compute-0 ceph-mon[74295]: pgmap v1308: 305 pgs: 305 active+clean; 181 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 235 op/s
Oct 07 14:09:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/415437241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 07 14:09:50 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.096 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.097 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.107 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e10d002e-132f-4664-81e6-2c97f55f74ab) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.145 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.166 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.259 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.261 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.265 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating image(s)
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.297 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.335 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.363 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.367 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.447 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.449 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.449 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.450 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.473 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.477 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6a715eda-3357-4298-88cc-596cc9986690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.769 2 DEBUG nova.policy [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:09:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.898 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6a715eda-3357-4298-88cc-596cc9986690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:50 compute-0 nova_compute[259550]: 2025-10-07 14:09:50.977 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 07 14:09:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 07 14:09:51 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 07 14:09:51 compute-0 ceph-mon[74295]: osdmap e165: 3 total, 3 up, 3 in
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.141 2 DEBUG nova.objects.instance [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.156 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.156 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Ensure instance console log exists: /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.157 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.157 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:51 compute-0 nova_compute[259550]: 2025-10-07 14:09:51.158 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Oct 07 14:09:52 compute-0 podman[297953]: 2025-10-07 14:09:52.080173683 +0000 UTC m=+0.065699547 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:09:52 compute-0 podman[297954]: 2025-10-07 14:09:52.084237742 +0000 UTC m=+0.068400140 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:09:52 compute-0 ceph-mon[74295]: osdmap e166: 3 total, 3 up, 3 in
Oct 07 14:09:52 compute-0 ceph-mon[74295]: pgmap v1311: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.348 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Successfully created port: 18b5959f-439a-440f-8a00-0709f1f9730b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.729 2 INFO nova.virt.libvirt.driver [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Snapshot image upload complete
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.729 2 INFO nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 5.67 seconds to snapshot the instance on the hypervisor.
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.820 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.823 2 INFO nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Terminating instance
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquired lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:52 compute-0 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.036 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.290 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.345 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Successfully updated port: 18b5959f-439a-440f-8a00-0709f1f9730b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Releasing lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.348 2 DEBUG nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.355 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.355 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.362 2 INFO nova.virt.libvirt.driver [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance destroyed successfully.
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.362 2 DEBUG nova.objects.instance [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'resources' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.364 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.364 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.365 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.384 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.444 2 DEBUG nova.compute.manager [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-changed-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.445 2 DEBUG nova.compute.manager [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Refreshing instance network info cache due to event network-changed-18b5959f-439a-440f-8a00-0709f1f9730b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.445 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.467 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.467 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.475 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.476 2 INFO nova.compute.claims [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.515 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.4 MiB/s wr, 221 op/s
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.626 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.988 2 INFO nova.virt.libvirt.driver [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deleting instance files /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e_del
Oct 07 14:09:53 compute-0 nova_compute[259550]: 2025-10-07 14:09:53.990 2 INFO nova.virt.libvirt.driver [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deletion of /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e_del complete
Oct 07 14:09:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347357936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.098 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.104 2 DEBUG nova.compute.provider_tree [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.154 2 DEBUG nova.scheduler.client.report [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.161 2 INFO nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG oslo.service.loopingcall [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.272 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.273 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.386 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.387 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846179.3892694, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.395 2 INFO nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Stopped (Lifecycle Event)
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.434 2 DEBUG nova.compute.manager [None req-aff8345b-8cbe-480a-9ae1-c7283f88e60b - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.444 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.495 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.622 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.624 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.624 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating image(s)
Oct 07 14:09:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 07 14:09:54 compute-0 ceph-mon[74295]: pgmap v1312: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.4 MiB/s wr, 221 op/s
Oct 07 14:09:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3347357936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 07 14:09:54 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.661 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.693 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.723 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.728 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.770 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.775 2 DEBUG nova.policy [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8656e87752c0498891bd00461451ea40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.779 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.798 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.801 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.801 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance network_info: |[{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.802 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.802 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Refreshing network info cache for port 18b5959f-439a-440f-8a00-0709f1f9730b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.805 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start _get_guest_xml network_info=[{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.811 2 WARNING nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.817 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.818 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.822 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.824 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.825 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.825 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.848 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.854 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 75fea627-8d48-4772-86a2-ca6bc47ed597_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.887 2 INFO nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 0.72 seconds to deallocate network for instance.
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.894 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.895 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.895 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.896 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.896 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:54 compute-0 nova_compute[259550]: 2025-10-07 14:09:54.903 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.019 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.020 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.132 2 DEBUG oslo_concurrency.processutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.248 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 75fea627-8d48-4772-86a2-ca6bc47ed597_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.335 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.337 2 INFO nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Terminating instance
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.338 2 DEBUG nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.345 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] resizing rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687494201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.375 2 INFO nova.virt.libvirt.driver [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance destroyed successfully.
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.376 2 DEBUG nova.objects.instance [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.395 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.413 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.416 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.444 2 DEBUG nova.virt.libvirt.vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:52Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.445 2 DEBUG nova.network.os_vif_util [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.445 2 DEBUG nova.network.os_vif_util [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.446 2 DEBUG os_vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63d67e4f-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.460 2 INFO os_vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b')
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.521 2 DEBUG nova.objects.instance [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'migration_context' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515382844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 446 op/s
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.590 2 DEBUG oslo_concurrency.processutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.596 2 DEBUG nova.compute.provider_tree [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.607 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Ensure instance console log exists: /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.609 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.648 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Successfully created port: cdae5f4d-2069-4d30-bd74-31e9459986db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.660 2 DEBUG nova.scheduler.client.report [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 ceph-mon[74295]: osdmap e167: 3 total, 3 up, 3 in
Oct 07 14:09:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2687494201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3515382844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.696 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.736 2 INFO nova.scheduler.client.report [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Deleted allocations for instance 643096ea-a4a9-4b3f-b8c1-130423a4248e
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 07 14:09:55 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.852 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062624384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.888 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.889 2 DEBUG nova.virt.libvirt.vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:50Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.889 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.890 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.890 2 DEBUG nova.objects.instance [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.938 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <uuid>6a715eda-3357-4298-88cc-596cc9986690</uuid>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <name>instance-0000001e</name>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-799592627</nova:name>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:54</nova:creationTime>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <nova:port uuid="18b5959f-439a-440f-8a00-0709f1f9730b">
Oct 07 14:09:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="serial">6a715eda-3357-4298-88cc-596cc9986690</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="uuid">6a715eda-3357-4298-88cc-596cc9986690</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6a715eda-3357-4298-88cc-596cc9986690_disk">
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6a715eda-3357-4298-88cc-596cc9986690_disk.config">
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:2a:94:47"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <target dev="tap18b5959f-43"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/console.log" append="off"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Preparing to wait for external event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.940 2 DEBUG nova.virt.libvirt.vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:50Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.941 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.943 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.943 2 DEBUG os_vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18b5959f-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18b5959f-43, col_values=(('external_ids', {'iface-id': '18b5959f-439a-440f-8a00-0709f1f9730b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:94:47', 'vm-uuid': '6a715eda-3357-4298-88cc-596cc9986690'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:55 compute-0 NetworkManager[44949]: <info>  [1759846195.9508] manager: (tap18b5959f-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:55 compute-0 nova_compute[259550]: 2025-10-07 14:09:55.957 2 INFO os_vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43')
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.004 2 INFO nova.virt.libvirt.driver [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deleting instance files /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d_del
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.004 2 INFO nova.virt.libvirt.driver [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deletion of /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d_del complete
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.033 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updated VIF entry in instance network info cache for port 18b5959f-439a-440f-8a00-0709f1f9730b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.034 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.053 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.053 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.054 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:2a:94:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.054 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Using config drive
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.074 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.082 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.136 2 INFO nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG oslo.service.loopingcall [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG nova.network.neutron [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:56 compute-0 ceph-mon[74295]: pgmap v1314: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 446 op/s
Oct 07 14:09:56 compute-0 ceph-mon[74295]: osdmap e168: 3 total, 3 up, 3 in
Oct 07 14:09:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3062624384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.909 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating config drive at /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.914 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqhg7my execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:56.991 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:56 compute-0 nova_compute[259550]: 2025-10-07 14:09:56.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:56.993 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.046 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Successfully updated port: cdae5f4d-2069-4d30-bd74-31e9459986db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.055 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqhg7my" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.087 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.092 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config 6a715eda-3357-4298-88cc-596cc9986690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.133 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.133 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.134 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.248 2 DEBUG nova.network.neutron [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.270 2 INFO nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 1.13 seconds to deallocate network for instance.
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.291 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.318 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.319 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.434 2 DEBUG oslo_concurrency.processutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 MiB/s wr, 297 op/s
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.716 2 DEBUG nova.compute.manager [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.716 2 DEBUG nova.compute.manager [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing instance network info cache due to event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.717 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.717 2 DEBUG nova.compute.manager [req-0a9234c8-2fbf-4b0d-99e9-b70bf95a9e61 req-ea419f89-07de-4b36-a0ef-f2de0a3d80ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-deleted-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.845 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.848 2 INFO nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Terminating instance
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.848 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.848 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquired lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.849 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:09:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.932 2 DEBUG oslo_concurrency.processutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.940 2 DEBUG nova.compute.provider_tree [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.944 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config 6a715eda-3357-4298-88cc-596cc9986690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.944 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deleting local config drive /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config because it was imported into RBD.
Oct 07 14:09:57 compute-0 nova_compute[259550]: 2025-10-07 14:09:57.957 2 DEBUG nova.scheduler.client.report [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:58 compute-0 kernel: tap18b5959f-43: entered promiscuous mode
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.0051] manager: (tap18b5959f-43): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Oct 07 14:09:58 compute-0 ovn_controller[151684]: 2025-10-07T14:09:58Z|00189|binding|INFO|Claiming lport 18b5959f-439a-440f-8a00-0709f1f9730b for this chassis.
Oct 07 14:09:58 compute-0 ovn_controller[151684]: 2025-10-07T14:09:58Z|00190|binding|INFO|18b5959f-439a-440f-8a00-0709f1f9730b: Claiming fa:16:3e:2a:94:47 10.100.0.3
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.023 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:58 compute-0 ceph-mon[74295]: pgmap v1316: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 MiB/s wr, 297 op/s
Oct 07 14:09:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2096327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:58 compute-0 systemd-udevd[298397]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:09:58 compute-0 ovn_controller[151684]: 2025-10-07T14:09:58Z|00191|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b ovn-installed in OVS
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.0601] device (tap18b5959f-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.0611] device (tap18b5959f-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:09:58 compute-0 systemd-machined[214580]: New machine qemu-34-instance-0000001e.
Oct 07 14:09:58 compute-0 ovn_controller[151684]: 2025-10-07T14:09:58Z|00192|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b up in Southbound
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:94:47 10.100.0.3'], port_security=['fa:16:3e:2a:94:47 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6a715eda-3357-4298-88cc-596cc9986690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=18b5959f-439a-440f-8a00-0709f1f9730b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:09:58 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.074 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 18b5959f-439a-440f-8a00-0709f1f9730b in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.075 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.088 2 INFO nova.scheduler.client.report [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance f707575c-3219-44e4-9655-ccc194e7385d
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.090 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7288b2f6-6516-4f01-9c15-7b5bcc0b7a0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.092 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.094 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09968225-31cb-4561-a445-2eaa5548128f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc23f60-bbca-4ae8-8aa6-ecc34724996b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.109 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a50e53f5-996f-4d07-b498-5a2dc6656bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.126 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdf761d-2384-4774-ac2d-2f6ab10769c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.166 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[313c7143-36cf-426b-96df-2d4d12d9483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.1742] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.173 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a76fe3d7-b43d-4e69-ac33-14195ee3432d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.219 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c51f0-9c74-40e3-9a17-16508b57a960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.223 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[087dd499-8132-4ede-97af-4f8479778019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.240 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.250 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.2555] device (tap0a5c95d4-10): carrier: link connected
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.261 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1465a50c-dbf0-497a-910a-798cb6310206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.281 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab22a44-3755-446e-9968-911770344643]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678182, 'reachable_time': 28794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298433, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.302 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[069ea50f-fa3d-400c-ac86-a475b22db606]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678182, 'tstamp': 678182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298434, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.321 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3060eb52-fe4c-4d98-a0c8-2c521bb7c267]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678182, 'reachable_time': 28794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298435, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.346 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.346 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance network_info: |[{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.347 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.347 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.350 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start _get_guest_xml network_info=[{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.357 2 WARNING nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78667487-b744-408b-9b73-65c782157503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.367 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.368 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.373 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.373 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.376 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.376 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.379 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.433 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7160d2e8-5e6b-49f9-9fd3-34b0f2a3b66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.437 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:58 compute-0 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct 07 14:09:58 compute-0 NetworkManager[44949]: <info>  [1759846198.4398] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.442 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:58 compute-0 ovn_controller[151684]: 2025-10-07T14:09:58Z|00193|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.463 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09beeb1f-59fe-4bfd-9fae-f3c4689c4463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.464 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:09:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.465 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.471 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.471 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.486 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.564 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.565 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.571 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.572 2 INFO nova.compute.claims [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.584 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.602 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Releasing lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.603 2 DEBUG nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:09:58 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 07 14:09:58 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 14.716s CPU time.
Oct 07 14:09:58 compute-0 systemd-machined[214580]: Machine qemu-31-instance-0000001b terminated.
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.747 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.834 2 INFO nova.virt.libvirt.driver [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance destroyed successfully.
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.835 2 DEBUG nova.objects.instance [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'resources' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755273547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.875 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:58 compute-0 podman[298528]: 2025-10-07 14:09:58.903372761 +0000 UTC m=+0.071963712 container create 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.908 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:58 compute-0 nova_compute[259550]: 2025-10-07 14:09:58.925 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:58 compute-0 systemd[1]: Started libpod-conmon-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope.
Oct 07 14:09:58 compute-0 podman[298528]: 2025-10-07 14:09:58.863905894 +0000 UTC m=+0.032496905 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:09:58 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77270b12605ac3132ea4eef6c757661be710663f1f44a5cebdb8882546cdf8c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:09:59 compute-0 podman[298528]: 2025-10-07 14:09:59.009654365 +0000 UTC m=+0.178245356 container init 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:09:59 compute-0 podman[298528]: 2025-10-07 14:09:59.016232198 +0000 UTC m=+0.184823159 container start 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:09:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3755273547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:59 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : New worker (298609) forked
Oct 07 14:09:59 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : Loading success.
Oct 07 14:09:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:09:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207458592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.300 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.306 2 DEBUG nova.compute.provider_tree [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.310 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846199.3104303, 6a715eda-3357-4298-88cc-596cc9986690 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Started (Lifecycle Event)
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.325 2 DEBUG nova.scheduler.client.report [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.330 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.337 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846199.3114696, 6a715eda-3357-4298-88cc-596cc9986690 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Paused (Lifecycle Event)
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.341 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.342 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.355 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.359 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.387 2 INFO nova.virt.libvirt.driver [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deleting instance files /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_del
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.389 2 INFO nova.virt.libvirt.driver [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deletion of /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_del complete
Oct 07 14:09:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:09:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694528085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.394 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.401 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.401 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.412 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.413 2 DEBUG nova.virt.libvirt.vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.413 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.414 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.415 2 DEBUG nova.objects.instance [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'pci_devices' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.430 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.440 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <uuid>75fea627-8d48-4772-86a2-ca6bc47ed597</uuid>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <name>instance-0000001f</name>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-1141910716</nova:name>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:09:58</nova:creationTime>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:user uuid="8656e87752c0498891bd00461451ea40">tempest-ServersTestJSON-218965943-project-member</nova:user>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:project uuid="da79ea7f82af425b975ddff6ef03a59a">tempest-ServersTestJSON-218965943</nova:project>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <nova:port uuid="cdae5f4d-2069-4d30-bd74-31e9459986db">
Oct 07 14:09:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <system>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="serial">75fea627-8d48-4772-86a2-ca6bc47ed597</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="uuid">75fea627-8d48-4772-86a2-ca6bc47ed597</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </system>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <os>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </os>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <features>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </features>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/75fea627-8d48-4772-86a2-ca6bc47ed597_disk">
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config">
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:09:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6e:ba:59"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <target dev="tapcdae5f4d-20"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/console.log" append="off"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <video>
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </video>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:09:59 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:09:59 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:09:59 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:09:59 compute-0 nova_compute[259550]: </domain>
Oct 07 14:09:59 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.442 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Preparing to wait for external event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.445 2 DEBUG nova.virt.libvirt.vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.445 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.446 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.446 2 DEBUG os_vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdae5f4d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcdae5f4d-20, col_values=(('external_ids', {'iface-id': 'cdae5f4d-2069-4d30-bd74-31e9459986db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:ba:59', 'vm-uuid': '75fea627-8d48-4772-86a2-ca6bc47ed597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:09:59 compute-0 NetworkManager[44949]: <info>  [1759846199.4551] manager: (tapcdae5f4d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.460 2 INFO nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG oslo.service.loopingcall [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.464 2 INFO os_vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20')
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.531 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.539 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846184.5341327, f707575c-3219-44e4-9655-ccc194e7385d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.539 2 INFO nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Stopped (Lifecycle Event)
Oct 07 14:09:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 239 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 10 MiB/s wr, 341 op/s
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.592 2 DEBUG nova.compute.manager [None req-07f7a84b-4d3b-4a73-bd2e-52acc3dc4c68 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.618 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.619 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.620 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No VIF found with MAC fa:16:3e:6e:ba:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.621 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Using config drive
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.654 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.745 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.786 2 DEBUG nova.policy [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.802 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.854 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.856 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.856 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating image(s)
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.881 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.911 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.947 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.952 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.989 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updated VIF entry in instance network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.990 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.993 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Processing event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.996 2 WARNING nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received unexpected event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with vm_state building and task_state spawning.
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.998 2 INFO nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 0.54 seconds to deallocate network for instance.
Oct 07 14:09:59 compute-0 nova_compute[259550]: 2025-10-07 14:09:59.998 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.006 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.007 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846200.0065184, 6a715eda-3357-4298-88cc-596cc9986690 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.007 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Resumed (Lifecycle Event)
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.012 2 INFO nova.virt.libvirt.driver [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance spawned successfully.
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.013 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.016 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.020 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.020 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.021 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.021 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3207458592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2694528085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:00 compute-0 ceph-mon[74295]: pgmap v1317: 305 pgs: 305 active+clean; 239 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 10 MiB/s wr, 341 op/s
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.043 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.044 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.050 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.054 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.092 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.098 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.098 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.104 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.109 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.110 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.110 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.137 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating config drive at /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.142 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31881ybv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.256 2 DEBUG oslo_concurrency.processutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.293 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.300 2 INFO nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 10.04 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.301 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.303 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31881ybv" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.332 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.338 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.370 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Successfully created port: 7f14c952-52c5-4957-ae99-09ff563a62a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.405 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.501 2 INFO nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 11.22 seconds to build instance.
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.508 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.542 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.544 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.544 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deleting local config drive /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config because it was imported into RBD.
Oct 07 14:10:00 compute-0 NetworkManager[44949]: <info>  [1759846200.6116] manager: (tapcdae5f4d-20): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct 07 14:10:00 compute-0 kernel: tapcdae5f4d-20: entered promiscuous mode
Oct 07 14:10:00 compute-0 systemd-udevd[298427]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:00 compute-0 ovn_controller[151684]: 2025-10-07T14:10:00Z|00194|binding|INFO|Claiming lport cdae5f4d-2069-4d30-bd74-31e9459986db for this chassis.
Oct 07 14:10:00 compute-0 ovn_controller[151684]: 2025-10-07T14:10:00Z|00195|binding|INFO|cdae5f4d-2069-4d30-bd74-31e9459986db: Claiming fa:16:3e:6e:ba:59 10.100.0.11
Oct 07 14:10:00 compute-0 NetworkManager[44949]: <info>  [1759846200.6336] device (tapcdae5f4d-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.633 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:ba:59 10.100.0.11'], port_security=['fa:16:3e:6e:ba:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '75fea627-8d48-4772-86a2-ca6bc47ed597', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73ed2d1-0e14-4944-923b-e8979adde65e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43ba345-e1ae-4a7f-bf78-bc4ea18d4057, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cdae5f4d-2069-4d30-bd74-31e9459986db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:00 compute-0 NetworkManager[44949]: <info>  [1759846200.6352] device (tapcdae5f4d-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.635 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cdae5f4d-2069-4d30-bd74-31e9459986db in datapath f753e53f-89de-4a31-8fea-3e9e1684d72a bound to our chassis
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.637 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f753e53f-89de-4a31-8fea-3e9e1684d72a
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[006d84fe-711a-4294-8601-38eab7e4cd76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.656 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf753e53f-81 in ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.659 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf753e53f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f034ca9-e138-4081-ac67-897357871b77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 systemd-machined[214580]: New machine qemu-35-instance-0000001f.
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.660 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0268c978-87dc-4a2b-a4c0-74c9c2c9953c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.662 2 DEBUG nova.objects.instance [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:00 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.673 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e41d515b-2ac9-4a9d-a72a-87f7fddaf361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.689 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.689 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Ensure instance console log exists: /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.690 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.690 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.691 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5a01c1-2eda-41c8-a5e3-43dd4aeaf3d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:00 compute-0 ovn_controller[151684]: 2025-10-07T14:10:00Z|00196|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db ovn-installed in OVS
Oct 07 14:10:00 compute-0 ovn_controller[151684]: 2025-10-07T14:10:00Z|00197|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db up in Southbound
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.739 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fe551248-7767-410d-bf0e-47cffb7c4dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.747 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4009113-d290-4b5d-b09f-4a571f29c78f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 NetworkManager[44949]: <info>  [1759846200.7488] manager: (tapf753e53f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Oct 07 14:10:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950897051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.792 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e34b23a6-bb9c-4190-a65d-f240d292eb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.794 2 DEBUG oslo_concurrency.processutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.796 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6c4122-5c10-4c8d-a12e-7dbb44505a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.803 2 DEBUG nova.compute.provider_tree [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 07 14:10:00 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.833 2 DEBUG nova.scheduler.client.report [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:00 compute-0 NetworkManager[44949]: <info>  [1759846200.8350] device (tapf753e53f-80): carrier: link connected
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.841 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7587a8-18e2-4ed7-9f9b-eea2cde19a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.863 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[340c894d-6978-4b2a-94f8-f2a6bd384092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf753e53f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:5c:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678439, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298921, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.882 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.883 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7086cd00-4aca-4cd2-85c1-6b099eb77061]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:5ca8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678439, 'tstamp': 678439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298922, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.907 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77d77f33-8f73-4e99-9f95-017f6cc02555]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf753e53f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:5c:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678439, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298923, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:00 compute-0 nova_compute[259550]: 2025-10-07 14:10:00.940 2 INFO nova.scheduler.client.report [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Deleted allocations for instance b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca
Oct 07 14:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda73f0-466f-40de-a68e-eca089f92d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6278cd10-9ad6-4bc0-a7a3-7151065d9954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf753e53f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf753e53f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:01 compute-0 NetworkManager[44949]: <info>  [1759846201.0281] manager: (tapf753e53f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:01 compute-0 kernel: tapf753e53f-80: entered promiscuous mode
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.032 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf753e53f-80, col_values=(('external_ids', {'iface-id': 'd1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:01 compute-0 ovn_controller[151684]: 2025-10-07T14:10:01Z|00198|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.036 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.037 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4405c6-b259-45a2-a584-8ffa913e4558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.038 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-f753e53f-89de-4a31-8fea-3e9e1684d72a
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID f753e53f-89de-4a31-8fea-3e9e1684d72a
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:10:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.039 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'env', 'PROCESS_TAG=haproxy-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f753e53f-89de-4a31-8fea-3e9e1684d72a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:10:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2950897051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:01 compute-0 ceph-mon[74295]: osdmap e169: 3 total, 3 up, 3 in
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.111 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:01 compute-0 podman[298997]: 2025-10-07 14:10:01.509091924 +0000 UTC m=+0.065760529 container create 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:10:01 compute-0 systemd[1]: Started libpod-conmon-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope.
Oct 07 14:10:01 compute-0 podman[298997]: 2025-10-07 14:10:01.475612994 +0000 UTC m=+0.032281619 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:10:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 4.8 MiB/s wr, 253 op/s
Oct 07 14:10:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b521a0ec4b82aa4091ed78acfb32e87147f7bbf599891bee81258eeecedcd051/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:01 compute-0 podman[298997]: 2025-10-07 14:10:01.618336576 +0000 UTC m=+0.175005211 container init 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:01 compute-0 podman[298997]: 2025-10-07 14:10:01.627392163 +0000 UTC m=+0.184060778 container start 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:10:01 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : New worker (299019) forked
Oct 07 14:10:01 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : Loading success.
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.685 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846201.684147, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.685 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Started (Lifecycle Event)
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.721 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Successfully updated port: 7f14c952-52c5-4957-ae99-09ff563a62a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.725 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846201.6876004, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.725 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Paused (Lifecycle Event)
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.918 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.919 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.919 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.920 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.924 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:01 compute-0 nova_compute[259550]: 2025-10-07 14:10:01.952 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.051 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:10:02 compute-0 ceph-mon[74295]: pgmap v1319: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 4.8 MiB/s wr, 253 op/s
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.127 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.127 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Processing event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-changed-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Refreshing instance network info cache due to event network-changed-7f14c952-52c5-4957-ae99-09ff563a62a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.129 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.129 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.136 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846202.1358852, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.136 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Resumed (Lifecycle Event)
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.141 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.146 2 INFO nova.virt.libvirt.driver [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance spawned successfully.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.146 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.155 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.173 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.182 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.183 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.183 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.184 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.185 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.185 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.198 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.253 2 INFO nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 7.63 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.254 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.319 2 INFO nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 8.87 seconds to build instance.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.336 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.908 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.977 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.977 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance network_info: |[{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.978 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.978 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Refreshing network info cache for port 7f14c952-52c5-4957-ae99-09ff563a62a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.981 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start _get_guest_xml network_info=[{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.989 2 WARNING nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.994 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.994 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.997 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:10:02 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.003 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:03 compute-0 podman[299028]: 2025-10-07 14:10:03.127392692 +0000 UTC m=+0.108677317 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.225 2 DEBUG nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.273 2 INFO nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] instance snapshotting
Oct 07 14:10:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931805357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.528 2 INFO nova.virt.libvirt.driver [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Beginning live snapshot process
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.549 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3931805357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 1.7 MiB/s wr, 121 op/s
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.589 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:03 compute-0 nova_compute[259550]: 2025-10-07 14:10:03.595 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340070330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.100 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.101 2 DEBUG nova.virt.libvirt.vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:59Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.102 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.102 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.104 2 DEBUG nova.objects.instance [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.132 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <uuid>20fd8b19-2eb6-4c42-a320-d9d23f6f4912</uuid>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <name>instance-00000020</name>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-85849554</nova:name>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:10:02</nova:creationTime>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <nova:port uuid="7f14c952-52c5-4957-ae99-09ff563a62a9">
Oct 07 14:10:04 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="serial">20fd8b19-2eb6-4c42-a320-d9d23f6f4912</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="uuid">20fd8b19-2eb6-4c42-a320-d9d23f6f4912</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk">
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config">
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:35:62:58"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <target dev="tap7f14c952-52"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/console.log" append="off"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:10:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:10:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:10:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:10:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:10:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Preparing to wait for external event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.134 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.135 2 DEBUG nova.virt.libvirt.vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:59Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.135 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.136 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.136 2 DEBUG os_vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f14c952-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f14c952-52, col_values=(('external_ids', {'iface-id': '7f14c952-52c5-4957-ae99-09ff563a62a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:62:58', 'vm-uuid': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:04 compute-0 NetworkManager[44949]: <info>  [1759846204.2212] manager: (tap7f14c952-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.234 2 DEBUG nova.virt.libvirt.imagebackend [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.242 2 INFO os_vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52')
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.243 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846189.241949, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.244 2 INFO nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Stopped (Lifecycle Event)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.314 2 DEBUG nova.compute.manager [None req-b71628a9-9a1c-46bc-9206-06b4c81d5ec3 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:04 compute-0 podman[299151]: 2025-10-07 14:10:04.344989127 +0000 UTC m=+0.059409292 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.435 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(ce057ab149974b87af48d06eea75efe6) on rbd image(6a715eda-3357-4298-88cc-596cc9986690_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.491 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updated VIF entry in instance network info cache for port 7f14c952-52c5-4957-ae99-09ff563a62a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.494 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 07 14:10:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 07 14:10:04 compute-0 ceph-mon[74295]: pgmap v1320: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 1.7 MiB/s wr, 121 op/s
Oct 07 14:10:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/340070330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:04 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.601 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.603 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.603 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:35:62:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.604 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Using config drive
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.642 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.648 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] No waiting events found dispatching network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.650 2 WARNING nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received unexpected event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db for instance with vm_state building and task_state spawning.
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.678 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] cloning vms/6a715eda-3357-4298-88cc-596cc9986690_disk@ce057ab149974b87af48d06eea75efe6 to images/b9cff922-c050-4d43-8b7e-2053a2028ec0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.787 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] flattening images/b9cff922-c050-4d43-8b7e-2053a2028ec0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:10:04 compute-0 ovn_controller[151684]: 2025-10-07T14:10:04Z|00199|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:10:04 compute-0 ovn_controller[151684]: 2025-10-07T14:10:04Z|00200|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 NetworkManager[44949]: <info>  [1759846204.8887] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 07 14:10:04 compute-0 NetworkManager[44949]: <info>  [1759846204.8897] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 ovn_controller[151684]: 2025-10-07T14:10:04Z|00201|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:10:04 compute-0 ovn_controller[151684]: 2025-10-07T14:10:04Z|00202|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct 07 14:10:04 compute-0 nova_compute[259550]: 2025-10-07 14:10:04.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:04.995 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.073 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(ce057ab149974b87af48d06eea75efe6) on rbd image(6a715eda-3357-4298-88cc-596cc9986690_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.217 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating config drive at /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.223 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3qoabcg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.365 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3qoabcg" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.392 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.396 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.434 2 DEBUG nova.compute.manager [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.435 2 DEBUG nova.compute.manager [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing instance network info cache due to event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.435 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.436 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.436 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.578 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.579 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deleting local config drive /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config because it was imported into RBD.
Oct 07 14:10:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 440 op/s
Oct 07 14:10:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 07 14:10:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 07 14:10:05 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 07 14:10:05 compute-0 ceph-mon[74295]: osdmap e170: 3 total, 3 up, 3 in
Oct 07 14:10:05 compute-0 NetworkManager[44949]: <info>  [1759846205.6485] manager: (tap7f14c952-52): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 07 14:10:05 compute-0 kernel: tap7f14c952-52: entered promiscuous mode
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:05 compute-0 ovn_controller[151684]: 2025-10-07T14:10:05Z|00203|binding|INFO|Claiming lport 7f14c952-52c5-4957-ae99-09ff563a62a9 for this chassis.
Oct 07 14:10:05 compute-0 ovn_controller[151684]: 2025-10-07T14:10:05Z|00204|binding|INFO|7f14c952-52c5-4957-ae99-09ff563a62a9: Claiming fa:16:3e:35:62:58 10.100.0.12
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.678 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(snap) on rbd image(b9cff922-c050-4d43-8b7e-2053a2028ec0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:05 compute-0 systemd-machined[214580]: New machine qemu-36-instance-00000020.
Oct 07 14:10:05 compute-0 ovn_controller[151684]: 2025-10-07T14:10:05Z|00205|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 ovn-installed in OVS
Oct 07 14:10:05 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Oct 07 14:10:05 compute-0 systemd-udevd[299348]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:05 compute-0 NetworkManager[44949]: <info>  [1759846205.7304] device (tap7f14c952-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:10:05 compute-0 NetworkManager[44949]: <info>  [1759846205.7320] device (tap7f14c952-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:10:05 compute-0 nova_compute[259550]: 2025-10-07 14:10:05.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.753 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:62:58 10.100.0.12'], port_security=['fa:16:3e:35:62:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=7f14c952-52c5-4957-ae99-09ff563a62a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:05 compute-0 ovn_controller[151684]: 2025-10-07T14:10:05Z|00206|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 up in Southbound
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.754 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 7f14c952-52c5-4957-ae99-09ff563a62a9 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.755 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17f4413c-c61a-4334-aa71-1aaea29e911b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.771 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.774 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba618198-dd0f-4fcc-a215-9f4a47ab1881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[879075b2-32e4-4877-9ab3-2c00c2621a67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.793 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45ac08-1737-42a1-808f-44ccd1a27bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12902a3f-c252-42bb-8639-b0e97c839fcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[07e0d300-c03f-4728-a48b-7e608e6ae8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 systemd-udevd[299352]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:05 compute-0 NetworkManager[44949]: <info>  [1759846205.8842] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.882 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1414cfb1-35a9-4a92-9050-71ae77b9747a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.927 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac795d1-cb70-45f9-990f-875363535c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.930 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7be9f388-4322-4ecd-add4-911b6b6077cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:05 compute-0 NetworkManager[44949]: <info>  [1759846205.9651] device (tap9f80456d-d0): carrier: link connected
Oct 07 14:10:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.973 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aceb67-0dde-48a1-81be-5ce91d8a11cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdfb177-c375-4d6f-a296-117f009f1b31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299390, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68496312-09d5-430f-ac91-db859f70d622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678952, 'tstamp': 678952}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299409, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.040 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aed382a4-9f71-4733-843e-7183b345d4d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299418, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a595c8-a5d0-49dc-97cf-f8f323bcb397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea12977-433b-48d2-8ad0-2e959ff68723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.176 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:06 compute-0 NetworkManager[44949]: <info>  [1759846206.1793] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 07 14:10:06 compute-0 kernel: tap9f80456d-d0: entered promiscuous mode
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.183 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:06 compute-0 ovn_controller[151684]: 2025-10-07T14:10:06Z|00207|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.208 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e84846a-caff-4909-b1fa-a6c6f2a3fa56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.210 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:10:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.212 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:10:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 07 14:10:06 compute-0 ceph-mon[74295]: pgmap v1322: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 440 op/s
Oct 07 14:10:06 compute-0 ceph-mon[74295]: osdmap e171: 3 total, 3 up, 3 in
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.617 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846206.6166348, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.619 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Started (Lifecycle Event)
Oct 07 14:10:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 07 14:10:06 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 07 14:10:06 compute-0 podman[299457]: 2025-10-07 14:10:06.644602434 +0000 UTC m=+0.070800152 container create d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.688 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:06 compute-0 systemd[1]: Started libpod-conmon-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope.
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.695 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846206.6170514, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Paused (Lifecycle Event)
Oct 07 14:10:06 compute-0 podman[299457]: 2025-10-07 14:10:06.608462335 +0000 UTC m=+0.034660083 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:10:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf6c25b6716cbbe88cb0b1eb25df944f3562caf1cdf4795012d2140fef4cae0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:06 compute-0 podman[299457]: 2025-10-07 14:10:06.74446374 +0000 UTC m=+0.170661478 container init d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:10:06 compute-0 podman[299457]: 2025-10-07 14:10:06.750318993 +0000 UTC m=+0.176516711 container start d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:10:06 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : New worker (299478) forked
Oct 07 14:10:06 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : Loading success.
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.804 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.809 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b9cff922-c050-4d43-8b7e-2053a2028ec0
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.893 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(snap) on rbd image(b9cff922-c050-4d43-8b7e-2053a2028ec0) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.896 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updated VIF entry in instance network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:10:06 compute-0 nova_compute[259550]: 2025-10-07 14:10:06.897 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.087 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.291 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Processing event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.539 2 WARNING nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received unexpected event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with vm_state building and task_state spawning.
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.539 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.551 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846207.5510323, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.551 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Resumed (Lifecycle Event)
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.553 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.556 2 INFO nova.virt.libvirt.driver [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance spawned successfully.
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.556 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 425 op/s
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.590 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.594 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.594 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.595 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 07 14:10:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 07 14:10:07 compute-0 ceph-mon[74295]: osdmap e172: 3 total, 3 up, 3 in
Oct 07 14:10:07 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.803 2 INFO nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 7.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:07 compute-0 nova_compute[259550]: 2025-10-07 14:10:07.804 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:08 compute-0 nova_compute[259550]: 2025-10-07 14:10:08.119 2 INFO nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 9.58 seconds to build instance.
Oct 07 14:10:08 compute-0 nova_compute[259550]: 2025-10-07 14:10:08.194 2 WARNING nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Image not found during snapshot: nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.
Oct 07 14:10:08 compute-0 nova_compute[259550]: 2025-10-07 14:10:08.261 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:08 compute-0 ceph-mon[74295]: pgmap v1325: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 425 op/s
Oct 07 14:10:08 compute-0 ceph-mon[74295]: osdmap e173: 3 total, 3 up, 3 in
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 204 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 4.3 MiB/s wr, 403 op/s
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.701 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.702 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.705 2 INFO nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Terminating instance
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.706 2 DEBUG nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:09 compute-0 kernel: tap18b5959f-43 (unregistering): left promiscuous mode
Oct 07 14:10:09 compute-0 NetworkManager[44949]: <info>  [1759846209.7542] device (tap18b5959f-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:09 compute-0 ovn_controller[151684]: 2025-10-07T14:10:09Z|00208|binding|INFO|Releasing lport 18b5959f-439a-440f-8a00-0709f1f9730b from this chassis (sb_readonly=0)
Oct 07 14:10:09 compute-0 ovn_controller[151684]: 2025-10-07T14:10:09Z|00209|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b down in Southbound
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:09 compute-0 ovn_controller[151684]: 2025-10-07T14:10:09Z|00210|binding|INFO|Removing iface tap18b5959f-43 ovn-installed in OVS
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.817 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:94:47 10.100.0.3'], port_security=['fa:16:3e:2a:94:47 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6a715eda-3357-4298-88cc-596cc9986690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=18b5959f-439a-440f-8a00-0709f1f9730b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.819 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 18b5959f-439a-440f-8a00-0709f1f9730b in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis
Oct 07 14:10:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.820 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:09 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 07 14:10:09 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 10.641s CPU time.
Oct 07 14:10:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afac75ad-f9a4-4aa7-a31f-7bc25a1f2dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.822 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore
Oct 07 14:10:09 compute-0 systemd-machined[214580]: Machine qemu-34-instance-0000001e terminated.
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.874 2 DEBUG nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.955 2 INFO nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] instance snapshotting
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.966 2 INFO nova.virt.libvirt.driver [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance destroyed successfully.
Oct 07 14:10:09 compute-0 nova_compute[259550]: 2025-10-07 14:10:09.967 2 DEBUG nova.objects.instance [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.049 2 DEBUG nova.virt.libvirt.vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:08Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.049 2 DEBUG nova.network.os_vif_util [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.050 2 DEBUG nova.network.os_vif_util [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.050 2 DEBUG os_vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18b5959f-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.061 2 INFO os_vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43')
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.114 2 INFO nova.virt.libvirt.driver [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Beginning live snapshot process
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : haproxy version is 2.8.14-c23fe91
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : path to executable is /usr/sbin/haproxy
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : Exiting Master process...
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : Exiting Master process...
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [ALERT]    (298607) : Current worker (298609) exited with code 143 (Terminated)
Oct 07 14:10:10 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : All workers exited. Exiting... (0)
Oct 07 14:10:10 compute-0 systemd[1]: libpod-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope: Deactivated successfully.
Oct 07 14:10:10 compute-0 podman[299556]: 2025-10-07 14:10:10.217198672 +0000 UTC m=+0.242093794 container died 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.373 2 DEBUG nova.virt.libvirt.imagebackend [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.568 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(b032de4bb5ff4d25b63fdb13e521c53d) on rbd image(20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:10 compute-0 ceph-mon[74295]: pgmap v1327: 305 pgs: 305 active+clean; 204 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 4.3 MiB/s wr, 403 op/s
Oct 07 14:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2-userdata-shm.mount: Deactivated successfully.
Oct 07 14:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b77270b12605ac3132ea4eef6c757661be710663f1f44a5cebdb8882546cdf8c-merged.mount: Deactivated successfully.
Oct 07 14:10:10 compute-0 nova_compute[259550]: 2025-10-07 14:10:10.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:10 compute-0 podman[299556]: 2025-10-07 14:10:10.851367602 +0000 UTC m=+0.876262744 container cleanup 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:10:10 compute-0 systemd[1]: libpod-conmon-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope: Deactivated successfully.
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.074 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.075 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:10:11 compute-0 podman[299657]: 2025-10-07 14:10:11.556166477 +0000 UTC m=+0.663841800 container remove 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d00552c3-ef44-48a2-b70b-2b05663edda8]: (4, ('Tue Oct  7 02:10:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2)\n73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2\nTue Oct  7 02:10:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2)\n73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a12158f0-6983-4c4c-a32a-6e18a9c80a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.565 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:11 compute-0 kernel: tap0a5c95d4-10: left promiscuous mode
Oct 07 14:10:11 compute-0 nova_compute[259550]: 2025-10-07 14:10:11.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 241 op/s
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5474952a-3d45-437e-bea5-2b39cd3ea89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.625 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73b979a-78c8-48c3-90a5-bc6b413ad55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66361437-d505-4e5b-a7a0-49c613f94424]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31f2b76a-c67e-4fe2-b89a-7c0aba4d7926]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678172, 'reachable_time': 44213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299673, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.650 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:10:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.650 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6ed03f-6cf3-42ba-b5a5-07defe9502fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 07 14:10:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 07 14:10:12 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 07 14:10:12 compute-0 nova_compute[259550]: 2025-10-07 14:10:12.548 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk@b032de4bb5ff4d25b63fdb13e521c53d to images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:10:12 compute-0 nova_compute[259550]: 2025-10-07 14:10:12.700 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:10:12 compute-0 ceph-mon[74295]: pgmap v1328: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 241 op/s
Oct 07 14:10:12 compute-0 ceph-mon[74295]: osdmap e174: 3 total, 3 up, 3 in
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.068 2 INFO nova.virt.libvirt.driver [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deleting instance files /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690_del
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.070 2 INFO nova.virt.libvirt.driver [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deletion of /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690_del complete
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.094 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(b032de4bb5ff4d25b63fdb13e521c53d) on rbd image(20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.224 2 DEBUG nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.226 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.226 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.227 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.227 2 DEBUG nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.227 2 WARNING nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received unexpected event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with vm_state active and task_state deleting.
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.235 2 INFO nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 3.53 seconds to destroy the instance on the hypervisor.
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.236 2 DEBUG oslo.service.loopingcall [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.237 2 DEBUG nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.237 2 DEBUG nova.network.neutron [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:10:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846198.827514, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.830 2 INFO nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Stopped (Lifecycle Event)
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.915 2 DEBUG nova.compute.manager [None req-9c0f6664-afda-49e2-a5c7-a70f4bac13fa - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 07 14:10:13 compute-0 nova_compute[259550]: 2025-10-07 14:10:13.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 07 14:10:13 compute-0 ceph-mon[74295]: pgmap v1330: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 07 14:10:13 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 07 14:10:14 compute-0 nova_compute[259550]: 2025-10-07 14:10:14.058 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e82f6cd3-df6c-4f54-b0d1-baa8f3a03435) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:15 compute-0 ceph-mon[74295]: osdmap e175: 3 total, 3 up, 3 in
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.199 2 DEBUG nova.network.neutron [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.308 2 DEBUG nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-deleted-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.308 2 INFO nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Neutron deleted interface 18b5959f-439a-440f-8a00-0709f1f9730b; detaching it from the instance and deleting it from the info cache
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.309 2 DEBUG nova.network.neutron [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:15 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.431 2 INFO nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 2.19 seconds to deallocate network for instance.
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.502 2 DEBUG nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Detach interface failed, port_id=18b5959f-439a-440f-8a00-0709f1f9730b, reason: Instance 6a715eda-3357-4298-88cc-596cc9986690 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:10:15 compute-0 ovn_controller[151684]: 2025-10-07T14:10:15Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:ba:59 10.100.0.11
Oct 07 14:10:15 compute-0 ovn_controller[151684]: 2025-10-07T14:10:15Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:ba:59 10.100.0.11
Oct 07 14:10:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 282 op/s
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.671 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.672 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.769 2 DEBUG oslo_concurrency.processutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 07 14:10:15 compute-0 nova_compute[259550]: 2025-10-07 14:10:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 07 14:10:15 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 07 14:10:16 compute-0 ceph-mon[74295]: osdmap e176: 3 total, 3 up, 3 in
Oct 07 14:10:16 compute-0 ceph-mon[74295]: pgmap v1333: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 282 op/s
Oct 07 14:10:16 compute-0 ceph-mon[74295]: osdmap e177: 3 total, 3 up, 3 in
Oct 07 14:10:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087346923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.251 2 DEBUG oslo_concurrency.processutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.259 2 DEBUG nova.compute.provider_tree [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.335 2 DEBUG nova.scheduler.client.report [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.467 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.657 2 INFO nova.scheduler.client.report [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 6a715eda-3357-4298-88cc-596cc9986690
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.746 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.862 2 INFO nova.virt.libvirt.driver [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Snapshot image upload complete
Oct 07 14:10:16 compute-0 nova_compute[259550]: 2025-10-07 14:10:16.863 2 INFO nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 6.91 seconds to snapshot the instance on the hypervisor.
Oct 07 14:10:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1087346923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 215 op/s
Oct 07 14:10:18 compute-0 ceph-mon[74295]: pgmap v1335: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 215 op/s
Oct 07 14:10:19 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 07 14:10:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 209 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 304 op/s
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.048 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.049 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.063 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.119 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.120 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.129 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.130 2 INFO nova.compute.claims [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.301 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:20 compute-0 ceph-mon[74295]: pgmap v1336: 305 pgs: 305 active+clean; 209 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 304 op/s
Oct 07 14:10:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822149612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.802 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.809 2 DEBUG nova.compute.provider_tree [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 07 14:10:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 07 14:10:20 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.829 2 DEBUG nova.scheduler.client.report [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.873 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.874 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:10:20 compute-0 ovn_controller[151684]: 2025-10-07T14:10:20Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:62:58 10.100.0.12
Oct 07 14:10:20 compute-0 ovn_controller[151684]: 2025-10-07T14:10:20Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:62:58 10.100.0.12
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.941 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.941 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:10:20 compute-0 nova_compute[259550]: 2025-10-07 14:10:20.981 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.001 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.081 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.082 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.083 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating image(s)
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.114 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.138 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.165 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.173 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.212 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.213 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.228 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.239 2 DEBUG nova.policy [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.247 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.269 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.274 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30c3d75d-d021-411b-a277-f81ff1f707b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.337 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.337 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.345 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.345 2 INFO nova.compute.claims [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.507 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 204 op/s
Oct 07 14:10:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2822149612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:21 compute-0 ceph-mon[74295]: osdmap e178: 3 total, 3 up, 3 in
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.679 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30c3d75d-d021-411b-a277-f81ff1f707b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.760 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.869 2 DEBUG nova.objects.instance [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.889 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.889 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Ensure instance console log exists: /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669863368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.990 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:21 compute-0 nova_compute[259550]: 2025-10-07 14:10:21.997 2 DEBUG nova.compute.provider_tree [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.021 2 DEBUG nova.scheduler.client.report [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.051 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.052 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.116 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.117 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.143 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.165 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.269 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.270 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.271 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating image(s)
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.295 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.326 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.354 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.359 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.360 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.367 2 DEBUG nova.policy [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.389 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.391 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.392 2 INFO nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Terminating instance
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.393 2 DEBUG nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:22 compute-0 kernel: tapcdae5f4d-20 (unregistering): left promiscuous mode
Oct 07 14:10:22 compute-0 NetworkManager[44949]: <info>  [1759846222.4585] device (tapcdae5f4d-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 ovn_controller[151684]: 2025-10-07T14:10:22Z|00211|binding|INFO|Releasing lport cdae5f4d-2069-4d30-bd74-31e9459986db from this chassis (sb_readonly=0)
Oct 07 14:10:22 compute-0 ovn_controller[151684]: 2025-10-07T14:10:22Z|00212|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db down in Southbound
Oct 07 14:10:22 compute-0 ovn_controller[151684]: 2025-10-07T14:10:22Z|00213|binding|INFO|Removing iface tapcdae5f4d-20 ovn-installed in OVS
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:ba:59 10.100.0.11'], port_security=['fa:16:3e:6e:ba:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '75fea627-8d48-4772-86a2-ca6bc47ed597', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73ed2d1-0e14-4944-923b-e8979adde65e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43ba345-e1ae-4a7f-bf78-bc4ea18d4057, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cdae5f4d-2069-4d30-bd74-31e9459986db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cdae5f4d-2069-4d30-bd74-31e9459986db in datapath f753e53f-89de-4a31-8fea-3e9e1684d72a unbound from our chassis
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.483 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f753e53f-89de-4a31-8fea-3e9e1684d72a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c944664-18d0-4856-a2ab-e9654dc75a9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.487 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a namespace which is not needed anymore
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 07 14:10:22 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 13.883s CPU time.
Oct 07 14:10:22 compute-0 systemd-machined[214580]: Machine qemu-35-instance-0000001f terminated.
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 podman[300054]: 2025-10-07 14:10:22.568647408 +0000 UTC m=+0.074152430 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 07 14:10:22 compute-0 podman[300051]: 2025-10-07 14:10:22.578037584 +0000 UTC m=+0.089563875 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.593 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Successfully created port: 76ff867f-df0e-4526-b0c7-6bc2163fd52a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:10:22
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.log', 'volumes']
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.634 2 INFO nova.virt.libvirt.driver [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance destroyed successfully.
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.635 2 DEBUG nova.objects.instance [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'resources' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:22 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : haproxy version is 2.8.14-c23fe91
Oct 07 14:10:22 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : path to executable is /usr/sbin/haproxy
Oct 07 14:10:22 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [WARNING]  (299017) : Exiting Master process...
Oct 07 14:10:22 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [ALERT]    (299017) : Current worker (299019) exited with code 143 (Terminated)
Oct 07 14:10:22 compute-0 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [WARNING]  (299017) : All workers exited. Exiting... (0)
Oct 07 14:10:22 compute-0 systemd[1]: libpod-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope: Deactivated successfully.
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:22 compute-0 podman[300110]: 2025-10-07 14:10:22.650404837 +0000 UTC m=+0.054270338 container died 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.652 2 DEBUG nova.virt.libvirt.vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.653 2 DEBUG nova.network.os_vif_util [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.654 2 DEBUG nova.network.os_vif_util [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.654 2 DEBUG os_vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdae5f4d-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.661 2 INFO os_vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20')
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.685 2 DEBUG nova.virt.libvirt.imagebackend [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643-userdata-shm.mount: Deactivated successfully.
Oct 07 14:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b521a0ec4b82aa4091ed78acfb32e87147f7bbf599891bee81258eeecedcd051-merged.mount: Deactivated successfully.
Oct 07 14:10:22 compute-0 podman[300110]: 2025-10-07 14:10:22.699510128 +0000 UTC m=+0.103375629 container cleanup 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:22 compute-0 systemd[1]: libpod-conmon-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope: Deactivated successfully.
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.738 2 DEBUG nova.virt.libvirt.imagebackend [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.739 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435@snap to None/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:10:22 compute-0 podman[300179]: 2025-10-07 14:10:22.795197582 +0000 UTC m=+0.070874193 container remove 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f164f8b9-7533-419c-9e0a-4fc338de55d9]: (4, ('Tue Oct  7 02:10:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a (01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643)\n01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643\nTue Oct  7 02:10:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a (01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643)\n01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66754a4e-bc61-4700-a73f-06c5c76d78a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf753e53f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 kernel: tapf753e53f-80: left promiscuous mode
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:22 compute-0 ceph-mon[74295]: pgmap v1338: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 204 op/s
Oct 07 14:10:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1669863368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[666e2cdb-36f5-4172-96d1-1e3aa8edd5d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.867 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f676115-3a4a-4ae6-842f-a41f1ace8caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8fbb0-b0a4-44ac-8857-f48194b81fe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fc1cdf-a7b5-401a-bd85-38011d4ffe86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678429, 'reachable_time': 40972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300249, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 systemd[1]: run-netns-ovnmeta\x2df753e53f\x2d89de\x2d4a31\x2d8fea\x2d3e9e1684d72a.mount: Deactivated successfully.
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.894 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:10:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.895 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c308055c-a7e2-4514-94ed-928d28d9e1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:22 compute-0 nova_compute[259550]: 2025-10-07 14:10:22.925 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.073 2 DEBUG nova.objects.instance [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.093 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Successfully created port: 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.104 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.104 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Ensure instance console log exists: /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.276 2 INFO nova.virt.libvirt.driver [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deleting instance files /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597_del
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.277 2 INFO nova.virt.libvirt.driver [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deletion of /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597_del complete
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.396 2 INFO nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.396 2 DEBUG oslo.service.loopingcall [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.397 2 DEBUG nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.397 2 DEBUG nova.network.neutron [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:10:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 453 KiB/s rd, 1.6 MiB/s wr, 122 op/s
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.791 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Successfully updated port: 76ff867f-df0e-4526-b0c7-6bc2163fd52a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.808 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.809 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.809 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:23 compute-0 nova_compute[259550]: 2025-10-07 14:10:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.012 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.106 2 DEBUG nova.compute.manager [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-changed-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.107 2 DEBUG nova.compute.manager [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Refreshing instance network info cache due to event network-changed-76ff867f-df0e-4526-b0c7-6bc2163fd52a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.107 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.524 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Successfully updated port: 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.544 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.545 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.545 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.647 2 DEBUG nova.compute.manager [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-changed-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.647 2 DEBUG nova.compute.manager [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Refreshing instance network info cache due to event network-changed-6e3a4d42-ab7b-4d31-93af-858be34d84f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.648 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.710 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:10:24 compute-0 ceph-mon[74295]: pgmap v1339: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 453 KiB/s rd, 1.6 MiB/s wr, 122 op/s
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.952 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846209.9506023, 6a715eda-3357-4298-88cc-596cc9986690 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.953 2 INFO nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Stopped (Lifecycle Event)
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.975 2 DEBUG nova.compute.manager [None req-548fb4b0-6df4-4274-b933-869fddb6dcd6 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:24 compute-0 nova_compute[259550]: 2025-10-07 14:10:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.033 2 DEBUG nova.network.neutron [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.051 2 INFO nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 1.65 seconds to deallocate network for instance.
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.140 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.140 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.261 2 DEBUG oslo_concurrency.processutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.420 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.440 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.440 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance network_info: |[{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.441 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.441 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Refreshing network info cache for port 76ff867f-df0e-4526-b0c7-6bc2163fd52a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.444 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start _get_guest_xml network_info=[{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.449 2 WARNING nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.457 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.458 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.461 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.461 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.464 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.464 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.467 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 5.3 MiB/s wr, 251 op/s
Oct 07 14:10:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166834448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.719 2 DEBUG oslo_concurrency.processutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.727 2 DEBUG nova.compute.provider_tree [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.745 2 DEBUG nova.scheduler.client.report [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.768 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.780 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.796 2 INFO nova.scheduler.client.report [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Deleted allocations for instance 75fea627-8d48-4772-86a2-ca6bc47ed597
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.798 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.798 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance network_info: |[{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.799 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.799 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Refreshing network info cache for port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:10:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.801 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start _get_guest_xml network_info=[{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:10:09Z,direct_url=<?>,disk_format='raw',id=e82f6cd3-df6c-4f54-b0d1-baa8f3a03435,min_disk=1,min_ram=0,name='tempest-test-snap-1066436141',owner='1a6abfd8cc6f4507886ed10873d1f95c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:10:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'e82f6cd3-df6c-4f54-b0d1-baa8f3a03435'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.843 2 WARNING nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.851 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.852 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:10:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3166834448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:10:09Z,direct_url=<?>,disk_format='raw',id=e82f6cd3-df6c-4f54-b0d1-baa8f3a03435,min_disk=1,min_ram=0,name='tempest-test-snap-1066436141',owner='1a6abfd8cc6f4507886ed10873d1f95c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:10:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.861 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874137499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.898 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.903 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.925 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.928 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:25 compute-0 nova_compute[259550]: 2025-10-07 14:10:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.190 2 DEBUG nova.compute.manager [req-43fa91e8-8db0-43a2-b5e8-e153ae7cbb9a req-f1e523b4-c166-4355-a270-9251e73ea00c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-deleted-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133751308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.329 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.354 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.359 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194597908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.400 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.402 2 DEBUG nova.virt.libvirt.vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:21Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.402 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.403 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.405 2 DEBUG nova.objects.instance [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.424 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <uuid>30c3d75d-d021-411b-a277-f81ff1f707b8</uuid>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <name>instance-00000021</name>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1409154713</nova:name>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:10:25</nova:creationTime>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:port uuid="76ff867f-df0e-4526-b0c7-6bc2163fd52a">
Oct 07 14:10:26 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <system>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="serial">30c3d75d-d021-411b-a277-f81ff1f707b8</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="uuid">30c3d75d-d021-411b-a277-f81ff1f707b8</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </system>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <os>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </os>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <features>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </features>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30c3d75d-d021-411b-a277-f81ff1f707b8_disk">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:dc:66:96"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="tap76ff867f-df"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/console.log" append="off"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <video>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </video>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:10:26 compute-0 nova_compute[259550]: </domain>
Oct 07 14:10:26 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.424 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Preparing to wait for external event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.426 2 DEBUG nova.virt.libvirt.vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:21Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.426 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.427 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.427 2 DEBUG os_vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76ff867f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76ff867f-df, col_values=(('external_ids', {'iface-id': '76ff867f-df0e-4526-b0c7-6bc2163fd52a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:66:96', 'vm-uuid': '30c3d75d-d021-411b-a277-f81ff1f707b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 NetworkManager[44949]: <info>  [1759846226.4349] manager: (tap76ff867f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.440 2 INFO os_vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df')
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.499 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.500 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.500 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:dc:66:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.500 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Using config drive
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.522 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827650791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.798 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.800 2 DEBUG nova.virt.libvirt.vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:22Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.800 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.801 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.802 2 DEBUG nova.objects.instance [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.820 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <uuid>9fd9e34f-3492-4726-ab46-1b4bab671dbc</uuid>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <name>instance-00000022</name>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-1746458742</nova:name>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:10:25</nova:creationTime>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="e82f6cd3-df6c-4f54-b0d1-baa8f3a03435"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <nova:port uuid="6e3a4d42-ab7b-4d31-93af-858be34d84f8">
Oct 07 14:10:26 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <system>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="serial">9fd9e34f-3492-4726-ab46-1b4bab671dbc</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="uuid">9fd9e34f-3492-4726-ab46-1b4bab671dbc</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </system>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <os>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </os>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <features>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </features>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:49:e0:a9"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <target dev="tap6e3a4d42-ab"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/console.log" append="off"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <video>
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </video>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:10:26 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:10:26 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:10:26 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:10:26 compute-0 nova_compute[259550]: </domain>
Oct 07 14:10:26 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Preparing to wait for external event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.822 2 DEBUG nova.virt.libvirt.vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:22Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.822 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG os_vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3a4d42-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e3a4d42-ab, col_values=(('external_ids', {'iface-id': '6e3a4d42-ab7b-4d31-93af-858be34d84f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:e0:a9', 'vm-uuid': '9fd9e34f-3492-4726-ab46-1b4bab671dbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 NetworkManager[44949]: <info>  [1759846226.8295] manager: (tap6e3a4d42-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.838 2 INFO os_vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab')
Oct 07 14:10:26 compute-0 ceph-mon[74295]: pgmap v1340: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 5.3 MiB/s wr, 251 op/s
Oct 07 14:10:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3874137499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2133751308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2194597908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1827650791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.867482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226867522, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2326, "num_deletes": 268, "total_data_size": 3325810, "memory_usage": 3370256, "flush_reason": "Manual Compaction"}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.886 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.886 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.887 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:49:e0:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.887 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Using config drive
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226888455, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3271313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25659, "largest_seqno": 27984, "table_properties": {"data_size": 3260618, "index_size": 6933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22790, "raw_average_key_size": 21, "raw_value_size": 3238993, "raw_average_value_size": 3018, "num_data_blocks": 301, "num_entries": 1073, "num_filter_entries": 1073, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846052, "oldest_key_time": 1759846052, "file_creation_time": 1759846226, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21041 microseconds, and 7912 cpu microseconds.
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.888518) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3271313 bytes OK
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.888550) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890199) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890213) EVENT_LOG_v1 {"time_micros": 1759846226890209, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3315800, prev total WAL file size 3315800, number of live WAL files 2.
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.891277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3194KB)], [59(7053KB)]
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226891364, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10493595, "oldest_snapshot_seqno": -1}
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.913 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.923 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updated VIF entry in instance network info cache for port 6e3a4d42-ab7b-4d31-93af-858be34d84f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.924 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5261 keys, 8772442 bytes, temperature: kUnknown
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226959909, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8772442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8735509, "index_size": 22679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130819, "raw_average_key_size": 24, "raw_value_size": 8638990, "raw_average_value_size": 1642, "num_data_blocks": 932, "num_entries": 5261, "num_filter_entries": 5261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846226, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.960220) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8772442 bytes
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.961905) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.9 rd, 127.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5798, records dropped: 537 output_compression: NoCompression
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.961921) EVENT_LOG_v1 {"time_micros": 1759846226961913, "job": 32, "event": "compaction_finished", "compaction_time_micros": 68650, "compaction_time_cpu_micros": 24321, "output_level": 6, "num_output_files": 1, "total_output_size": 8772442, "num_input_records": 5798, "num_output_records": 5261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226962592, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226963873, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.891129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.965 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:26 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.992 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating config drive at /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:26.999 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5ez1q30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.050 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.052 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.052 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.158 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5ez1q30" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.192 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.201 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.365 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.366 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deleting local config drive /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config because it was imported into RBD.
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.4190] manager: (tap76ff867f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Oct 07 14:10:27 compute-0 kernel: tap76ff867f-df: entered promiscuous mode
Oct 07 14:10:27 compute-0 ovn_controller[151684]: 2025-10-07T14:10:27Z|00214|binding|INFO|Claiming lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a for this chassis.
Oct 07 14:10:27 compute-0 ovn_controller[151684]: 2025-10-07T14:10:27Z|00215|binding|INFO|76ff867f-df0e-4526-b0c7-6bc2163fd52a: Claiming fa:16:3e:dc:66:96 10.100.0.8
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.434 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:66:96 10.100.0.8'], port_security=['fa:16:3e:dc:66:96 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '30c3d75d-d021-411b-a277-f81ff1f707b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=76ff867f-df0e-4526-b0c7-6bc2163fd52a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.435 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 76ff867f-df0e-4526-b0c7-6bc2163fd52a in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.437 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:10:27 compute-0 ovn_controller[151684]: 2025-10-07T14:10:27Z|00216|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a ovn-installed in OVS
Oct 07 14:10:27 compute-0 ovn_controller[151684]: 2025-10-07T14:10:27Z|00217|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a up in Southbound
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3b0720-c443-47d4-abe7-6f4ab3cc23dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.457 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.462 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.462 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb441c3-fd87-47f4-99ea-872cc17e216e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4957d74-d585-4ce8-8f7c-c025a45ceba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 systemd-machined[214580]: New machine qemu-37-instance-00000021.
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.484 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e35acc90-8dba-4c8c-b249-0a94e4320cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 systemd-udevd[300570]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961907499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.511 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d794bc42-aa91-4d23-b34e-55fdb313f769]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.5253] device (tap76ff867f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.5264] device (tap76ff867f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.537 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[998ccf25-5c39-4982-a86e-9f1b08ad3907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.5613] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23c48f02-c1c2-4256-b689-6a3aae9095aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.583 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating config drive at /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.591 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqgkq4f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 5.2 MiB/s wr, 245 op/s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c2db0790-b81d-4946-976d-82d94c581d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef26bd9b-1ed0-4fb1-91fd-c7757a2539b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.631 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updated VIF entry in instance network info cache for port 76ff867f-df0e-4526-b0c7-6bc2163fd52a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.633 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.6559] device (tap0a5c95d4-10): carrier: link connected
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.658 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4807642a-ed6d-4b08-8402-436ebb2748fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a579270d-4969-4e2c-81ce-d1b894d782b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681122, 'reachable_time': 24103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300608, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.689 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.690 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.695 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.695 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.698 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.698 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7d5998-3e92-4632-a0b7-7687e305e547]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681122, 'tstamp': 681122}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300609, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8407f9f0-2657-4a52-a99b-447635cc8aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681122, 'reachable_time': 24103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300610, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.741 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqgkq4f" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[677b6865-3450-4d60-8151-83b7310e2600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.774 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.786 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.834 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[edc1949d-5b6f-4c7b-bcbe-ad06981f2dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.837 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.838 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.839 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:27 compute-0 NetworkManager[44949]: <info>  [1759846227.8430] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 07 14:10:27 compute-0 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.852 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 ovn_controller[151684]: 2025-10-07T14:10:27Z|00218|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:10:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1961907499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.895 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7e689-bb4a-48fc-8001-1c0602a8c07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.898 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:10:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.899 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.969 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:27 compute-0 nova_compute[259550]: 2025-10-07 14:10:27.970 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deleting local config drive /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config because it was imported into RBD.
Oct 07 14:10:28 compute-0 kernel: tap6e3a4d42-ab: entered promiscuous mode
Oct 07 14:10:28 compute-0 systemd-udevd[300598]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:28 compute-0 NetworkManager[44949]: <info>  [1759846228.0358] manager: (tap6e3a4d42-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 07 14:10:28 compute-0 NetworkManager[44949]: <info>  [1759846228.0498] device (tap6e3a4d42-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:10:28 compute-0 NetworkManager[44949]: <info>  [1759846228.0509] device (tap6e3a4d42-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:10:28 compute-0 ovn_controller[151684]: 2025-10-07T14:10:28Z|00219|binding|INFO|Claiming lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 for this chassis.
Oct 07 14:10:28 compute-0 ovn_controller[151684]: 2025-10-07T14:10:28Z|00220|binding|INFO|6e3a4d42-ab7b-4d31-93af-858be34d84f8: Claiming fa:16:3e:49:e0:a9 10.100.0.8
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.098 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e0:a9 10.100.0.8'], port_security=['fa:16:3e:49:e0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9fd9e34f-3492-4726-ab46-1b4bab671dbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e3a4d42-ab7b-4d31-93af-858be34d84f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:28 compute-0 ovn_controller[151684]: 2025-10-07T14:10:28Z|00221|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 ovn-installed in OVS
Oct 07 14:10:28 compute-0 ovn_controller[151684]: 2025-10-07T14:10:28Z|00222|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 up in Southbound
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:28 compute-0 systemd-machined[214580]: New machine qemu-38-instance-00000022.
Oct 07 14:10:28 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.170 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.171 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4148MB free_disk=59.92217254638672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.172 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.173 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.269 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30c3d75d-d021-411b-a277-f81ff1f707b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9fd9e34f-3492-4726-ab46-1b4bab671dbc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.299 2 DEBUG nova.compute.manager [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.301 2 DEBUG nova.compute.manager [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Processing event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:10:28 compute-0 podman[300745]: 2025-10-07 14:10:28.340080156 +0000 UTC m=+0.063835419 container create 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.375 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:28 compute-0 systemd[1]: Started libpod-conmon-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope.
Oct 07 14:10:28 compute-0 podman[300745]: 2025-10-07 14:10:28.306213965 +0000 UTC m=+0.029969238 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.421 2 DEBUG nova.compute.manager [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.421 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG nova.compute.manager [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Processing event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:10:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c23965c9c9ac11f5345effbb29f6f66907d2d2062094a35123a593e4f89e7ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:28 compute-0 podman[300745]: 2025-10-07 14:10:28.464732393 +0000 UTC m=+0.188487676 container init 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:10:28 compute-0 podman[300745]: 2025-10-07 14:10:28.475485275 +0000 UTC m=+0.199240528 container start 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:10:28 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : New worker (300786) forked
Oct 07 14:10:28 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : Loading success.
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.568 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.571 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[389746a1-03a6-496f-ac10-c9adcb14bc29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d8937223-bc85-4477-a3ec-efb446f5b0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.628 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f80a6129-74c0-45ae-8b6e-231540f254c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.629 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.631 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.6286561, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Started (Lifecycle Event)
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.637 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.646 2 INFO nova.virt.libvirt.driver [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance spawned successfully.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.646 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.655 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1f9e6b-5fc7-4143-9026-1d1ce762f352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.669 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.669 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.671 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.679 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.630425, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Paused (Lifecycle Event)
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.686 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e245435f-a31c-4466-9724-c9b53fb9863c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300843, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fedd1b4-107b-47de-8d32-f5ff5bcaf197]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678969, 'tstamp': 678969}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300844, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678972, 'tstamp': 678972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300844, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.707 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.707 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.6373038, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.707 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Resumed (Lifecycle Event)
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.734 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.740 2 INFO nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 7.66 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.740 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.743 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.782 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.834 2 INFO nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 8.73 seconds to build instance.
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.851 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303149936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:28 compute-0 ceph-mon[74295]: pgmap v1341: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 5.2 MiB/s wr, 245 op/s
Oct 07 14:10:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/303149936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.895 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.902 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.924 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.953 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:10:28 compute-0 nova_compute[259550]: 2025-10-07 14:10:28.953 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:29 compute-0 ovn_controller[151684]: 2025-10-07T14:10:29Z|00223|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:10:29 compute-0 ovn_controller[151684]: 2025-10-07T14:10:29Z|00224|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.077 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0773168, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.078 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Started (Lifecycle Event)
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.080 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.085 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.088 2 INFO nova.virt.libvirt.driver [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance spawned successfully.
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.089 2 INFO nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 6.82 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.090 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.155 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.156 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0776527, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Paused (Lifecycle Event)
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.171 2 INFO nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 7.85 seconds to build instance.
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.192 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.196 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0843832, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.196 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Resumed (Lifecycle Event)
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.198 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:29 compute-0 ovn_controller[151684]: 2025-10-07T14:10:29Z|00225|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:10:29 compute-0 ovn_controller[151684]: 2025-10-07T14:10:29Z|00226|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.218 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.223 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:29 compute-0 nova_compute[259550]: 2025-10-07 14:10:29.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 213 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 554 KiB/s rd, 4.7 MiB/s wr, 199 op/s
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.102 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.104 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.105 2 INFO nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Terminating instance
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.106 2 DEBUG nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:30 compute-0 kernel: tap6e3a4d42-ab (unregistering): left promiscuous mode
Oct 07 14:10:30 compute-0 NetworkManager[44949]: <info>  [1759846230.1610] device (tap6e3a4d42-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:30 compute-0 ovn_controller[151684]: 2025-10-07T14:10:30Z|00227|binding|INFO|Releasing lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 from this chassis (sb_readonly=0)
Oct 07 14:10:30 compute-0 ovn_controller[151684]: 2025-10-07T14:10:30Z|00228|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 down in Southbound
Oct 07 14:10:30 compute-0 ovn_controller[151684]: 2025-10-07T14:10:30Z|00229|binding|INFO|Removing iface tap6e3a4d42-ab ovn-installed in OVS
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.187 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e0:a9 10.100.0.8'], port_security=['fa:16:3e:49:e0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9fd9e34f-3492-4726-ab46-1b4bab671dbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e3a4d42-ab7b-4d31-93af-858be34d84f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.188 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.189 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc224c1-d42e-4fe9-b81e-d6d69020b5bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 07 14:10:30 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 1.807s CPU time.
Oct 07 14:10:30 compute-0 systemd-machined[214580]: Machine qemu-38-instance-00000022 terminated.
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.247 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5fbe6-5b9a-4f19-834e-8ffface3a158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.251 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b382d5d4-b1ce-4c76-9d12-2308149538c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.286 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f6aca4-f3f8-4426-9812-42822d09cc46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.307 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4097a6e-e8ed-453c-aa94-9dea546e78f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300856, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.328 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[35708e14-1c58-4195-bd3c-b0fed74dc5e9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678969, 'tstamp': 678969}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300857, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678972, 'tstamp': 678972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300857, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.332 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.351 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.352 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.357 2 INFO nova.virt.libvirt.driver [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance destroyed successfully.
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.357 2 DEBUG nova.objects.instance [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.380 2 DEBUG nova.virt.libvirt.vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:29Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.382 2 DEBUG nova.network.os_vif_util [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.382 2 DEBUG nova.network.os_vif_util [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.383 2 DEBUG os_vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3a4d42-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.393 2 INFO os_vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab')
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.755 2 INFO nova.virt.libvirt.driver [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deleting instance files /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc_del
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.756 2 INFO nova.virt.libvirt.driver [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deletion of /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc_del complete
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.790 2 DEBUG nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.790 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.791 2 WARNING nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received unexpected event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with vm_state active and task_state None.
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.813 2 INFO nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG oslo.service.loopingcall [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG nova.network.neutron [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:10:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 WARNING nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received unexpected event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with vm_state active and task_state deleting.
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:30 compute-0 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:10:30 compute-0 ceph-mon[74295]: pgmap v1342: 305 pgs: 305 active+clean; 213 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 554 KiB/s rd, 4.7 MiB/s wr, 199 op/s
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.036 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.038 2 INFO nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Terminating instance
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.039 2 DEBUG nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:31 compute-0 kernel: tap76ff867f-df (unregistering): left promiscuous mode
Oct 07 14:10:31 compute-0 NetworkManager[44949]: <info>  [1759846231.0778] device (tap76ff867f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:31 compute-0 ovn_controller[151684]: 2025-10-07T14:10:31Z|00230|binding|INFO|Releasing lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a from this chassis (sb_readonly=0)
Oct 07 14:10:31 compute-0 ovn_controller[151684]: 2025-10-07T14:10:31Z|00231|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a down in Southbound
Oct 07 14:10:31 compute-0 ovn_controller[151684]: 2025-10-07T14:10:31Z|00232|binding|INFO|Removing iface tap76ff867f-df ovn-installed in OVS
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 07 14:10:31 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 3.264s CPU time.
Oct 07 14:10:31 compute-0 systemd-machined[214580]: Machine qemu-37-instance-00000021 terminated.
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.232 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:66:96 10.100.0.8'], port_security=['fa:16:3e:dc:66:96 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '30c3d75d-d021-411b-a277-f81ff1f707b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=76ff867f-df0e-4526-b0c7-6bc2163fd52a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.234 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 76ff867f-df0e-4526-b0c7-6bc2163fd52a in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.235 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.236 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[322e670b-6c64-4bb4-b19f-7b6ed16c37d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.281 2 INFO nova.virt.libvirt.driver [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance destroyed successfully.
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.283 2 DEBUG nova.objects.instance [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.378 2 DEBUG nova.virt.libvirt.vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:28Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.380 2 DEBUG nova.network.os_vif_util [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.381 2 DEBUG nova.network.os_vif_util [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.382 2 DEBUG os_vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76ff867f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.390 2 INFO os_vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df')
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : haproxy version is 2.8.14-c23fe91
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : path to executable is /usr/sbin/haproxy
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [WARNING]  (300784) : Exiting Master process...
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [WARNING]  (300784) : Exiting Master process...
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [ALERT]    (300784) : Current worker (300786) exited with code 143 (Terminated)
Oct 07 14:10:31 compute-0 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [WARNING]  (300784) : All workers exited. Exiting... (0)
Oct 07 14:10:31 compute-0 systemd[1]: libpod-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope: Deactivated successfully.
Oct 07 14:10:31 compute-0 podman[300922]: 2025-10-07 14:10:31.40426372 +0000 UTC m=+0.049612665 container died 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f-userdata-shm.mount: Deactivated successfully.
Oct 07 14:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c23965c9c9ac11f5345effbb29f6f66907d2d2062094a35123a593e4f89e7ac-merged.mount: Deactivated successfully.
Oct 07 14:10:31 compute-0 podman[300922]: 2025-10-07 14:10:31.457810997 +0000 UTC m=+0.103159922 container cleanup 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:10:31 compute-0 systemd[1]: libpod-conmon-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope: Deactivated successfully.
Oct 07 14:10:31 compute-0 podman[300969]: 2025-10-07 14:10:31.536180277 +0000 UTC m=+0.052790208 container remove 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.545 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5f975-2c37-46b7-ada7-3cac08f319fd]: (4, ('Tue Oct  7 02:10:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f)\n63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f\nTue Oct  7 02:10:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f)\n63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[155ecb57-2372-4455-a4a6-58522bf0df51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.549 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:31 compute-0 kernel: tap0a5c95d4-10: left promiscuous mode
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.575 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[573cfc31-d68d-41d3-9446-a1f667c98588]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78da40c4-fcc7-4cf9-948f-ebc508a17deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac78187c-7e4d-40b7-9600-ba95577f4a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe99e09-5b7e-4a08-954b-f4a220daadde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681111, 'reachable_time': 24043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300984, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.626 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.626 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f17baa-a806-48e3-a424-403aad175a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.833 2 INFO nova.virt.libvirt.driver [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deleting instance files /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8_del
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.833 2 INFO nova.virt.libvirt.driver [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deletion of /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8_del complete
Oct 07 14:10:31 compute-0 ceph-mon[74295]: pgmap v1343: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.931 2 INFO nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG oslo.service.loopingcall [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG nova.network.neutron [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:10:31 compute-0 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.065 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.065 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011091491258058215 of space, bias 1.0, pg target 0.33274473774174645 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.339 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.340 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.341 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.477 2 DEBUG nova.network.neutron [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.508 2 INFO nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 1.69 seconds to deallocate network for instance.
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.591 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.591 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:10:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:10:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:10:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.680 2 DEBUG oslo_concurrency.processutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:10:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.917 2 DEBUG nova.network.neutron [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:32 compute-0 nova_compute[259550]: 2025-10-07 14:10:32.983 2 INFO nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 1.05 seconds to deallocate network for instance.
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.076 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-deleted-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.079 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.079 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.080 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.080 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.080 2 WARNING nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received unexpected event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with vm_state active and task_state deleting.
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.082 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.082 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.083 2 WARNING nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received unexpected event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with vm_state deleted and task_state None.
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.084 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-deleted-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.084 2 INFO nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Neutron deleted interface 76ff867f-df0e-4526-b0c7-6bc2163fd52a; detaching it from the instance and deleting it from the info cache
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.084 2 DEBUG nova.network.neutron [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513709040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.147 2 DEBUG oslo_concurrency.processutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.159 2 DEBUG nova.compute.provider_tree [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.172 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.176 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Detach interface failed, port_id=76ff867f-df0e-4526-b0c7-6bc2163fd52a, reason: Instance 30c3d75d-d021-411b-a277-f81ff1f707b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.253 2 DEBUG nova.scheduler.client.report [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.366 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.368 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.440 2 INFO nova.scheduler.client.report [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 9fd9e34f-3492-4726-ab46-1b4bab671dbc
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.447 2 DEBUG oslo_concurrency.processutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.626 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779656676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.911 2 DEBUG oslo_concurrency.processutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3513709040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:33 compute-0 ceph-mon[74295]: pgmap v1344: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct 07 14:10:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3779656676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.918 2 DEBUG nova.compute.provider_tree [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:33 compute-0 nova_compute[259550]: 2025-10-07 14:10:33.987 2 DEBUG nova.scheduler.client.report [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.089 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.146 2 INFO nova.scheduler.client.report [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 30c3d75d-d021-411b-a277-f81ff1f707b8
Oct 07 14:10:34 compute-0 podman[301030]: 2025-10-07 14:10:34.1620583 +0000 UTC m=+0.125627853 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.182 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.342 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.343 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.343 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.344 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:10:34 compute-0 nova_compute[259550]: 2025-10-07 14:10:34.364 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:35 compute-0 podman[301058]: 2025-10-07 14:10:35.060641539 +0000 UTC m=+0.051465673 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 14:10:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:10:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6207 writes, 27K keys, 6207 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6207 writes, 6207 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1590 writes, 7380 keys, 1590 commit groups, 1.0 writes per commit group, ingest: 9.67 MB, 0.02 MB/s
                                           Interval WAL: 1590 writes, 1590 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.9      0.41              0.10        16    0.025       0      0       0.0       0.0
                                             L6      1/0    8.37 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    137.9    112.0      0.97              0.32        15    0.064     70K   8353       0.0       0.0
                                            Sum      1/0    8.37 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     97.0    103.1      1.37              0.42        31    0.044     70K   8353       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    126.7    130.7      0.37              0.14        10    0.037     26K   3092       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    137.9    112.0      0.97              0.32        15    0.064     70K   8353       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.5      0.40              0.10        15    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.4 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 15.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000172 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(996,14.75 MB,4.85334%) FilterBlock(32,201.23 KB,0.0646441%) IndexBlock(32,374.61 KB,0.120339%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 14:10:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 07 14:10:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 07 14:10:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 07 14:10:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct 07 14:10:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:35 compute-0 nova_compute[259550]: 2025-10-07 14:10:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:36 compute-0 ceph-mon[74295]: osdmap e179: 3 total, 3 up, 3 in
Oct 07 14:10:36 compute-0 ceph-mon[74295]: pgmap v1346: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.847 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.848 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.848 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.849 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.849 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.850 2 INFO nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Terminating instance
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.851 2 DEBUG nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:36 compute-0 kernel: tap7f14c952-52 (unregistering): left promiscuous mode
Oct 07 14:10:36 compute-0 NetworkManager[44949]: <info>  [1759846236.9116] device (tap7f14c952-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:36 compute-0 ovn_controller[151684]: 2025-10-07T14:10:36Z|00233|binding|INFO|Releasing lport 7f14c952-52c5-4957-ae99-09ff563a62a9 from this chassis (sb_readonly=0)
Oct 07 14:10:36 compute-0 ovn_controller[151684]: 2025-10-07T14:10:36Z|00234|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 down in Southbound
Oct 07 14:10:36 compute-0 ovn_controller[151684]: 2025-10-07T14:10:36Z|00235|binding|INFO|Removing iface tap7f14c952-52 ovn-installed in OVS
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.965 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:62:58 10.100.0.12'], port_security=['fa:16:3e:35:62:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=7f14c952-52c5-4957-ae99-09ff563a62a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.966 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 7f14c952-52c5-4957-ae99-09ff563a62a9 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.967 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.968 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e61c2-ccc9-40cb-82af-cf66fc58af9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.969 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct 07 14:10:36 compute-0 nova_compute[259550]: 2025-10-07 14:10:36.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 07 14:10:37 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 13.885s CPU time.
Oct 07 14:10:37 compute-0 systemd-machined[214580]: Machine qemu-36-instance-00000020 terminated.
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.094 2 INFO nova.virt.libvirt.driver [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance destroyed successfully.
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.095 2 DEBUG nova.objects.instance [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.114 2 DEBUG nova.virt.libvirt.vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owne
r_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:16Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.116 2 DEBUG nova.network.os_vif_util [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:37 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : haproxy version is 2.8.14-c23fe91
Oct 07 14:10:37 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : path to executable is /usr/sbin/haproxy
Oct 07 14:10:37 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [WARNING]  (299476) : Exiting Master process...
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.119 2 DEBUG nova.network.os_vif_util [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.120 2 DEBUG os_vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:37 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [ALERT]    (299476) : Current worker (299478) exited with code 143 (Terminated)
Oct 07 14:10:37 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [WARNING]  (299476) : All workers exited. Exiting... (0)
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f14c952-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 systemd[1]: libpod-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope: Deactivated successfully.
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.129 2 INFO os_vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52')
Oct 07 14:10:37 compute-0 podman[301101]: 2025-10-07 14:10:37.130328043 +0000 UTC m=+0.055156932 container died d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d-userdata-shm.mount: Deactivated successfully.
Oct 07 14:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf6c25b6716cbbe88cb0b1eb25df944f3562caf1cdf4795012d2140fef4cae0-merged.mount: Deactivated successfully.
Oct 07 14:10:37 compute-0 podman[301101]: 2025-10-07 14:10:37.186214932 +0000 UTC m=+0.111043821 container cleanup d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 07 14:10:37 compute-0 systemd[1]: libpod-conmon-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope: Deactivated successfully.
Oct 07 14:10:37 compute-0 podman[301160]: 2025-10-07 14:10:37.259642682 +0000 UTC m=+0.048255209 container remove d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88996f9f-1a2e-4897-bf13-32131c9db4b1]: (4, ('Tue Oct  7 02:10:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d)\nd54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d\nTue Oct  7 02:10:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d)\nd54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[812ecd78-e7af-4b5e-90b1-f079a8f011f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.271 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:37 compute-0 kernel: tap9f80456d-d0: left promiscuous mode
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.296 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a43564e9-dd09-44c1-83cd-a245f1c047d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.326 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adc525f2-97fc-48b7-92d8-d97402ee3e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.327 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17fbf39e-9d4c-4521-98ea-aaf55b553d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5218af23-35e1-432a-b902-4a40d8f56a6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678942, 'reachable_time': 36542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301175, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.349 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:10:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.349 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f71c98-2c1c-4408-9c58-ae20730c554d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.557 2 INFO nova.virt.libvirt.driver [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deleting instance files /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_del
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.558 2 INFO nova.virt.libvirt.driver [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deletion of /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_del complete
Oct 07 14:10:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846222.6328354, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.634 2 INFO nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Stopped (Lifecycle Event)
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.989 2 INFO nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.989 2 DEBUG oslo.service.loopingcall [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.990 2 DEBUG nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:10:37 compute-0 nova_compute[259550]: 2025-10-07 14:10:37.990 2 DEBUG nova.network.neutron [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:10:38 compute-0 nova_compute[259550]: 2025-10-07 14:10:38.058 2 DEBUG nova.compute.manager [None req-60e0e5a4-c64a-424b-8a6f-126e49b5d641 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:38 compute-0 ceph-mon[74295]: pgmap v1347: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.234 2 DEBUG nova.network.neutron [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.485 2 INFO nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 1.50 seconds to deallocate network for instance.
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.564 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.565 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.565 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.566 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.566 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.566 2 WARNING nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received unexpected event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with vm_state active and task_state deleting.
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.567 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-deleted-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 67 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 22 KiB/s wr, 227 op/s
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.625 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.625 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:39 compute-0 nova_compute[259550]: 2025-10-07 14:10:39.672 2 DEBUG oslo_concurrency.processutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667987412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.128 2 DEBUG oslo_concurrency.processutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.136 2 DEBUG nova.compute.provider_tree [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.198 2 DEBUG nova.scheduler.client.report [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.237 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.298 2 INFO nova.scheduler.client.report [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 20fd8b19-2eb6-4c42-a320-d9d23f6f4912
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.668 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:40 compute-0 ceph-mon[74295]: pgmap v1348: 305 pgs: 305 active+clean; 67 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 22 KiB/s wr, 227 op/s
Oct 07 14:10:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/667987412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 07 14:10:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 07 14:10:40 compute-0 nova_compute[259550]: 2025-10-07 14:10:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.166 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.168 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.199 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.280 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.280 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.287 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.287 2 INFO nova.compute.claims [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.437 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 245 op/s
Oct 07 14:10:41 compute-0 ceph-mon[74295]: osdmap e180: 3 total, 3 up, 3 in
Oct 07 14:10:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3171583419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.893 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.900 2 DEBUG nova.compute.provider_tree [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.949 2 DEBUG nova.scheduler.client.report [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.988 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:41 compute-0 nova_compute[259550]: 2025-10-07 14:10:41.989 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.054 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.055 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.145 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.190 2 DEBUG nova.policy [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:10:42 compute-0 sudo[301222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:42 compute-0 sudo[301222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:42 compute-0 sudo[301222]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.338 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:10:42 compute-0 sudo[301247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:10:42 compute-0 sudo[301247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:42 compute-0 sudo[301247]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:42 compute-0 sudo[301272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:42 compute-0 sudo[301272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:42 compute-0 sudo[301272]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:42 compute-0 sudo[301297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:10:42 compute-0 sudo[301297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.735 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.738 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.738 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating image(s)
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.760 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.782 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.804 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.808 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:42 compute-0 ceph-mon[74295]: pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 245 op/s
Oct 07 14:10:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3171583419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.885 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.886 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.887 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.887 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.914 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:42 compute-0 nova_compute[259550]: 2025-10-07 14:10:42.919 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ade69915-c1e6-4b99-b8e6-8031c7f04049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:42 compute-0 sudo[301297]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e9c7d61-31bd-4982-87ef-55537c5c580c does not exist
Oct 07 14:10:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1fd5de01-3e39-4529-990e-f4c6a884b8cf does not exist
Oct 07 14:10:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a6053b5-6adb-4a82-9d66-35c8f6a28269 does not exist
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:10:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:10:43 compute-0 sudo[301444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:43 compute-0 sudo[301444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:43 compute-0 sudo[301444]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:43 compute-0 sudo[301472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:10:43 compute-0 sudo[301472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:43 compute-0 sudo[301472]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:43 compute-0 sudo[301497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:43 compute-0 sudo[301497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:43 compute-0 sudo[301497]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:43 compute-0 sudo[301522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:10:43 compute-0 sudo[301522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.356 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ade69915-c1e6-4b99-b8e6-8031c7f04049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.384 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Successfully created port: 3f4a637c-ddfa-49f0-b263-224957faec29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.421 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.515 2 DEBUG nova.objects.instance [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid ade69915-c1e6-4b99-b8e6-8031c7f04049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.538 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.539 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Ensure instance console log exists: /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.540 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.540 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:43 compute-0 nova_compute[259550]: 2025-10-07 14:10:43.541 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.64942099 +0000 UTC m=+0.037555038 container create 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:10:43 compute-0 systemd[1]: Started libpod-conmon-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope.
Oct 07 14:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.63417621 +0000 UTC m=+0.022310278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.746338228 +0000 UTC m=+0.134472306 container init 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.755617352 +0000 UTC m=+0.143751400 container start 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:10:43 compute-0 beautiful_bell[301675]: 167 167
Oct 07 14:10:43 compute-0 systemd[1]: libpod-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope: Deactivated successfully.
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.768008167 +0000 UTC m=+0.156142235 container attach 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.768722896 +0000 UTC m=+0.156856944 container died 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67d2389a839e8824a6e2278bc254cd2a95fd06f2f8e538bdfc3a91b4928b99f-merged.mount: Deactivated successfully.
Oct 07 14:10:43 compute-0 podman[301658]: 2025-10-07 14:10:43.84498493 +0000 UTC m=+0.233118978 container remove 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:10:43 compute-0 systemd[1]: libpod-conmon-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope: Deactivated successfully.
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:10:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:10:44 compute-0 podman[301699]: 2025-10-07 14:10:44.029193763 +0000 UTC m=+0.059353441 container create 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 07 14:10:44 compute-0 systemd[1]: Started libpod-conmon-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope.
Oct 07 14:10:44 compute-0 podman[301699]: 2025-10-07 14:10:43.997165811 +0000 UTC m=+0.027325509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:44 compute-0 podman[301699]: 2025-10-07 14:10:44.141233148 +0000 UTC m=+0.171392846 container init 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:10:44 compute-0 podman[301699]: 2025-10-07 14:10:44.150997595 +0000 UTC m=+0.181157273 container start 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:10:44 compute-0 podman[301699]: 2025-10-07 14:10:44.154304271 +0000 UTC m=+0.184463969 container attach 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.824 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Successfully updated port: 3f4a637c-ddfa-49f0-b263-224957faec29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:10:44 compute-0 ceph-mon[74295]: pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.932 2 DEBUG nova.compute.manager [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-changed-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.933 2 DEBUG nova.compute.manager [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Refreshing instance network info cache due to event network-changed-3f4a637c-ddfa-49f0-b263-224957faec29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:10:44 compute-0 nova_compute[259550]: 2025-10-07 14:10:44.933 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.008 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.353 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846230.3524852, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.354 2 INFO nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Stopped (Lifecycle Event)
Oct 07 14:10:45 compute-0 happy_lamport[301715]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:10:45 compute-0 happy_lamport[301715]: --> relative data size: 1.0
Oct 07 14:10:45 compute-0 happy_lamport[301715]: --> All data devices are unavailable
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.371 2 DEBUG nova.compute.manager [None req-9d290afd-4bda-4424-a241-fd1607a95526 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:45 compute-0 systemd[1]: libpod-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Deactivated successfully.
Oct 07 14:10:45 compute-0 systemd[1]: libpod-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Consumed 1.141s CPU time.
Oct 07 14:10:45 compute-0 podman[301699]: 2025-10-07 14:10:45.401367161 +0000 UTC m=+1.431526869 container died 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d-merged.mount: Deactivated successfully.
Oct 07 14:10:45 compute-0 podman[301699]: 2025-10-07 14:10:45.468784583 +0000 UTC m=+1.498944261 container remove 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:10:45 compute-0 systemd[1]: libpod-conmon-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Deactivated successfully.
Oct 07 14:10:45 compute-0 sudo[301522]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:45 compute-0 sudo[301758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:45 compute-0 sudo[301758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:45 compute-0 sudo[301758]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.639 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updating instance_info_cache with network_info: [{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:45 compute-0 sudo[301783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:10:45 compute-0 sudo[301783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.665 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.665 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance network_info: |[{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.666 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.666 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Refreshing network info cache for port 3f4a637c-ddfa-49f0-b263-224957faec29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:10:45 compute-0 sudo[301783]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.668 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start _get_guest_xml network_info=[{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.675 2 WARNING nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.687 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.689 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.695 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.696 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.697 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.697 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.698 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.699 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.699 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.700 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.700 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.701 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.701 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.702 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.702 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.703 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.709 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:45 compute-0 sudo[301808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:45 compute-0 sudo[301808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:45 compute-0 sudo[301808]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:45 compute-0 sudo[301834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:10:45 compute-0 sudo[301834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:45 compute-0 nova_compute[259550]: 2025-10-07 14:10:45.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:46 compute-0 ceph-mon[74295]: pgmap v1352: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 07 14:10:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474544031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.198128205 +0000 UTC m=+0.059979058 container create f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.200 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.233 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.241 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:46 compute-0 systemd[1]: Started libpod-conmon-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope.
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.16711589 +0000 UTC m=+0.028966803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.279 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846231.278565, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.281 2 INFO nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Stopped (Lifecycle Event)
Oct 07 14:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.323379646 +0000 UTC m=+0.185230479 container init f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.333398721 +0000 UTC m=+0.195249544 container start f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:10:46 compute-0 trusting_shockley[301957]: 167 167
Oct 07 14:10:46 compute-0 systemd[1]: libpod-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope: Deactivated successfully.
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.3440592 +0000 UTC m=+0.205910023 container attach f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.344483212 +0000 UTC m=+0.206334035 container died f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-11554c8187328bfa4866e24a142c7ce5304a918dc86495f24e5ab4dc6128e285-merged.mount: Deactivated successfully.
Oct 07 14:10:46 compute-0 podman[301920]: 2025-10-07 14:10:46.439775147 +0000 UTC m=+0.301625960 container remove f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:10:46 compute-0 systemd[1]: libpod-conmon-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope: Deactivated successfully.
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.622 2 DEBUG nova.compute.manager [None req-3c22aed7-3298-4543-857d-808663300e77 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:46 compute-0 podman[301999]: 2025-10-07 14:10:46.688448153 +0000 UTC m=+0.057095212 container create fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:10:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:10:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549352093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.717 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.719 2 DEBUG nova.virt.libvirt.vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1443725263',display_name='tempest-ImagesTestJSON-server-1443725263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1443725263',id=35,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-pcdh9zoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=ade69915-c1e6-4b99-b8e6-8031c7f04049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.720 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.720 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.722 2 DEBUG nova.objects.instance [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid ade69915-c1e6-4b99-b8e6-8031c7f04049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:46 compute-0 systemd[1]: Started libpod-conmon-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope.
Oct 07 14:10:46 compute-0 podman[301999]: 2025-10-07 14:10:46.662966673 +0000 UTC m=+0.031613762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.780 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <uuid>ade69915-c1e6-4b99-b8e6-8031c7f04049</uuid>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <name>instance-00000023</name>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:name>tempest-ImagesTestJSON-server-1443725263</nova:name>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:10:45</nova:creationTime>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <nova:port uuid="3f4a637c-ddfa-49f0-b263-224957faec29">
Oct 07 14:10:46 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <system>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="serial">ade69915-c1e6-4b99-b8e6-8031c7f04049</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="uuid">ade69915-c1e6-4b99-b8e6-8031c7f04049</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </system>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <os>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </os>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <features>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </features>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk">
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config">
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </source>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:10:46 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a9:f0:a5"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <target dev="tap3f4a637c-dd"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/console.log" append="off"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <video>
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </video>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:10:46 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:10:46 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:10:46 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:10:46 compute-0 nova_compute[259550]: </domain>
Oct 07 14:10:46 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.782 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Preparing to wait for external event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.784 2 DEBUG nova.virt.libvirt.vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1443725263',display_name='tempest-ImagesTestJSON-server-1443725263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1443725263',id=35,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-pcdh9zoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=ade69915-c1e6-4b99-b8e6-8031c7f04049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.784 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.785 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.785 2 DEBUG os_vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f4a637c-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f4a637c-dd, col_values=(('external_ids', {'iface-id': '3f4a637c-ddfa-49f0-b263-224957faec29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:f0:a5', 'vm-uuid': 'ade69915-c1e6-4b99-b8e6-8031c7f04049'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:46 compute-0 NetworkManager[44949]: <info>  [1759846246.7971] manager: (tap3f4a637c-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:46 compute-0 podman[301999]: 2025-10-07 14:10:46.805386857 +0000 UTC m=+0.174033916 container init fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:46 compute-0 nova_compute[259550]: 2025-10-07 14:10:46.809 2 INFO os_vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd')
Oct 07 14:10:46 compute-0 podman[301999]: 2025-10-07 14:10:46.813735047 +0000 UTC m=+0.182382086 container start fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:10:46 compute-0 podman[301999]: 2025-10-07 14:10:46.828789142 +0000 UTC m=+0.197436181 container attach fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.042 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.045 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.045 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:a9:f0:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.048 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Using config drive
Oct 07 14:10:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3474544031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3549352093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.100 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]: {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     "0": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "devices": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "/dev/loop3"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             ],
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_name": "ceph_lv0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_size": "21470642176",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "name": "ceph_lv0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "tags": {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_name": "ceph",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.crush_device_class": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.encrypted": "0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_id": "0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.vdo": "0"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             },
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "vg_name": "ceph_vg0"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         }
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     ],
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     "1": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "devices": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "/dev/loop4"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             ],
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_name": "ceph_lv1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_size": "21470642176",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "name": "ceph_lv1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "tags": {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_name": "ceph",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.crush_device_class": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.encrypted": "0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_id": "1",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.vdo": "0"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             },
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "vg_name": "ceph_vg1"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         }
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     ],
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     "2": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "devices": [
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "/dev/loop5"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             ],
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_name": "ceph_lv2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_size": "21470642176",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "name": "ceph_lv2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "tags": {
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.cluster_name": "ceph",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.crush_device_class": "",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.encrypted": "0",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osd_id": "2",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:                 "ceph.vdo": "0"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             },
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "type": "block",
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:             "vg_name": "ceph_vg2"
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:         }
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]:     ]
Oct 07 14:10:47 compute-0 intelligent_shirley[302017]: }
Oct 07 14:10:47 compute-0 systemd[1]: libpod-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope: Deactivated successfully.
Oct 07 14:10:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 07 14:10:47 compute-0 podman[302047]: 2025-10-07 14:10:47.642583413 +0000 UTC m=+0.033563133 container died fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.688 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updated VIF entry in instance network info cache for port 3f4a637c-ddfa-49f0-b263-224957faec29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.690 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updating instance_info_cache with network_info: [{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.728 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07-merged.mount: Deactivated successfully.
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.883 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating config drive at /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config
Oct 07 14:10:47 compute-0 nova_compute[259550]: 2025-10-07 14:10:47.888 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntermyhp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:48 compute-0 nova_compute[259550]: 2025-10-07 14:10:48.047 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntermyhp" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:48 compute-0 ceph-mon[74295]: pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 07 14:10:48 compute-0 podman[302047]: 2025-10-07 14:10:48.076415206 +0000 UTC m=+0.467394926 container remove fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:10:48 compute-0 systemd[1]: libpod-conmon-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope: Deactivated successfully.
Oct 07 14:10:48 compute-0 nova_compute[259550]: 2025-10-07 14:10:48.127 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:48 compute-0 nova_compute[259550]: 2025-10-07 14:10:48.132 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:48 compute-0 sudo[301834]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:48 compute-0 sudo[302084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:48 compute-0 sudo[302084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:48 compute-0 sudo[302084]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:48 compute-0 sudo[302124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:10:48 compute-0 sudo[302124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:48 compute-0 sudo[302124]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:48 compute-0 sudo[302149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:48 compute-0 sudo[302149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:48 compute-0 sudo[302149]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:48 compute-0 sudo[302177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:10:48 compute-0 sudo[302177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:48 compute-0 podman[302244]: 2025-10-07 14:10:48.753867194 +0000 UTC m=+0.027283738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:48 compute-0 podman[302244]: 2025-10-07 14:10:48.975343665 +0000 UTC m=+0.248760149 container create 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:10:48 compute-0 nova_compute[259550]: 2025-10-07 14:10:48.986 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:48 compute-0 nova_compute[259550]: 2025-10-07 14:10:48.989 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Deleting local config drive /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config because it was imported into RBD.
Oct 07 14:10:49 compute-0 kernel: tap3f4a637c-dd: entered promiscuous mode
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.0672] manager: (tap3f4a637c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct 07 14:10:49 compute-0 ovn_controller[151684]: 2025-10-07T14:10:49Z|00236|binding|INFO|Claiming lport 3f4a637c-ddfa-49f0-b263-224957faec29 for this chassis.
Oct 07 14:10:49 compute-0 ovn_controller[151684]: 2025-10-07T14:10:49Z|00237|binding|INFO|3f4a637c-ddfa-49f0-b263-224957faec29: Claiming fa:16:3e:a9:f0:a5 10.100.0.4
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:f0:a5 10.100.0.4'], port_security=['fa:16:3e:a9:f0:a5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ade69915-c1e6-4b99-b8e6-8031c7f04049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3f4a637c-ddfa-49f0-b263-224957faec29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3f4a637c-ddfa-49f0-b263-224957faec29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.089 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfa3292-e8ff-479a-9824-05a985046dfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.107 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:10:49 compute-0 systemd-udevd[302273]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.108 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.109 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00566a89-ba14-4a93-8cf2-ebbe3c15c2a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.109 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6d4452-661e-4a24-9216-188ee97698d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.1297] device (tap3f4a637c-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.1312] device (tap3f4a637c-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.130 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[58938916-69d7-4180-ae4a-59d0901b37de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 systemd[1]: Started libpod-conmon-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope.
Oct 07 14:10:49 compute-0 systemd-machined[214580]: New machine qemu-39-instance-00000023.
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.166 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eb224a-6397-4949-84d8-bfa864d613c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:49 compute-0 ovn_controller[151684]: 2025-10-07T14:10:49Z|00238|binding|INFO|Setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 ovn-installed in OVS
Oct 07 14:10:49 compute-0 ovn_controller[151684]: 2025-10-07T14:10:49Z|00239|binding|INFO|Setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 up in Southbound
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.205 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4143e7-a61a-4452-ba86-8180e5319f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.2151] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.213 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef96bb8-52bc-45f8-8e6c-a95b318b181d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.254 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be9f2b71-048a-4150-b8e0-197cf587a706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.257 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b76f9c-fd55-43bc-8b90-15cac15fa3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 podman[302244]: 2025-10-07 14:10:49.272096915 +0000 UTC m=+0.545513379 container init 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:10:49 compute-0 podman[302244]: 2025-10-07 14:10:49.282194182 +0000 UTC m=+0.555610626 container start 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:10:49 compute-0 vigilant_ritchie[302279]: 167 167
Oct 07 14:10:49 compute-0 systemd[1]: libpod-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope: Deactivated successfully.
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.2946] device (tap9f80456d-d0): carrier: link connected
Oct 07 14:10:49 compute-0 podman[302244]: 2025-10-07 14:10:49.297652788 +0000 UTC m=+0.571069262 container attach 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:10:49 compute-0 podman[302244]: 2025-10-07 14:10:49.299851245 +0000 UTC m=+0.573267699 container died 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f9b031-ded8-432f-8573-f29f17d974aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.320 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[007583c6-bb93-4503-aa24-415f95ceddaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683285, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302312, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-239a97716585be1fdfb668d9f1ff41246e43d97d20ec68ef553d16d5b3631d8e-merged.mount: Deactivated successfully.
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fba90da-57b6-4319-9d97-35e9d20a91d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683285, 'tstamp': 683285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302320, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 podman[302244]: 2025-10-07 14:10:49.355998861 +0000 UTC m=+0.629415325 container remove 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4abc2b2f-eab3-46cc-a65c-f1e56813a24a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683285, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302324, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 systemd[1]: libpod-conmon-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope: Deactivated successfully.
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.403 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65c2f072-d090-4e7d-b6c3-f9b90b4728a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[029d7506-d488-47c5-9b8c-895af94c1896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:49 compute-0 kernel: tap9f80456d-d0: entered promiscuous mode
Oct 07 14:10:49 compute-0 NetworkManager[44949]: <info>  [1759846249.4807] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.483 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 ovn_controller[151684]: 2025-10-07T14:10:49Z|00240|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.485 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.486 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d82194f0-3cc7-45b7-b232-a67fe8507930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.487 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:10:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.488 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:49 compute-0 podman[302339]: 2025-10-07 14:10:49.563221718 +0000 UTC m=+0.054721929 container create 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:10:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 07 14:10:49 compute-0 systemd[1]: Started libpod-conmon-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope.
Oct 07 14:10:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:49 compute-0 podman[302339]: 2025-10-07 14:10:49.542785821 +0000 UTC m=+0.034286052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:49 compute-0 podman[302339]: 2025-10-07 14:10:49.672452809 +0000 UTC m=+0.163953020 container init 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:10:49 compute-0 podman[302339]: 2025-10-07 14:10:49.680682816 +0000 UTC m=+0.172183027 container start 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:10:49 compute-0 podman[302339]: 2025-10-07 14:10:49.684629539 +0000 UTC m=+0.176129750 container attach 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.913 2 DEBUG nova.compute.manager [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:49 compute-0 nova_compute[259550]: 2025-10-07 14:10:49.916 2 DEBUG nova.compute.manager [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Processing event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:10:49 compute-0 podman[302427]: 2025-10-07 14:10:49.951823613 +0000 UTC m=+0.080605380 container create 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:10:50 compute-0 systemd[1]: Started libpod-conmon-67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154.scope.
Oct 07 14:10:50 compute-0 podman[302427]: 2025-10-07 14:10:49.91824433 +0000 UTC m=+0.047026187 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc49fbb0f3c7ff3dc9220d2764fd372dbc613429af02b3304af3c7bad83bf708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:10:50 compute-0 podman[302427]: 2025-10-07 14:10:50.060432547 +0000 UTC m=+0.189214374 container init 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:10:50 compute-0 podman[302427]: 2025-10-07 14:10:50.066881707 +0000 UTC m=+0.195663484 container start 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:10:50 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : New worker (302448) forked
Oct 07 14:10:50 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : Loading success.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.177 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.179 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1767797, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Started (Lifecycle Event)
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.186 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.193 2 INFO nova.virt.libvirt.driver [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance spawned successfully.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.194 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.230 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.232 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.232 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.233 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.233 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.234 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.234 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.241 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.276 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.277 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1779625, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.277 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Paused (Lifecycle Event)
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.302 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.306 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1860697, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.307 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Resumed (Lifecycle Event)
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.315 2 INFO nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 7.58 seconds to spawn the instance on the hypervisor.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.315 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.327 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.330 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.363 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.393 2 INFO nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 9.14 seconds to build instance.
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.408 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:50 compute-0 ceph-mon[74295]: pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 07 14:10:50 compute-0 thirsty_cori[302394]: {
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_id": 2,
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "type": "bluestore"
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     },
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_id": 1,
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "type": "bluestore"
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     },
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_id": 0,
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:         "type": "bluestore"
Oct 07 14:10:50 compute-0 thirsty_cori[302394]:     }
Oct 07 14:10:50 compute-0 thirsty_cori[302394]: }
Oct 07 14:10:50 compute-0 systemd[1]: libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Deactivated successfully.
Oct 07 14:10:50 compute-0 systemd[1]: libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Consumed 1.083s CPU time.
Oct 07 14:10:50 compute-0 conmon[302394]: conmon 3631ac98888937be144d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope/container/memory.events
Oct 07 14:10:50 compute-0 podman[302339]: 2025-10-07 14:10:50.806070247 +0000 UTC m=+1.297570458 container died 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052-merged.mount: Deactivated successfully.
Oct 07 14:10:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:50 compute-0 nova_compute[259550]: 2025-10-07 14:10:50.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:50 compute-0 podman[302339]: 2025-10-07 14:10:50.878639405 +0000 UTC m=+1.370139606 container remove 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:10:50 compute-0 systemd[1]: libpod-conmon-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Deactivated successfully.
Oct 07 14:10:50 compute-0 sudo[302177]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:10:50 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:10:50 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:50 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 6ffa1f0b-d345-4090-93d1-904fcc816c03 does not exist
Oct 07 14:10:50 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7f0d8731-ebcd-45f5-bbb0-ee29338ea039 does not exist
Oct 07 14:10:51 compute-0 sudo[302496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:10:51 compute-0 sudo[302496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:51 compute-0 sudo[302496]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:51 compute-0 sudo[302521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:10:51 compute-0 sudo[302521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:10:51 compute-0 sudo[302521]: pam_unix(sudo:session): session closed for user root
Oct 07 14:10:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Oct 07 14:10:51 compute-0 nova_compute[259550]: 2025-10-07 14:10:51.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:10:51 compute-0 ceph-mon[74295]: pgmap v1355: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.037 2 DEBUG nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.038 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.038 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.039 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.039 2 DEBUG nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] No waiting events found dispatching network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.039 2 WARNING nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received unexpected event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 for instance with vm_state active and task_state None.
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.093 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846237.0919805, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.094 2 INFO nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Stopped (Lifecycle Event)
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.145 2 DEBUG nova.compute.manager [None req-9f1ba29d-32e1-4f64-83ba-0f404b7c80fc - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.657 2 DEBUG nova.compute.manager [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:10:52 compute-0 nova_compute[259550]: 2025-10-07 14:10:52.699 2 INFO nova.compute.manager [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] instance snapshotting
Oct 07 14:10:53 compute-0 nova_compute[259550]: 2025-10-07 14:10:53.000 2 INFO nova.virt.libvirt.driver [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Beginning live snapshot process
Oct 07 14:10:53 compute-0 podman[302546]: 2025-10-07 14:10:53.089614012 +0000 UTC m=+0.073451222 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:10:53 compute-0 podman[302547]: 2025-10-07 14:10:53.108682543 +0000 UTC m=+0.078691640 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:10:53 compute-0 nova_compute[259550]: 2025-10-07 14:10:53.166 2 DEBUG nova.virt.libvirt.imagebackend [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:10:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 07 14:10:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 07 14:10:53 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 07 14:10:53 compute-0 nova_compute[259550]: 2025-10-07 14:10:53.425 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(033204b4218f46faa810881a075a08b2) on rbd image(ade69915-c1e6-4b99-b8e6-8031c7f04049_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Oct 07 14:10:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 07 14:10:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 07 14:10:54 compute-0 ceph-mon[74295]: osdmap e181: 3 total, 3 up, 3 in
Oct 07 14:10:54 compute-0 ceph-mon[74295]: pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Oct 07 14:10:54 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 07 14:10:54 compute-0 nova_compute[259550]: 2025-10-07 14:10:54.420 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk@033204b4218f46faa810881a075a08b2 to images/4dd3cc09-1db8-41d2-819d-1b5662935e7d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:10:54 compute-0 nova_compute[259550]: 2025-10-07 14:10:54.553 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/4dd3cc09-1db8-41d2-819d-1b5662935e7d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:10:54 compute-0 nova_compute[259550]: 2025-10-07 14:10:54.826 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(033204b4218f46faa810881a075a08b2) on rbd image(ade69915-c1e6-4b99-b8e6-8031c7f04049_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:10:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 07 14:10:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 07 14:10:55 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 07 14:10:55 compute-0 ceph-mon[74295]: osdmap e182: 3 total, 3 up, 3 in
Oct 07 14:10:55 compute-0 nova_compute[259550]: 2025-10-07 14:10:55.407 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(4dd3cc09-1db8-41d2-819d-1b5662935e7d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:10:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 185 KiB/s wr, 213 op/s
Oct 07 14:10:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:10:55 compute-0 nova_compute[259550]: 2025-10-07 14:10:55.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 07 14:10:56 compute-0 ceph-mon[74295]: osdmap e183: 3 total, 3 up, 3 in
Oct 07 14:10:56 compute-0 ceph-mon[74295]: pgmap v1360: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 185 KiB/s wr, 213 op/s
Oct 07 14:10:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 07 14:10:56 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 4dd3cc09-1db8-41d2-819d-1b5662935e7d could not be found.
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 4dd3cc09-1db8-41d2-819d-1b5662935e7d
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 4dd3cc09-1db8-41d2-819d-1b5662935e7d could not be found.
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver 
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.639 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(snap) on rbd image(4dd3cc09-1db8-41d2-819d-1b5662935e7d) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:10:56 compute-0 rsyslogd[1004]: imjournal from <np0005473739:nova_compute>: begin to drop messages due to rate-limiting
Oct 07 14:10:56 compute-0 nova_compute[259550]: 2025-10-07 14:10:56.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 07 14:10:57 compute-0 ceph-mon[74295]: osdmap e184: 3 total, 3 up, 3 in
Oct 07 14:10:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 07 14:10:57 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 07 14:10:57 compute-0 nova_compute[259550]: 2025-10-07 14:10:57.466 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:57 compute-0 nova_compute[259550]: 2025-10-07 14:10:57.467 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 241 KiB/s wr, 308 op/s
Oct 07 14:10:57 compute-0 nova_compute[259550]: 2025-10-07 14:10:57.867 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.070 2 WARNING nova.compute.manager [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Image not found during snapshot: nova.exception.ImageNotFound: Image 4dd3cc09-1db8-41d2-819d-1b5662935e7d could not be found.
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.107 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.108 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.120 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.121 2 INFO nova.compute.claims [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.243 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:58 compute-0 ceph-mon[74295]: osdmap e185: 3 total, 3 up, 3 in
Oct 07 14:10:58 compute-0 ceph-mon[74295]: pgmap v1363: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 241 KiB/s wr, 308 op/s
Oct 07 14:10:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:10:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804365752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.691 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.703 2 DEBUG nova.compute.provider_tree [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.833 2 DEBUG nova.scheduler.client.report [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.861 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.862 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.911 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.911 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.934 2 INFO nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:10:58 compute-0 nova_compute[259550]: 2025-10-07 14:10:58.957 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.064 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.066 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.067 2 INFO nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Creating image(s)
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.094 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.114 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.135 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.143 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.184 2 DEBUG nova.policy [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.226 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.229 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.229 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.230 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.250 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.254 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23f8753a-d049-4882-a3f5-7878ea0c5480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.321 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.322 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.322 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.322 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.323 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.324 2 INFO nova.compute.manager [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Terminating instance
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.325 2 DEBUG nova.compute.manager [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:10:59 compute-0 kernel: tap3f4a637c-dd (unregistering): left promiscuous mode
Oct 07 14:10:59 compute-0 NetworkManager[44949]: <info>  [1759846259.3715] device (tap3f4a637c-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00241|binding|INFO|Releasing lport 3f4a637c-ddfa-49f0-b263-224957faec29 from this chassis (sb_readonly=0)
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00242|binding|INFO|Setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 down in Southbound
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00243|binding|INFO|Removing iface tap3f4a637c-dd ovn-installed in OVS
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.399 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:f0:a5 10.100.0.4'], port_security=['fa:16:3e:a9:f0:a5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ade69915-c1e6-4b99-b8e6-8031c7f04049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3f4a637c-ddfa-49f0-b263-224957faec29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.401 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3f4a637c-ddfa-49f0-b263-224957faec29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.402 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.403 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4096bfe7-74bf-409d-995a-2b97289d815a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.404 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3804365752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:10:59 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 07 14:10:59 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 10.203s CPU time.
Oct 07 14:10:59 compute-0 systemd-machined[214580]: Machine qemu-39-instance-00000023 terminated.
Oct 07 14:10:59 compute-0 kernel: tap3f4a637c-dd: entered promiscuous mode
Oct 07 14:10:59 compute-0 NetworkManager[44949]: <info>  [1759846259.5495] manager: (tap3f4a637c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct 07 14:10:59 compute-0 systemd-udevd[302880]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:10:59 compute-0 kernel: tap3f4a637c-dd (unregistering): left promiscuous mode
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00244|binding|INFO|Claiming lport 3f4a637c-ddfa-49f0-b263-224957faec29 for this chassis.
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00245|binding|INFO|3f4a637c-ddfa-49f0-b263-224957faec29: Claiming fa:16:3e:a9:f0:a5 10.100.0.4
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.575 2 INFO nova.virt.libvirt.driver [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance destroyed successfully.
Oct 07 14:10:59 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : haproxy version is 2.8.14-c23fe91
Oct 07 14:10:59 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : path to executable is /usr/sbin/haproxy
Oct 07 14:10:59 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [WARNING]  (302446) : Exiting Master process...
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.576 2 DEBUG nova.objects.instance [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid ade69915-c1e6-4b99-b8e6-8031c7f04049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:59 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [ALERT]    (302446) : Current worker (302448) exited with code 143 (Terminated)
Oct 07 14:10:59 compute-0 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [WARNING]  (302446) : All workers exited. Exiting... (0)
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00246|if_status|INFO|Dropped 1 log messages in last 250 seconds (most recently, 250 seconds ago) due to excessive rate
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00247|if_status|INFO|Not setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 down as sb is readonly
Oct 07 14:10:59 compute-0 systemd[1]: libpod-67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154.scope: Deactivated successfully.
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 podman[302899]: 2025-10-07 14:10:59.589603678 +0000 UTC m=+0.068625085 container died 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.609 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23f8753a-d049-4882-a3f5-7878ea0c5480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:10:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Oct 07 14:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154-userdata-shm.mount: Deactivated successfully.
Oct 07 14:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc49fbb0f3c7ff3dc9220d2764fd372dbc613429af02b3304af3c7bad83bf708-merged.mount: Deactivated successfully.
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.633 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:59 compute-0 podman[302899]: 2025-10-07 14:10:59.636791348 +0000 UTC m=+0.115812745 container cleanup 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.655 2 DEBUG nova.virt.libvirt.vif [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1443725263',display_name='tempest-ImagesTestJSON-server-1443725263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1443725263',id=35,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-pcdh9zoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:58Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=ade69915-c1e6-4b99-b8e6-8031c7f04049,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.655 2 DEBUG nova.network.os_vif_util [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.656 2 DEBUG nova.network.os_vif_util [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.656 2 DEBUG os_vif [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:10:59 compute-0 ovn_controller[151684]: 2025-10-07T14:10:59Z|00248|binding|INFO|Releasing lport 3f4a637c-ddfa-49f0-b263-224957faec29 from this chassis (sb_readonly=0)
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.660 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:f0:a5 10.100.0.4'], port_security=['fa:16:3e:a9:f0:a5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ade69915-c1e6-4b99-b8e6-8031c7f04049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3f4a637c-ddfa-49f0-b263-224957faec29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f4a637c-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:59 compute-0 systemd[1]: libpod-conmon-67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154.scope: Deactivated successfully.
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.666 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:f0:a5 10.100.0.4'], port_security=['fa:16:3e:a9:f0:a5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ade69915-c1e6-4b99-b8e6-8031c7f04049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3f4a637c-ddfa-49f0-b263-224957faec29) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 podman[302952]: 2025-10-07 14:10:59.715370154 +0000 UTC m=+0.051134885 container remove 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.716 2 INFO os_vif [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd')
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.721 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[edc50c08-13d9-4f21-86a5-95e7e7065fa3]: (4, ('Tue Oct  7 02:10:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154)\n67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154\nTue Oct  7 02:10:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154)\n67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5828953b-2e4a-47b8-b3bc-b0cff0020662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.724 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:10:59 compute-0 kernel: tap9f80456d-d0: left promiscuous mode
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.743 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6df5bf-67a2-46f9-bb24-e14fe1f6b312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f026928-ef50-4664-a377-60e4f90eaf4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8611b9be-f555-4763-a457-4e97c8c2ac93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ffd16f-c822-4476-829f-2bc7450f7fc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683276, 'reachable_time': 17326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303017, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.794 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.794 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c69a87-9768-4104-a7b8-4ace15b5463e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.796 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:10:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.796 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3f4a637c-ddfa-49f0-b263-224957faec29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.798 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.799 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7339fe5-6f3d-4ef8-afc9-f93bd10a33c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.800 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3f4a637c-ddfa-49f0-b263-224957faec29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.801 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:10:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:10:59.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0f4951-e0c0-40e3-b7ef-c466fd28b723]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.870 2 DEBUG nova.objects.instance [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid 23f8753a-d049-4882-a3f5-7878ea0c5480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.894 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.894 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Ensure instance console log exists: /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.895 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.895 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:10:59 compute-0 nova_compute[259550]: 2025-10-07 14:10:59.895 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.029 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Successfully created port: 46ea659d-3f0b-4237-a755-fe64ce3a2200 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:00.044 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.195 2 INFO nova.virt.libvirt.driver [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Deleting instance files /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049_del
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.197 2 INFO nova.virt.libvirt.driver [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Deletion of /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049_del complete
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.252 2 INFO nova.compute.manager [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.252 2 DEBUG oslo.service.loopingcall [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.252 2 DEBUG nova.compute.manager [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.253 2 DEBUG nova.network.neutron [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:11:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 07 14:11:00 compute-0 ceph-mon[74295]: pgmap v1364: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Oct 07 14:11:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 07 14:11:00 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 07 14:11:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:00.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.055 2 DEBUG nova.compute.manager [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-unplugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.056 2 DEBUG oslo_concurrency.lockutils [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.056 2 DEBUG oslo_concurrency.lockutils [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.056 2 DEBUG oslo_concurrency.lockutils [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.057 2 DEBUG nova.compute.manager [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] No waiting events found dispatching network-vif-unplugged-3f4a637c-ddfa-49f0-b263-224957faec29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.057 2 DEBUG nova.compute.manager [req-ba5cb0b5-8612-4d40-aa11-795fff846b08 req-47271ab4-da56-470f-b5c0-d726aadd5a1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-unplugged-3f4a637c-ddfa-49f0-b263-224957faec29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:11:01 compute-0 ceph-mon[74295]: osdmap e186: 3 total, 3 up, 3 in
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.519 2 DEBUG nova.network.neutron [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.533 2 INFO nova.compute.manager [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 1.28 seconds to deallocate network for instance.
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.570 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.570 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.624 2 DEBUG oslo_concurrency.processutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.682 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Successfully updated port: 46ea659d-3f0b-4237-a755-fe64ce3a2200 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.700 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.701 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:01 compute-0 nova_compute[259550]: 2025-10-07 14:11:01.701 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058283515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.068 2 DEBUG oslo_concurrency.processutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.075 2 DEBUG nova.compute.provider_tree [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.088 2 DEBUG nova.scheduler.client.report [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.110 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.132 2 INFO nova.scheduler.client.report [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance ade69915-c1e6-4b99-b8e6-8031c7f04049
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.161 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:02 compute-0 nova_compute[259550]: 2025-10-07 14:11:02.208 2 DEBUG oslo_concurrency.lockutils [None req-849398cb-f248-4d52-b80f-c0efdca4ec44 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:02 compute-0 ceph-mon[74295]: pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.1 MiB/s wr, 204 op/s
Oct 07 14:11:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1058283515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:02.798 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.022 2 DEBUG nova.network.neutron [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Updating instance_info_cache with network_info: [{"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.092 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.093 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Instance network_info: |[{"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.095 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Start _get_guest_xml network_info=[{"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.099 2 WARNING nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.104 2 DEBUG nova.virt.libvirt.host [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.105 2 DEBUG nova.virt.libvirt.host [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.110 2 DEBUG nova.virt.libvirt.host [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.111 2 DEBUG nova.virt.libvirt.host [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.111 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.112 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.112 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.112 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.113 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.113 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.113 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.113 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.113 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.114 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.115 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.115 2 DEBUG nova.virt.hardware [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.118 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.227 2 DEBUG nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.227 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.228 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.228 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.228 2 DEBUG nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] No waiting events found dispatching network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.228 2 WARNING nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received unexpected event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 for instance with vm_state deleted and task_state None.
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.228 2 DEBUG nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-deleted-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.229 2 DEBUG nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-changed-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.229 2 DEBUG nova.compute.manager [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Refreshing instance network info cache due to event network-changed-46ea659d-3f0b-4237-a755-fe64ce3a2200. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.229 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.229 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.230 2 DEBUG nova.network.neutron [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Refreshing network info cache for port 46ea659d-3f0b-4237-a755-fe64ce3a2200 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.302 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "34abf10d-dbb0-41fb-abde-a52be331cc12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.303 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.319 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.388 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.389 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.396 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.396 2 INFO nova.compute.claims [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:11:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176856745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.582 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.612 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 169 op/s
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.620 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1176856745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:03 compute-0 nova_compute[259550]: 2025-10-07 14:11:03.901 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050825910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.092 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.096 2 DEBUG nova.virt.libvirt.vif [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1205407249',display_name='tempest-DeleteServersTestJSON-server-1205407249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1205407249',id=36,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-0urxbnp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:59Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=23f8753a-d049-4882-a3f5-7878ea0c5480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.097 2 DEBUG nova.network.os_vif_util [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.098 2 DEBUG nova.network.os_vif_util [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.100 2 DEBUG nova.objects.instance [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23f8753a-d049-4882-a3f5-7878ea0c5480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.120 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <uuid>23f8753a-d049-4882-a3f5-7878ea0c5480</uuid>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <name>instance-00000024</name>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersTestJSON-server-1205407249</nova:name>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:03</nova:creationTime>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <nova:port uuid="46ea659d-3f0b-4237-a755-fe64ce3a2200">
Oct 07 14:11:04 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="serial">23f8753a-d049-4882-a3f5-7878ea0c5480</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="uuid">23f8753a-d049-4882-a3f5-7878ea0c5480</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/23f8753a-d049-4882-a3f5-7878ea0c5480_disk">
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config">
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:da:70:b0"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <target dev="tap46ea659d-3f"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/console.log" append="off"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.123 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Preparing to wait for external event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.124 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.124 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.125 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.126 2 DEBUG nova.virt.libvirt.vif [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1205407249',display_name='tempest-DeleteServersTestJSON-server-1205407249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1205407249',id=36,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-0urxbnp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:59Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=23f8753a-d049-4882-a3f5-7878ea0c5480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.127 2 DEBUG nova.network.os_vif_util [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.128 2 DEBUG nova.network.os_vif_util [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.129 2 DEBUG os_vif [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.132 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46ea659d-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46ea659d-3f, col_values=(('external_ids', {'iface-id': '46ea659d-3f0b-4237-a755-fe64ce3a2200', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:70:b0', 'vm-uuid': '23f8753a-d049-4882-a3f5-7878ea0c5480'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:04 compute-0 NetworkManager[44949]: <info>  [1759846264.1414] manager: (tap46ea659d-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.146 2 INFO os_vif [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f')
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.201 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.202 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.202 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:da:70:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.202 2 INFO nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Using config drive
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.227 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415168359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.367 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.374 2 DEBUG nova.compute.provider_tree [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.388 2 DEBUG nova.scheduler.client.report [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.410 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.411 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.459 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.459 2 DEBUG nova.network.neutron [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.476 2 INFO nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.511 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.600 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.602 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.602 2 INFO nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Creating image(s)
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.625 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.654 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.677 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.682 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:04 compute-0 ceph-mon[74295]: pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 169 op/s
Oct 07 14:11:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2050825910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1415168359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.760 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.761 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.761 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.762 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.783 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:04 compute-0 nova_compute[259550]: 2025-10-07 14:11:04.788 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 34abf10d-dbb0-41fb-abde-a52be331cc12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.098 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 34abf10d-dbb0-41fb-abde-a52be331cc12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:05 compute-0 podman[303261]: 2025-10-07 14:11:05.106784172 +0000 UTC m=+0.091227759 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.181 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] resizing rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:11:05 compute-0 podman[303294]: 2025-10-07 14:11:05.196435068 +0000 UTC m=+0.060417919 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.282 2 DEBUG nova.objects.instance [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'migration_context' on Instance uuid 34abf10d-dbb0-41fb-abde-a52be331cc12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.405 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.406 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Ensure instance console log exists: /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.407 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.407 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.407 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 100 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.762 2 DEBUG nova.network.neutron [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.762 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.764 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.770 2 WARNING nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.776 2 DEBUG nova.virt.libvirt.host [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.778 2 DEBUG nova.virt.libvirt.host [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.781 2 DEBUG nova.virt.libvirt.host [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.782 2 DEBUG nova.virt.libvirt.host [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.782 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.782 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.783 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.783 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.783 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.784 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.784 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.784 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.784 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.785 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.785 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.785 2 DEBUG nova.virt.hardware [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.788 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 07 14:11:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 07 14:11:05 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.862 2 INFO nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Creating config drive at /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config
Oct 07 14:11:05 compute-0 nova_compute[259550]: 2025-10-07 14:11:05.868 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5c7od5ux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.014 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5c7od5ux" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.054 2 DEBUG nova.storage.rbd_utils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.060 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config 23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.238 2 DEBUG oslo_concurrency.processutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config 23f8753a-d049-4882-a3f5-7878ea0c5480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.240 2 INFO nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Deleting local config drive /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480/disk.config because it was imported into RBD.
Oct 07 14:11:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/505411119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.279 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.304 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:06 compute-0 kernel: tap46ea659d-3f: entered promiscuous mode
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.3097] manager: (tap46ea659d-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Oct 07 14:11:06 compute-0 ovn_controller[151684]: 2025-10-07T14:11:06Z|00249|binding|INFO|Claiming lport 46ea659d-3f0b-4237-a755-fe64ce3a2200 for this chassis.
Oct 07 14:11:06 compute-0 ovn_controller[151684]: 2025-10-07T14:11:06Z|00250|binding|INFO|46ea659d-3f0b-4237-a755-fe64ce3a2200: Claiming fa:16:3e:da:70:b0 10.100.0.10
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.322 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:06 compute-0 systemd-udevd[303472]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:06 compute-0 systemd-machined[214580]: New machine qemu-40-instance-00000024.
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.3564] device (tap46ea659d-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.3572] device (tap46ea659d-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.359 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:70:b0 10.100.0.10'], port_security=['fa:16:3e:da:70:b0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '23f8753a-d049-4882-a3f5-7878ea0c5480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=46ea659d-3f0b-4237-a755-fe64ce3a2200) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.360 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 46ea659d-3f0b-4237-a755-fe64ce3a2200 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.361 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:06 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.377 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37dff2ef-7514-4a8a-89f3-a33161c4f21f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.378 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.380 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.380 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d72193-6356-4e6f-a644-51734015a1e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c31cf649-0dec-458d-b245-cb5ee39e4c24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.401 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[60576397-030d-4d38-9674-33b286bdea2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 ovn_controller[151684]: 2025-10-07T14:11:06Z|00251|binding|INFO|Setting lport 46ea659d-3f0b-4237-a755-fe64ce3a2200 ovn-installed in OVS
Oct 07 14:11:06 compute-0 ovn_controller[151684]: 2025-10-07T14:11:06Z|00252|binding|INFO|Setting lport 46ea659d-3f0b-4237-a755-fe64ce3a2200 up in Southbound
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.418 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3174651-5e48-45e6-803b-5322a9a3daa5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.440 2 DEBUG nova.network.neutron [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Updated VIF entry in instance network info cache for port 46ea659d-3f0b-4237-a755-fe64ce3a2200. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.440 2 DEBUG nova.network.neutron [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Updating instance_info_cache with network_info: [{"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.461 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b72fd45-1a24-4c23-8b1b-ec68c1e1e4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 systemd-udevd[303475]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.4691] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.467 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e61b85-599f-458a-adda-48812b0b6be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.505 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4efba3ec-33cb-43e8-a809-f0420153317f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[570680c9-b688-4386-b622-1554cec04bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.5382] device (tap8accac57-a0): carrier: link connected
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.546 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[63770a87-d5a4-4f57-9133-f9198e4e1031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.561 2 DEBUG oslo_concurrency.lockutils [req-2579d8d2-9a3e-4c30-86b1-aa18736839f1 req-a85fe579-d308-4fca-9596-94458fe11338 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23f8753a-d049-4882-a3f5-7878ea0c5480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.567 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c39e981e-1d03-43f8-9855-b0986e5edfac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685010, 'reachable_time': 33307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303525, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.588 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd0decb-f5bf-44ab-9812-80d04d28b9c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685010, 'tstamp': 685010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303541, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.611 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fce1b29-91a8-4733-b586-f1ada5b7256c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685010, 'reachable_time': 33307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303545, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b67ac37-32bb-46dc-a804-a531a0a44250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f1c50d-732b-429b-86f2-cd0e8f1f9d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.737 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:06 compute-0 kernel: tap8accac57-a0: entered promiscuous mode
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 NetworkManager[44949]: <info>  [1759846266.7426] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:06 compute-0 ovn_controller[151684]: 2025-10-07T14:11:06Z|00253|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.754 2 DEBUG nova.compute.manager [req-4bc568df-887f-451e-b0a5-dc202074ce73 req-c9d73469-5faf-497d-a393-45cd8c8c7de6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.755 2 DEBUG oslo_concurrency.lockutils [req-4bc568df-887f-451e-b0a5-dc202074ce73 req-c9d73469-5faf-497d-a393-45cd8c8c7de6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.756 2 DEBUG oslo_concurrency.lockutils [req-4bc568df-887f-451e-b0a5-dc202074ce73 req-c9d73469-5faf-497d-a393-45cd8c8c7de6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.752 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3e8e92-c147-419c-ac49-42c2add76bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.755 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:11:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:06.756 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.756 2 DEBUG oslo_concurrency.lockutils [req-4bc568df-887f-451e-b0a5-dc202074ce73 req-c9d73469-5faf-497d-a393-45cd8c8c7de6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.758 2 DEBUG nova.compute.manager [req-4bc568df-887f-451e-b0a5-dc202074ce73 req-c9d73469-5faf-497d-a393-45cd8c8c7de6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Processing event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159591360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.791 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.793 2 DEBUG nova.objects.instance [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'pci_devices' on Instance uuid 34abf10d-dbb0-41fb-abde-a52be331cc12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.809 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <uuid>34abf10d-dbb0-41fb-abde-a52be331cc12</uuid>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <name>instance-00000025</name>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1055647797</nova:name>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:05</nova:creationTime>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:user uuid="e3b16cce26de4b00b3d15ac4efd418ca">tempest-ListImageFiltersTestJSON-690536026-project-member</nova:user>
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <nova:project uuid="cf7686fa46ac42eb816008a28a8b964e">tempest-ListImageFiltersTestJSON-690536026</nova:project>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="serial">34abf10d-dbb0-41fb-abde-a52be331cc12</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="uuid">34abf10d-dbb0-41fb-abde-a52be331cc12</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/34abf10d-dbb0-41fb-abde-a52be331cc12_disk">
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config">
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/console.log" append="off"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:06 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:06 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.831 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.832 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:06 compute-0 ceph-mon[74295]: pgmap v1368: 305 pgs: 305 active+clean; 100 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Oct 07 14:11:06 compute-0 ceph-mon[74295]: osdmap e187: 3 total, 3 up, 3 in
Oct 07 14:11:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/505411119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4159591360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.857 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.896 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.897 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.898 2 INFO nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Using config drive
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.924 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.951 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.952 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.959 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:11:06 compute-0 nova_compute[259550]: 2025-10-07 14:11:06.959 2 INFO nova.compute.claims [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.087 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.165 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.167 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846267.1648345, 23f8753a-d049-4882-a3f5-7878ea0c5480 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.167 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] VM Started (Lifecycle Event)
Oct 07 14:11:07 compute-0 podman[303624]: 2025-10-07 14:11:07.179444232 +0000 UTC m=+0.060012279 container create 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.180 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.195 2 INFO nova.virt.libvirt.driver [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Instance spawned successfully.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.197 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.228 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:07 compute-0 systemd[1]: Started libpod-conmon-1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0.scope.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.232 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.233 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.233 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.234 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.234 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.234 2 DEBUG nova.virt.libvirt.driver [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:07 compute-0 podman[303624]: 2025-10-07 14:11:07.15004605 +0000 UTC m=+0.030614126 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.255 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff024c5b7d0d2a8b0e4f8c62d8270e242c735671a59e69c0d9e9832570498e8c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:07 compute-0 podman[303624]: 2025-10-07 14:11:07.283693573 +0000 UTC m=+0.164261649 container init 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.285 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846267.1668239, 23f8753a-d049-4882-a3f5-7878ea0c5480 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] VM Paused (Lifecycle Event)
Oct 07 14:11:07 compute-0 podman[303624]: 2025-10-07 14:11:07.289733842 +0000 UTC m=+0.170301888 container start 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.295 2 INFO nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Took 8.23 seconds to spawn the instance on the hypervisor.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.296 2 DEBUG nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.305 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:07 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [NOTICE]   (303661) : New worker (303663) forked
Oct 07 14:11:07 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [NOTICE]   (303661) : Loading success.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.315 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846267.1724389, 23f8753a-d049-4882-a3f5-7878ea0c5480 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.315 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] VM Resumed (Lifecycle Event)
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.346 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.349 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.373 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.385 2 INFO nova.compute.manager [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Took 9.31 seconds to build instance.
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.415 2 DEBUG oslo_concurrency.lockutils [None req-990395cf-651c-405a-87f5-a6953d87e6cb a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277155550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.558 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.566 2 DEBUG nova.compute.provider_tree [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.579 2 DEBUG nova.scheduler.client.report [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.608 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.609 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:11:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 100 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.654 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.654 2 DEBUG nova.network.neutron [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.670 2 INFO nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.685 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.748 2 INFO nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Creating config drive at /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.752 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpphyb5lko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.794 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.796 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.796 2 INFO nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Creating image(s)
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.822 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.850 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2277155550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.879 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.884 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.917 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpphyb5lko" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.943 2 DEBUG nova.storage.rbd_utils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image 34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.947 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config 34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.988 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.989 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.990 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:07 compute-0 nova_compute[259550]: 2025-10-07 14:11:07.990 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.011 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.014 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.131 2 DEBUG oslo_concurrency.processutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config 34abf10d-dbb0-41fb-abde-a52be331cc12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.132 2 INFO nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Deleting local config drive /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12/disk.config because it was imported into RBD.
Oct 07 14:11:08 compute-0 systemd-machined[214580]: New machine qemu-41-instance-00000025.
Oct 07 14:11:08 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.319 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.395 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] resizing rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.493 2 DEBUG nova.objects.instance [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'migration_context' on Instance uuid cf0c1f82-7b3b-4b62-a766-c12de66a966a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.510 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.511 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Ensure instance console log exists: /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.511 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.511 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.511 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.869 2 DEBUG nova.network.neutron [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.869 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.870 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:08 compute-0 ceph-mon[74295]: pgmap v1370: 305 pgs: 305 active+clean; 100 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.877 2 WARNING nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.882 2 DEBUG nova.virt.libvirt.host [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.883 2 DEBUG nova.virt.libvirt.host [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.886 2 DEBUG nova.virt.libvirt.host [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.886 2 DEBUG nova.virt.libvirt.host [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.886 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.887 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.887 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.888 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.888 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.888 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.889 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.889 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.889 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.890 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.890 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.890 2 DEBUG nova.virt.hardware [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:08 compute-0 nova_compute[259550]: 2025-10-07 14:11:08.893 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.205 2 DEBUG nova.compute.manager [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.207 2 DEBUG oslo_concurrency.lockutils [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.207 2 DEBUG oslo_concurrency.lockutils [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.208 2 DEBUG oslo_concurrency.lockutils [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.209 2 DEBUG nova.compute.manager [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] No waiting events found dispatching network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.210 2 WARNING nova.compute.manager [req-f351ec80-1020-49e4-9e1c-eb1f40ff7ea4 req-a43c1261-5315-4435-8fee-18dd72fd5d87 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received unexpected event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 for instance with vm_state active and task_state None.
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.327 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846269.32387, 34abf10d-dbb0-41fb-abde-a52be331cc12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.332 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] VM Resumed (Lifecycle Event)
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.346 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.348 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.357 2 INFO nova.virt.libvirt.driver [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance spawned successfully.
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.358 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:11:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351335263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.410 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.478 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.482 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 169 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.5 MiB/s wr, 216 op/s
Oct 07 14:11:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2351335263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121910279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.943 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.946 2 DEBUG nova.objects.instance [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'pci_devices' on Instance uuid cf0c1f82-7b3b-4b62-a766-c12de66a966a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.965 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.972 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.972 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.973 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.974 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.974 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.975 2 DEBUG nova.virt.libvirt.driver [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:09 compute-0 nova_compute[259550]: 2025-10-07 14:11:09.982 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:10 compute-0 nova_compute[259550]: 2025-10-07 14:11:10.769 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <uuid>cf0c1f82-7b3b-4b62-a766-c12de66a966a</uuid>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <name>instance-00000026</name>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1621210517</nova:name>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:08</nova:creationTime>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:user uuid="e3b16cce26de4b00b3d15ac4efd418ca">tempest-ListImageFiltersTestJSON-690536026-project-member</nova:user>
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <nova:project uuid="cf7686fa46ac42eb816008a28a8b964e">tempest-ListImageFiltersTestJSON-690536026</nova:project>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="serial">cf0c1f82-7b3b-4b62-a766-c12de66a966a</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="uuid">cf0c1f82-7b3b-4b62-a766-c12de66a966a</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk">
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config">
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/console.log" append="off"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:10 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:10 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:10 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:10 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:10 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:10 compute-0 nova_compute[259550]: 2025-10-07 14:11:10.772 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:10 compute-0 nova_compute[259550]: 2025-10-07 14:11:10.772 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846269.324347, 34abf10d-dbb0-41fb-abde-a52be331cc12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:10 compute-0 nova_compute[259550]: 2025-10-07 14:11:10.772 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] VM Started (Lifecycle Event)
Oct 07 14:11:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:10 compute-0 nova_compute[259550]: 2025-10-07 14:11:10.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:10 compute-0 ceph-mon[74295]: pgmap v1371: 305 pgs: 305 active+clean; 169 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.5 MiB/s wr, 216 op/s
Oct 07 14:11:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3121910279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.186 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.193 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.414 2 INFO nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 6.81 seconds to spawn the instance on the hypervisor.
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.415 2 DEBUG nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.713 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.714 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.715 2 INFO nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Using config drive
Oct 07 14:11:11 compute-0 nova_compute[259550]: 2025-10-07 14:11:11.744 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:12 compute-0 nova_compute[259550]: 2025-10-07 14:11:12.247 2 INFO nova.compute.manager [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 8.88 seconds to build instance.
Oct 07 14:11:12 compute-0 nova_compute[259550]: 2025-10-07 14:11:12.648 2 DEBUG oslo_concurrency.lockutils [None req-5bb1381a-61d5-4fce-b495-e477e33505c2 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:12 compute-0 ceph-mon[74295]: pgmap v1372: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 07 14:11:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 07 14:11:13 compute-0 ceph-mon[74295]: pgmap v1373: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.570 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846259.5678282, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.571 2 INFO nova.compute.manager [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Stopped (Lifecycle Event)
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.644 2 DEBUG nova.compute.manager [None req-1af7920f-0a62-4a8b-9103-312fd06f6bff - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.750 2 INFO nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Creating config drive at /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.757 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza9x_bo7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.901 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza9x_bo7" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.930 2 DEBUG nova.storage.rbd_utils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] rbd image cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:14 compute-0 nova_compute[259550]: 2025-10-07 14:11:14.934 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:15 compute-0 nova_compute[259550]: 2025-10-07 14:11:15.092 2 DEBUG oslo_concurrency.processutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:15 compute-0 nova_compute[259550]: 2025-10-07 14:11:15.093 2 INFO nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Deleting local config drive /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a/disk.config because it was imported into RBD.
Oct 07 14:11:15 compute-0 systemd-machined[214580]: New machine qemu-42-instance-00000026.
Oct 07 14:11:15 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Oct 07 14:11:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Oct 07 14:11:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:15 compute-0 nova_compute[259550]: 2025-10-07 14:11:15.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.019 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846276.0178053, cf0c1f82-7b3b-4b62-a766-c12de66a966a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.020 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] VM Resumed (Lifecycle Event)
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.025 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.025 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.035 2 INFO nova.virt.libvirt.driver [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance spawned successfully.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.035 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.041 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.051 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.058 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.059 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.060 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.061 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.061 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.062 2 DEBUG nova.virt.libvirt.driver [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.076 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.076 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846276.0190277, cf0c1f82-7b3b-4b62-a766-c12de66a966a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.076 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] VM Started (Lifecycle Event)
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.096 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.124 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.128 2 INFO nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Took 8.33 seconds to spawn the instance on the hypervisor.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.129 2 DEBUG nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.641 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.642 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.642 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.643 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.643 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.645 2 INFO nova.compute.manager [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Terminating instance
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.646 2 DEBUG nova.compute.manager [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.648 2 INFO nova.compute.manager [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Took 9.72 seconds to build instance.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.681 2 DEBUG oslo_concurrency.lockutils [None req-6189b427-89c1-4dc6-8791-daf4b1943679 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:16 compute-0 ceph-mon[74295]: pgmap v1374: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Oct 07 14:11:16 compute-0 kernel: tap46ea659d-3f (unregistering): left promiscuous mode
Oct 07 14:11:16 compute-0 NetworkManager[44949]: <info>  [1759846276.6916] device (tap46ea659d-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:11:16 compute-0 ovn_controller[151684]: 2025-10-07T14:11:16Z|00254|binding|INFO|Releasing lport 46ea659d-3f0b-4237-a755-fe64ce3a2200 from this chassis (sb_readonly=0)
Oct 07 14:11:16 compute-0 ovn_controller[151684]: 2025-10-07T14:11:16Z|00255|binding|INFO|Setting lport 46ea659d-3f0b-4237-a755-fe64ce3a2200 down in Southbound
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 ovn_controller[151684]: 2025-10-07T14:11:16Z|00256|binding|INFO|Removing iface tap46ea659d-3f ovn-installed in OVS
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:16.722 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:70:b0 10.100.0.10'], port_security=['fa:16:3e:da:70:b0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '23f8753a-d049-4882-a3f5-7878ea0c5480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=46ea659d-3f0b-4237-a755-fe64ce3a2200) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:11:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:16.725 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 46ea659d-3f0b-4237-a755-fe64ce3a2200 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:11:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:16.727 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:11:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:16.729 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45951e7f-2fe3-4a8e-98f1-831db9c9c449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:16.730 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 07 14:11:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 10.273s CPU time.
Oct 07 14:11:16 compute-0 systemd-machined[214580]: Machine qemu-40-instance-00000024 terminated.
Oct 07 14:11:16 compute-0 NetworkManager[44949]: <info>  [1759846276.8718] manager: (tap46ea659d-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.892 2 INFO nova.virt.libvirt.driver [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Instance destroyed successfully.
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.893 2 DEBUG nova.objects.instance [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 23f8753a-d049-4882-a3f5-7878ea0c5480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:16 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [NOTICE]   (303661) : haproxy version is 2.8.14-c23fe91
Oct 07 14:11:16 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [NOTICE]   (303661) : path to executable is /usr/sbin/haproxy
Oct 07 14:11:16 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [WARNING]  (303661) : Exiting Master process...
Oct 07 14:11:16 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [ALERT]    (303661) : Current worker (303663) exited with code 143 (Terminated)
Oct 07 14:11:16 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[303657]: [WARNING]  (303661) : All workers exited. Exiting... (0)
Oct 07 14:11:16 compute-0 systemd[1]: libpod-1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0.scope: Deactivated successfully.
Oct 07 14:11:16 compute-0 podman[304137]: 2025-10-07 14:11:16.921080768 +0000 UTC m=+0.072099577 container died 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.931 2 DEBUG nova.virt.libvirt.vif [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1205407249',display_name='tempest-DeleteServersTestJSON-server-1205407249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1205407249',id=36,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:11:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-0urxbnp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:11:07Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=23f8753a-d049-4882-a3f5-7878ea0c5480,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.932 2 DEBUG nova.network.os_vif_util [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "address": "fa:16:3e:da:70:b0", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46ea659d-3f", "ovs_interfaceid": "46ea659d-3f0b-4237-a755-fe64ce3a2200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.933 2 DEBUG nova.network.os_vif_util [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.933 2 DEBUG os_vif [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46ea659d-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:11:16 compute-0 nova_compute[259550]: 2025-10-07 14:11:16.977 2 INFO os_vif [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:70:b0,bridge_name='br-int',has_traffic_filtering=True,id=46ea659d-3f0b-4237-a755-fe64ce3a2200,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46ea659d-3f')
Oct 07 14:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0-userdata-shm.mount: Deactivated successfully.
Oct 07 14:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff024c5b7d0d2a8b0e4f8c62d8270e242c735671a59e69c0d9e9832570498e8c-merged.mount: Deactivated successfully.
Oct 07 14:11:17 compute-0 podman[304137]: 2025-10-07 14:11:17.033582725 +0000 UTC m=+0.184601524 container cleanup 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:11:17 compute-0 systemd[1]: libpod-conmon-1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0.scope: Deactivated successfully.
Oct 07 14:11:17 compute-0 podman[304194]: 2025-10-07 14:11:17.113537846 +0000 UTC m=+0.049654515 container remove 1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.122 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5be05894-97e1-470b-9726-3a87266a04a8]: (4, ('Tue Oct  7 02:11:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0)\n1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0\nTue Oct  7 02:11:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0)\n1bd88e583df174d1707ac15def476cc8697485665702e5aebdd164cfe43b5db0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.125 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f16255a9-5c42-45ec-b6de-11747ab4703d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:17 compute-0 kernel: tap8accac57-a0: left promiscuous mode
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.159 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a98563a1-5d55-4313-b981-24f6ff6dc319]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.191 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8f80ae-34a8-462b-8210-841c238b3739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.192 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8df074a-302b-4848-88e8-cfc64df845f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47d8ebe1-7223-46b8-a536-f160bf02af59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685001, 'reachable_time': 18882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304207, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.216 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:11:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:17.217 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7f49ac39-ec43-4d57-8bc0-ca05a77d8058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.425 2 INFO nova.virt.libvirt.driver [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Deleting instance files /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480_del
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.426 2 INFO nova.virt.libvirt.driver [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Deletion of /var/lib/nova/instances/23f8753a-d049-4882-a3f5-7878ea0c5480_del complete
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.591 2 INFO nova.compute.manager [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.592 2 DEBUG oslo.service.loopingcall [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.592 2 DEBUG nova.compute.manager [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:11:17 compute-0 nova_compute[259550]: 2025-10-07 14:11:17.592 2 DEBUG nova.network.neutron [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:11:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Oct 07 14:11:18 compute-0 ceph-mon[74295]: pgmap v1375: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.908 2 DEBUG nova.compute.manager [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-unplugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.909 2 DEBUG oslo_concurrency.lockutils [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.910 2 DEBUG oslo_concurrency.lockutils [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.910 2 DEBUG oslo_concurrency.lockutils [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.910 2 DEBUG nova.compute.manager [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] No waiting events found dispatching network-vif-unplugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:18 compute-0 nova_compute[259550]: 2025-10-07 14:11:18.910 2 DEBUG nova.compute.manager [req-ac2f59e2-a788-4374-b27a-6d1f71257aa2 req-759be2a6-6fab-47ab-ac95-3949e267ee21 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-unplugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:11:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 150 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.0 MiB/s wr, 264 op/s
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.421 2 DEBUG nova.network.neutron [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.476 2 INFO nova.compute.manager [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Took 2.88 seconds to deallocate network for instance.
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.571 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.572 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.667 2 DEBUG oslo_concurrency.processutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:20 compute-0 ceph-mon[74295]: pgmap v1376: 305 pgs: 305 active+clean; 150 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.0 MiB/s wr, 264 op/s
Oct 07 14:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.870 2 DEBUG nova.compute.manager [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:20 compute-0 nova_compute[259550]: 2025-10-07 14:11:20.942 2 INFO nova.compute.manager [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] instance snapshotting
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.073 2 DEBUG nova.compute.manager [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.074 2 DEBUG oslo_concurrency.lockutils [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.074 2 DEBUG oslo_concurrency.lockutils [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.075 2 DEBUG oslo_concurrency.lockutils [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.075 2 DEBUG nova.compute.manager [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] No waiting events found dispatching network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.075 2 WARNING nova.compute.manager [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received unexpected event network-vif-plugged-46ea659d-3f0b-4237-a755-fe64ce3a2200 for instance with vm_state deleted and task_state None.
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.076 2 DEBUG nova.compute.manager [req-57bc04c5-1166-4d73-8ffd-274c2e8ea51d req-04035947-237d-4bd1-8b7a-c1a1a2e6b208 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Received event network-vif-deleted-46ea659d-3f0b-4237-a755-fe64ce3a2200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421258685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.139 2 INFO nova.virt.libvirt.driver [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Beginning live snapshot process
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.159 2 DEBUG oslo_concurrency.processutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.167 2 DEBUG nova.compute.provider_tree [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.197 2 DEBUG nova.scheduler.client.report [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.253 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.288 2 INFO nova.scheduler.client.report [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 23f8753a-d049-4882-a3f5-7878ea0c5480
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.369 2 DEBUG nova.virt.libvirt.imagebackend [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.416 2 DEBUG oslo_concurrency.lockutils [None req-8520bf6a-d5f9-49b9-a27b-36379bc3ba38 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "23f8753a-d049-4882-a3f5-7878ea0c5480" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.601 2 DEBUG nova.storage.rbd_utils [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(f03ac9a54be8462b8d3922a044dc334b) on rbd image(34abf10d-dbb0-41fb-abde-a52be331cc12_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 136 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 531 KiB/s wr, 206 op/s
Oct 07 14:11:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 07 14:11:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 07 14:11:21 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 07 14:11:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/421258685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.783 2 DEBUG nova.storage.rbd_utils [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] cloning vms/34abf10d-dbb0-41fb-abde-a52be331cc12_disk@f03ac9a54be8462b8d3922a044dc334b to images/b873d498-8f97-4ec1-823d-50d8439cc8ad clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.935 2 DEBUG nova.storage.rbd_utils [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] flattening images/b873d498-8f97-4ec1-823d-50d8439cc8ad flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:11:21 compute-0 nova_compute[259550]: 2025-10-07 14:11:21.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:22 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 07 14:11:22 compute-0 nova_compute[259550]: 2025-10-07 14:11:22.405 2 DEBUG nova.storage.rbd_utils [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] removing snapshot(f03ac9a54be8462b8d3922a044dc334b) on rbd image(34abf10d-dbb0-41fb-abde-a52be331cc12_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:11:22 compute-0 nova_compute[259550]: 2025-10-07 14:11:22.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:11:22
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta']
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 07 14:11:22 compute-0 ceph-mon[74295]: pgmap v1377: 305 pgs: 305 active+clean; 136 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 531 KiB/s wr, 206 op/s
Oct 07 14:11:22 compute-0 ceph-mon[74295]: osdmap e188: 3 total, 3 up, 3 in
Oct 07 14:11:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 07 14:11:22 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 07 14:11:22 compute-0 nova_compute[259550]: 2025-10-07 14:11:22.786 2 DEBUG nova.storage.rbd_utils [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(snap) on rbd image(b873d498-8f97-4ec1-823d-50d8439cc8ad) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:11:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:11:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 136 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 282 KiB/s wr, 148 op/s
Oct 07 14:11:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 07 14:11:23 compute-0 ceph-mon[74295]: osdmap e189: 3 total, 3 up, 3 in
Oct 07 14:11:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 07 14:11:23 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 07 14:11:24 compute-0 podman[304373]: 2025-10-07 14:11:24.097057193 +0000 UTC m=+0.080205979 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:11:24 compute-0 podman[304374]: 2025-10-07 14:11:24.098201013 +0000 UTC m=+0.079393378 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:11:24 compute-0 ceph-mon[74295]: pgmap v1380: 305 pgs: 305 active+clean; 136 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 282 KiB/s wr, 148 op/s
Oct 07 14:11:24 compute-0 ceph-mon[74295]: osdmap e190: 3 total, 3 up, 3 in
Oct 07 14:11:24 compute-0 nova_compute[259550]: 2025-10-07 14:11:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:24 compute-0 nova_compute[259550]: 2025-10-07 14:11:24.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:11:25 compute-0 nova_compute[259550]: 2025-10-07 14:11:25.245 2 INFO nova.virt.libvirt.driver [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Snapshot image upload complete
Oct 07 14:11:25 compute-0 nova_compute[259550]: 2025-10-07 14:11:25.246 2 INFO nova.compute.manager [None req-91638300-1647-4efd-b04c-541a9ae72742 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 4.30 seconds to snapshot the instance on the hypervisor.
Oct 07 14:11:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 239 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 320 op/s
Oct 07 14:11:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 07 14:11:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 07 14:11:25 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 07 14:11:25 compute-0 nova_compute[259550]: 2025-10-07 14:11:25.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:25 compute-0 nova_compute[259550]: 2025-10-07 14:11:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:25 compute-0 nova_compute[259550]: 2025-10-07 14:11:25.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:11:26 compute-0 nova_compute[259550]: 2025-10-07 14:11:26.013 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:11:26 compute-0 ceph-mon[74295]: pgmap v1382: 305 pgs: 305 active+clean; 239 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 320 op/s
Oct 07 14:11:26 compute-0 ceph-mon[74295]: osdmap e191: 3 total, 3 up, 3 in
Oct 07 14:11:26 compute-0 nova_compute[259550]: 2025-10-07 14:11:26.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:27 compute-0 nova_compute[259550]: 2025-10-07 14:11:27.009 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:27 compute-0 nova_compute[259550]: 2025-10-07 14:11:27.011 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:27 compute-0 nova_compute[259550]: 2025-10-07 14:11:27.011 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:27 compute-0 nova_compute[259550]: 2025-10-07 14:11:27.012 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 239 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 12 MiB/s wr, 273 op/s
Oct 07 14:11:27 compute-0 nova_compute[259550]: 2025-10-07 14:11:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.092 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.094 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.094 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.094 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.095 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944294817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.524 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.643 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.643 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.647 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.647 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.815 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.817 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3881MB free_disk=59.922035217285156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.817 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:28 compute-0 nova_compute[259550]: 2025-10-07 14:11:28.818 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:28 compute-0 ceph-mon[74295]: pgmap v1384: 305 pgs: 305 active+clean; 239 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 12 MiB/s wr, 273 op/s
Oct 07 14:11:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/944294817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.005 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 34abf10d-dbb0-41fb-abde-a52be331cc12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.006 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cf0c1f82-7b3b-4b62-a766-c12de66a966a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.006 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.140 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.545 2 DEBUG nova.compute.manager [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.597 2 INFO nova.compute.manager [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] instance snapshotting
Oct 07 14:11:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598647156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.628 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 259 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 13 MiB/s wr, 336 op/s
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.634 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.670 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.767 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.767 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:29 compute-0 nova_compute[259550]: 2025-10-07 14:11:29.853 2 INFO nova.virt.libvirt.driver [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Beginning live snapshot process
Oct 07 14:11:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/598647156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:30 compute-0 nova_compute[259550]: 2025-10-07 14:11:30.061 2 DEBUG nova.virt.libvirt.imagebackend [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:11:30 compute-0 nova_compute[259550]: 2025-10-07 14:11:30.295 2 DEBUG nova.storage.rbd_utils [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(345e352ce4cc4737b0852a089932c0d8) on rbd image(cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 07 14:11:30 compute-0 ceph-mon[74295]: pgmap v1385: 305 pgs: 305 active+clean; 259 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 13 MiB/s wr, 336 op/s
Oct 07 14:11:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 07 14:11:30 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 07 14:11:30 compute-0 nova_compute[259550]: 2025-10-07 14:11:30.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:30 compute-0 nova_compute[259550]: 2025-10-07 14:11:30.977 2 DEBUG nova.storage.rbd_utils [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] cloning vms/cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk@345e352ce4cc4737b0852a089932c0d8 to images/46f0145a-9d8b-4374-9263-17d38827fcac clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.102 2 DEBUG nova.storage.rbd_utils [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] flattening images/46f0145a-9d8b-4374-9263-17d38827fcac flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.367 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.368 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.388 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.477 2 DEBUG nova.storage.rbd_utils [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] removing snapshot(345e352ce4cc4737b0852a089932c0d8) on rbd image(cf0c1f82-7b3b-4b62-a766-c12de66a966a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.488 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.489 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.495 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.496 2 INFO nova.compute.claims [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.622 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 272 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 12 MiB/s wr, 318 op/s
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.688 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.689 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.713 2 WARNING nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.713 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 34abf10d-dbb0-41fb-abde-a52be331cc12 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.713 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid cf0c1f82-7b3b-4b62-a766-c12de66a966a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.713 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.714 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "34abf10d-dbb0-41fb-abde-a52be331cc12" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.714 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.715 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.715 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.715 2 INFO nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] During sync_power_state the instance has a pending task (image_uploading). Skip.
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.715 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.716 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.757 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.888 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846276.887427, 23f8753a-d049-4882-a3f5-7878ea0c5480 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.888 2 INFO nova.compute.manager [-] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] VM Stopped (Lifecycle Event)
Oct 07 14:11:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 07 14:11:31 compute-0 ceph-mon[74295]: osdmap e192: 3 total, 3 up, 3 in
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.904 2 DEBUG nova.compute.manager [None req-7a30f45f-8849-4598-9fb6-dec1d3d52687 - - - - - -] [instance: 23f8753a-d049-4882-a3f5-7878ea0c5480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 07 14:11:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.952 2 DEBUG nova.storage.rbd_utils [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(snap) on rbd image(46f0145a-9d8b-4374-9263-17d38827fcac) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.994 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:31 compute-0 nova_compute[259550]: 2025-10-07 14:11:31.995 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039353805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.102 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.108 2 DEBUG nova.compute.provider_tree [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.143 2 DEBUG nova.scheduler.client.report [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.163 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.164 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.204 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.205 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.230 2 INFO nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.249 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015139675717913655 of space, bias 1.0, pg target 0.45419027153740965 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014143845233444033 of space, bias 1.0, pg target 0.42431535700332096 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:11:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.348 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.350 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.350 2 INFO nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Creating image(s)
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.372 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.399 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.425 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.430 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.473 2 DEBUG nova.policy [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.512 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.513 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.514 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.514 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.542 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.547 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:11:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267495188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:11:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267495188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.871 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 07 14:11:32 compute-0 ceph-mon[74295]: pgmap v1387: 305 pgs: 305 active+clean; 272 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 12 MiB/s wr, 318 op/s
Oct 07 14:11:32 compute-0 ceph-mon[74295]: osdmap e193: 3 total, 3 up, 3 in
Oct 07 14:11:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4039353805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4267495188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:11:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4267495188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 07 14:11:32 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.954 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.988 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Successfully created port: 1f0af7f0-3788-48b2-99ec-5716374a274b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.995 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.995 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:11:32 compute-0 nova_compute[259550]: 2025-10-07 14:11:32.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.024 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.090 2 DEBUG nova.objects.instance [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.103 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.104 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Ensure instance console log exists: /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.105 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.105 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.105 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.331 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.332 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.332 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:11:33 compute-0 nova_compute[259550]: 2025-10-07 14:11:33.332 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 34abf10d-dbb0-41fb-abde-a52be331cc12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 272 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 07 14:11:33 compute-0 ceph-mon[74295]: osdmap e194: 3 total, 3 up, 3 in
Oct 07 14:11:33 compute-0 ceph-mon[74295]: pgmap v1390: 305 pgs: 305 active+clean; 272 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.412 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.659 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Successfully updated port: 1f0af7f0-3788-48b2-99ec-5716374a274b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.663 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.676 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.677 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.677 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.681 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.681 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.682 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.787 2 DEBUG nova.compute.manager [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Received event network-changed-1f0af7f0-3788-48b2-99ec-5716374a274b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.787 2 DEBUG nova.compute.manager [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Refreshing instance network info cache due to event network-changed-1f0af7f0-3788-48b2-99ec-5716374a274b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.788 2 DEBUG oslo_concurrency.lockutils [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.907 2 INFO nova.virt.libvirt.driver [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Snapshot image upload complete
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.908 2 INFO nova.compute.manager [None req-275641db-1fdd-4f4f-88df-95d596c630be e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Took 5.31 seconds to snapshot the instance on the hypervisor.
Oct 07 14:11:34 compute-0 nova_compute[259550]: 2025-10-07 14:11:34.915 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 397 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 13 MiB/s wr, 233 op/s
Oct 07 14:11:35 compute-0 nova_compute[259550]: 2025-10-07 14:11:35.685 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 07 14:11:35 compute-0 nova_compute[259550]: 2025-10-07 14:11:35.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 07 14:11:35 compute-0 nova_compute[259550]: 2025-10-07 14:11:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:11:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 07 14:11:36 compute-0 podman[304785]: 2025-10-07 14:11:36.165791658 +0000 UTC m=+0.150412015 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:11:36 compute-0 podman[304786]: 2025-10-07 14:11:36.187884268 +0000 UTC m=+0.164652158 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.611 2 DEBUG nova.network.neutron [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Updating instance_info_cache with network_info: [{"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.648 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.649 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Instance network_info: |[{"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.649 2 DEBUG oslo_concurrency.lockutils [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.650 2 DEBUG nova.network.neutron [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Refreshing network info cache for port 1f0af7f0-3788-48b2-99ec-5716374a274b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.652 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Start _get_guest_xml network_info=[{"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.659 2 WARNING nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.665 2 DEBUG nova.virt.libvirt.host [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.666 2 DEBUG nova.virt.libvirt.host [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.678 2 DEBUG nova.virt.libvirt.host [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.679 2 DEBUG nova.virt.libvirt.host [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.680 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.680 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.681 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.681 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.681 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.681 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.681 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.682 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.682 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.682 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.682 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.682 2 DEBUG nova.virt.hardware [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.686 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.741 2 DEBUG nova.compute.manager [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.809 2 INFO nova.compute.manager [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] instance snapshotting
Oct 07 14:11:36 compute-0 ceph-mon[74295]: pgmap v1391: 305 pgs: 305 active+clean; 397 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 13 MiB/s wr, 233 op/s
Oct 07 14:11:36 compute-0 ceph-mon[74295]: osdmap e195: 3 total, 3 up, 3 in
Oct 07 14:11:36 compute-0 nova_compute[259550]: 2025-10-07 14:11:36.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.009 2 INFO nova.virt.libvirt.driver [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Beginning live snapshot process
Oct 07 14:11:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154006124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.164 2 DEBUG nova.virt.libvirt.imagebackend [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.167 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.239 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.245 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.536 2 DEBUG nova.storage.rbd_utils [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(2f1886ae1e514abc99f5e289ba92aecd) on rbd image(34abf10d-dbb0-41fb-abde-a52be331cc12_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 397 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 202 op/s
Oct 07 14:11:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025786694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.699 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.701 2 DEBUG nova.virt.libvirt.vif [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-970906339',display_name='tempest-DeleteServersTestJSON-server-970906339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-970906339',id=39,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-6xmxko0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:32Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.701 2 DEBUG nova.network.os_vif_util [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.702 2 DEBUG nova.network.os_vif_util [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.703 2 DEBUG nova.objects.instance [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.758 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <uuid>7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77</uuid>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <name>instance-00000027</name>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersTestJSON-server-970906339</nova:name>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:36</nova:creationTime>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <nova:port uuid="1f0af7f0-3788-48b2-99ec-5716374a274b">
Oct 07 14:11:37 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="serial">7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="uuid">7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk">
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config">
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:37 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:94:32:91"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <target dev="tap1f0af7f0-37"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/console.log" append="off"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:37 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:37 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:37 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:37 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:37 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.758 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Preparing to wait for external event network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.759 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.759 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.759 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.760 2 DEBUG nova.virt.libvirt.vif [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-970906339',display_name='tempest-DeleteServersTestJSON-server-970906339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-970906339',id=39,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-6xmxko0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSO
N-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:32Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.760 2 DEBUG nova.network.os_vif_util [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.761 2 DEBUG nova.network.os_vif_util [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.761 2 DEBUG os_vif [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f0af7f0-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f0af7f0-37, col_values=(('external_ids', {'iface-id': '1f0af7f0-3788-48b2-99ec-5716374a274b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:32:91', 'vm-uuid': '7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:37 compute-0 NetworkManager[44949]: <info>  [1759846297.7739] manager: (tap1f0af7f0-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.782 2 INFO os_vif [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37')
Oct 07 14:11:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 07 14:11:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.869 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.869 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.869 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:94:32:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.870 2 INFO nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Using config drive
Oct 07 14:11:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1154006124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1025786694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:37 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.905 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:37 compute-0 nova_compute[259550]: 2025-10-07 14:11:37.939 2 DEBUG nova.storage.rbd_utils [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] cloning vms/34abf10d-dbb0-41fb-abde-a52be331cc12_disk@2f1886ae1e514abc99f5e289ba92aecd to images/c2caf5ee-6fb2-4df8-bdb7-03568b8f7f55 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.055 2 DEBUG nova.storage.rbd_utils [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] flattening images/c2caf5ee-6fb2-4df8-bdb7-03568b8f7f55 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:11:38 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.472 2 DEBUG nova.storage.rbd_utils [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] removing snapshot(2f1886ae1e514abc99f5e289ba92aecd) on rbd image(34abf10d-dbb0-41fb-abde-a52be331cc12_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.631 2 DEBUG nova.network.neutron [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Updated VIF entry in instance network info cache for port 1f0af7f0-3788-48b2-99ec-5716374a274b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.632 2 DEBUG nova.network.neutron [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Updating instance_info_cache with network_info: [{"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.651 2 DEBUG oslo_concurrency.lockutils [req-142e2b0d-5c7f-415c-b0bf-0bcd7122932f req-e943abe5-1ea2-4eb2-804f-c56594577066 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 07 14:11:38 compute-0 ceph-mon[74295]: pgmap v1393: 305 pgs: 305 active+clean; 397 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 202 op/s
Oct 07 14:11:38 compute-0 ceph-mon[74295]: osdmap e196: 3 total, 3 up, 3 in
Oct 07 14:11:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 07 14:11:38 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.893 2 INFO nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Creating config drive at /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.899 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqhguuzw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:38 compute-0 nova_compute[259550]: 2025-10-07 14:11:38.948 2 DEBUG nova.storage.rbd_utils [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] creating snapshot(snap) on rbd image(c2caf5ee-6fb2-4df8-bdb7-03568b8f7f55) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.048 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqhguuzw" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.072 2 DEBUG nova.storage.rbd_utils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.077 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.274 2 DEBUG oslo_concurrency.processutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.275 2 INFO nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Deleting local config drive /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77/disk.config because it was imported into RBD.
Oct 07 14:11:39 compute-0 kernel: tap1f0af7f0-37: entered promiscuous mode
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.3371] manager: (tap1f0af7f0-37): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Oct 07 14:11:39 compute-0 ovn_controller[151684]: 2025-10-07T14:11:39Z|00257|binding|INFO|Claiming lport 1f0af7f0-3788-48b2-99ec-5716374a274b for this chassis.
Oct 07 14:11:39 compute-0 ovn_controller[151684]: 2025-10-07T14:11:39Z|00258|binding|INFO|1f0af7f0-3788-48b2-99ec-5716374a274b: Claiming fa:16:3e:94:32:91 10.100.0.13
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.358 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:32:91 10.100.0.13'], port_security=['fa:16:3e:94:32:91 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1f0af7f0-3788-48b2-99ec-5716374a274b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.359 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1f0af7f0-3788-48b2-99ec-5716374a274b in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.360 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.375 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb15ce3-2c38-428d-8ba2-2db29886a577]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.375 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:11:39 compute-0 systemd-udevd[305107]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.378 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.378 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5f40df-3e29-43fc-ac75-c627bf79cbc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[923b2641-7ffd-47c4-a0c8-b02e5ff2193e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 systemd-machined[214580]: New machine qemu-43-instance-00000027.
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.395 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[27f4619a-5092-4de6-86f0-14466e488f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.3984] device (tap1f0af7f0-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.3995] device (tap1f0af7f0-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:11:39 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcdc367-ba61-4846-8e75-5158675da6fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_controller[151684]: 2025-10-07T14:11:39Z|00259|binding|INFO|Setting lport 1f0af7f0-3788-48b2-99ec-5716374a274b ovn-installed in OVS
Oct 07 14:11:39 compute-0 ovn_controller[151684]: 2025-10-07T14:11:39Z|00260|binding|INFO|Setting lport 1f0af7f0-3788-48b2-99ec-5716374a274b up in Southbound
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.532 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[04c92518-98cd-41ff-afcc-eb160b831deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.5423] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/128)
Oct 07 14:11:39 compute-0 systemd-udevd[305111]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.540 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a52a81-56a0-4771-b2df-c6d498a5f5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.579 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[06f015da-889b-4a0c-a3d3-7ce02d4d585d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.583 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83da84c0-e012-4194-b5fd-eb62760b9385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.6083] device (tap8accac57-a0): carrier: link connected
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.615 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8e368bd4-10b4-491f-9100-ebf4dd5466b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.634 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88f9f65b-ed3e-4268-a7b9-d941f1fa227f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688317, 'reachable_time': 31977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305140, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 468 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 275 op/s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbbf39c-91c2-44fe-9be8-f3ee7d039487]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688317, 'tstamp': 688317}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305141, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[abaed69f-76fc-4224-b6e2-220097cf2adb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688317, 'reachable_time': 31977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305142, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b18f0c4-e888-4e9e-86c4-3dce406b92b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a29779c4-94d8-498d-8e9e-acd232066863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.775 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 kernel: tap8accac57-a0: entered promiscuous mode
Oct 07 14:11:39 compute-0 NetworkManager[44949]: <info>  [1759846299.7787] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.781 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 ovn_controller[151684]: 2025-10-07T14:11:39Z|00261|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.784 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9611a72-376e-4e2f-b25a-7a5b80de9b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.787 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:11:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:39.788 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:11:39 compute-0 nova_compute[259550]: 2025-10-07 14:11:39.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 07 14:11:39 compute-0 ceph-mon[74295]: osdmap e197: 3 total, 3 up, 3 in
Oct 07 14:11:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 07 14:11:39 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.124 2 DEBUG nova.compute.manager [req-69ef3758-51ff-482f-b2f3-94fd07443ee0 req-cd69cc50-f6d0-4a2d-80fc-93a2f638e695 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Received event network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.124 2 DEBUG oslo_concurrency.lockutils [req-69ef3758-51ff-482f-b2f3-94fd07443ee0 req-cd69cc50-f6d0-4a2d-80fc-93a2f638e695 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.125 2 DEBUG oslo_concurrency.lockutils [req-69ef3758-51ff-482f-b2f3-94fd07443ee0 req-cd69cc50-f6d0-4a2d-80fc-93a2f638e695 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.125 2 DEBUG oslo_concurrency.lockutils [req-69ef3758-51ff-482f-b2f3-94fd07443ee0 req-cd69cc50-f6d0-4a2d-80fc-93a2f638e695 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.125 2 DEBUG nova.compute.manager [req-69ef3758-51ff-482f-b2f3-94fd07443ee0 req-cd69cc50-f6d0-4a2d-80fc-93a2f638e695 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Processing event network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:11:40 compute-0 podman[305214]: 2025-10-07 14:11:40.199188727 +0000 UTC m=+0.054872683 container create c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:11:40 compute-0 systemd[1]: Started libpod-conmon-c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b.scope.
Oct 07 14:11:40 compute-0 podman[305214]: 2025-10-07 14:11:40.170543685 +0000 UTC m=+0.026227661 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:11:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b352c5672a91cc041d64f7577ae7e356917d0325d764767cbac27b342aabf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:40 compute-0 podman[305214]: 2025-10-07 14:11:40.31721201 +0000 UTC m=+0.172895986 container init c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:11:40 compute-0 podman[305214]: 2025-10-07 14:11:40.322849248 +0000 UTC m=+0.178533204 container start c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:11:40 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [NOTICE]   (305233) : New worker (305235) forked
Oct 07 14:11:40 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [NOTICE]   (305233) : Loading success.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.380 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846300.3797336, 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.381 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] VM Started (Lifecycle Event)
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.384 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.388 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.392 2 INFO nova.virt.libvirt.driver [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Instance spawned successfully.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.393 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.398 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.403 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.410 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.410 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.411 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.411 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.411 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.412 2 DEBUG nova.virt.libvirt.driver [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.417 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.418 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846300.3812075, 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.418 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] VM Paused (Lifecycle Event)
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.435 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.437 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846300.3874912, 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.438 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] VM Resumed (Lifecycle Event)
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.463 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.467 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.475 2 INFO nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Took 8.13 seconds to spawn the instance on the hypervisor.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.475 2 DEBUG nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.483 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.529 2 INFO nova.compute.manager [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Took 9.07 seconds to build instance.
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.546 2 DEBUG oslo_concurrency.lockutils [None req-8cd4406c-88be-4a8b-affd-3803db5b28b6 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.546 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.577 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:40 compute-0 ceph-mon[74295]: pgmap v1396: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 468 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 275 op/s
Oct 07 14:11:40 compute-0 ceph-mon[74295]: osdmap e198: 3 total, 3 up, 3 in
Oct 07 14:11:40 compute-0 nova_compute[259550]: 2025-10-07 14:11:40.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:41 compute-0 nova_compute[259550]: 2025-10-07 14:11:41.094 2 INFO nova.virt.libvirt.driver [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Snapshot image upload complete
Oct 07 14:11:41 compute-0 nova_compute[259550]: 2025-10-07 14:11:41.095 2 INFO nova.compute.manager [None req-12ecb8da-03bb-42ab-8043-b88ff2c36261 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 4.28 seconds to snapshot the instance on the hypervisor.
Oct 07 14:11:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 476 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.3 MiB/s wr, 192 op/s
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.248 2 DEBUG nova.compute.manager [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Received event network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.249 2 DEBUG oslo_concurrency.lockutils [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.249 2 DEBUG oslo_concurrency.lockutils [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.249 2 DEBUG oslo_concurrency.lockutils [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.249 2 DEBUG nova.compute.manager [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] No waiting events found dispatching network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.250 2 WARNING nova.compute.manager [req-28e4b626-4d3e-4326-9bab-fd5af0735602 req-dbab56fa-0b34-41e8-adc8-2507c84e816c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Received unexpected event network-vif-plugged-1f0af7f0-3788-48b2-99ec-5716374a274b for instance with vm_state active and task_state None.
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.516 2 INFO nova.compute.manager [None req-4d5aab6f-d616-46a3-96d3-a1c415026f42 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Pausing
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.518 2 DEBUG nova.objects.instance [None req-4d5aab6f-d616-46a3-96d3-a1c415026f42 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'flavor' on Instance uuid 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.561 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846302.5616026, 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.562 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] VM Paused (Lifecycle Event)
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.564 2 DEBUG nova.compute.manager [None req-4d5aab6f-d616-46a3-96d3-a1c415026f42 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.709 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.712 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:11:42 compute-0 nova_compute[259550]: 2025-10-07 14:11:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:42 compute-0 ceph-mon[74295]: pgmap v1398: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 476 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.3 MiB/s wr, 192 op/s
Oct 07 14:11:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 476 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 179 op/s
Oct 07 14:11:44 compute-0 ceph-mon[74295]: pgmap v1399: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 476 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 179 op/s
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.952 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.954 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.954 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.955 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.955 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.956 2 INFO nova.compute.manager [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Terminating instance
Oct 07 14:11:44 compute-0 nova_compute[259550]: 2025-10-07 14:11:44.957 2 DEBUG nova.compute.manager [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:11:44 compute-0 kernel: tap1f0af7f0-37 (unregistering): left promiscuous mode
Oct 07 14:11:45 compute-0 NetworkManager[44949]: <info>  [1759846305.0025] device (tap1f0af7f0-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:11:45 compute-0 ovn_controller[151684]: 2025-10-07T14:11:45Z|00262|binding|INFO|Releasing lport 1f0af7f0-3788-48b2-99ec-5716374a274b from this chassis (sb_readonly=0)
Oct 07 14:11:45 compute-0 ovn_controller[151684]: 2025-10-07T14:11:45Z|00263|binding|INFO|Setting lport 1f0af7f0-3788-48b2-99ec-5716374a274b down in Southbound
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 ovn_controller[151684]: 2025-10-07T14:11:45Z|00264|binding|INFO|Removing iface tap1f0af7f0-37 ovn-installed in OVS
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.022 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:32:91 10.100.0.13'], port_security=['fa:16:3e:94:32:91 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1f0af7f0-3788-48b2-99ec-5716374a274b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.024 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1f0af7f0-3788-48b2-99ec-5716374a274b in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.025 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6744df43-0152-4d53-862e-d732de20c84a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.027 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 07 14:11:45 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 3.107s CPU time.
Oct 07 14:11:45 compute-0 systemd-machined[214580]: Machine qemu-43-instance-00000027 terminated.
Oct 07 14:11:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [NOTICE]   (305233) : haproxy version is 2.8.14-c23fe91
Oct 07 14:11:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [NOTICE]   (305233) : path to executable is /usr/sbin/haproxy
Oct 07 14:11:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [WARNING]  (305233) : Exiting Master process...
Oct 07 14:11:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [ALERT]    (305233) : Current worker (305235) exited with code 143 (Terminated)
Oct 07 14:11:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[305229]: [WARNING]  (305233) : All workers exited. Exiting... (0)
Oct 07 14:11:45 compute-0 systemd[1]: libpod-c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b.scope: Deactivated successfully.
Oct 07 14:11:45 compute-0 conmon[305229]: conmon c395f7b476014f717a3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b.scope/container/memory.events
Oct 07 14:11:45 compute-0 podman[305269]: 2025-10-07 14:11:45.184703705 +0000 UTC m=+0.048287161 container died c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.202 2 INFO nova.virt.libvirt.driver [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Instance destroyed successfully.
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.204 2 DEBUG nova.objects.instance [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.230 2 DEBUG nova.virt.libvirt.vif [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-970906339',display_name='tempest-DeleteServersTestJSON-server-970906339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-970906339',id=39,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:11:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-6xmxko0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:11:42Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.231 2 DEBUG nova.network.os_vif_util [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "1f0af7f0-3788-48b2-99ec-5716374a274b", "address": "fa:16:3e:94:32:91", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f0af7f0-37", "ovs_interfaceid": "1f0af7f0-3788-48b2-99ec-5716374a274b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.232 2 DEBUG nova.network.os_vif_util [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.232 2 DEBUG os_vif [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8b352c5672a91cc041d64f7577ae7e356917d0325d764767cbac27b342aabf-merged.mount: Deactivated successfully.
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f0af7f0-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.240 2 INFO os_vif [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:32:91,bridge_name='br-int',has_traffic_filtering=True,id=1f0af7f0-3788-48b2-99ec-5716374a274b,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f0af7f0-37')
Oct 07 14:11:45 compute-0 podman[305269]: 2025-10-07 14:11:45.243507961 +0000 UTC m=+0.107091397 container cleanup c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:11:45 compute-0 systemd[1]: libpod-conmon-c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b.scope: Deactivated successfully.
Oct 07 14:11:45 compute-0 podman[305317]: 2025-10-07 14:11:45.324098409 +0000 UTC m=+0.052720216 container remove c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.332 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c53bf833-1981-4fd2-94ca-83c4925e586a]: (4, ('Tue Oct  7 02:11:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b)\nc395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b\nTue Oct  7 02:11:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (c395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b)\nc395f7b476014f717a3f817350bc4af241af04b82a24b98a99fcaba65a40385b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.335 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[caeee20d-5d46-4ca0-81b9-284a346500b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.336 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:45 compute-0 kernel: tap8accac57-a0: left promiscuous mode
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.362 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1a58e2-41d6-42f6-acf5-fb100ef6f114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.386 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bae2290-f1de-4d03-911c-be617066f774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.388 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02510a40-919b-4d79-a4f8-6d97c1c0720d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6204860-d7bf-48c7-8508-9edfde606827]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688309, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305343, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.411 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:11:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:45.412 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bb37ea-033f-4fff-a07a-a97c9af4631c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct 07 14:11:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 460 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.0 MiB/s wr, 274 op/s
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.673 2 INFO nova.virt.libvirt.driver [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Deleting instance files /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_del
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.674 2 INFO nova.virt.libvirt.driver [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Deletion of /var/lib/nova/instances/7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77_del complete
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.732 2 INFO nova.compute.manager [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.733 2 DEBUG oslo.service.loopingcall [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.733 2 DEBUG nova.compute.manager [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.733 2 DEBUG nova.network.neutron [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:11:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:45 compute-0 nova_compute[259550]: 2025-10-07 14:11:45.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:45 compute-0 ceph-mon[74295]: pgmap v1400: 305 pgs: 305 active+clean; 460 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.0 MiB/s wr, 274 op/s
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.336 2 DEBUG nova.network.neutron [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.375 2 INFO nova.compute.manager [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Took 0.64 seconds to deallocate network for instance.
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.427 2 DEBUG nova.compute.manager [req-3a04b198-dac1-4f9e-8439-9306b2330ca8 req-1bba83bc-3451-44c8-9c97-7cbd8a62eeb2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Received event network-vif-deleted-1f0af7f0-3788-48b2-99ec-5716374a274b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.429 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.429 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.528 2 DEBUG oslo_concurrency.processutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 07 14:11:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 07 14:11:46 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 07 14:11:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283491772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.977 2 DEBUG oslo_concurrency.processutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:46 compute-0 nova_compute[259550]: 2025-10-07 14:11:46.988 2 DEBUG nova.compute.provider_tree [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:47 compute-0 nova_compute[259550]: 2025-10-07 14:11:47.061 2 DEBUG nova.scheduler.client.report [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:47 compute-0 nova_compute[259550]: 2025-10-07 14:11:47.098 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:47 compute-0 nova_compute[259550]: 2025-10-07 14:11:47.186 2 INFO nova.scheduler.client.report [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77
Oct 07 14:11:47 compute-0 nova_compute[259550]: 2025-10-07 14:11:47.322 2 DEBUG oslo_concurrency.lockutils [None req-bf197002-5171-4f4e-a8c9-f48d221b780d a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 460 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 500 KiB/s wr, 212 op/s
Oct 07 14:11:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 07 14:11:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 07 14:11:47 compute-0 ceph-mon[74295]: osdmap e199: 3 total, 3 up, 3 in
Oct 07 14:11:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4283491772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:47 compute-0 ceph-mon[74295]: pgmap v1402: 305 pgs: 305 active+clean; 460 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 500 KiB/s wr, 212 op/s
Oct 07 14:11:47 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 07 14:11:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 07 14:11:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 07 14:11:48 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 07 14:11:48 compute-0 ceph-mon[74295]: osdmap e200: 3 total, 3 up, 3 in
Oct 07 14:11:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 315 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 291 op/s
Oct 07 14:11:49 compute-0 ceph-mon[74295]: osdmap e201: 3 total, 3 up, 3 in
Oct 07 14:11:49 compute-0 ceph-mon[74295]: pgmap v1405: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 315 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 291 op/s
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.981 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.982 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.983 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.983 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.983 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.984 2 INFO nova.compute.manager [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Terminating instance
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.985 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "refresh_cache-cf0c1f82-7b3b-4b62-a766-c12de66a966a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.985 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquired lock "refresh_cache-cf0c1f82-7b3b-4b62-a766-c12de66a966a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:49 compute-0 nova_compute[259550]: 2025-10-07 14:11:49.985 2 DEBUG nova.network.neutron [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.328 2 DEBUG nova.network.neutron [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.466 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.467 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.481 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.558 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.559 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.564 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.564 2 INFO nova.compute.claims [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.567 2 DEBUG nova.network.neutron [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.588 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Releasing lock "refresh_cache-cf0c1f82-7b3b-4b62-a766-c12de66a966a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.589 2 DEBUG nova.compute.manager [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:11:50 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 07 14:11:50 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 13.878s CPU time.
Oct 07 14:11:50 compute-0 systemd-machined[214580]: Machine qemu-42-instance-00000026 terminated.
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.726 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.813 2 INFO nova.virt.libvirt.driver [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance destroyed successfully.
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.814 2 DEBUG nova.objects.instance [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'resources' on Instance uuid cf0c1f82-7b3b-4b62-a766-c12de66a966a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 07 14:11:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 07 14:11:50 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 07 14:11:50 compute-0 nova_compute[259550]: 2025-10-07 14:11:50.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:51 compute-0 sudo[305409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:51 compute-0 sudo[305409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:51 compute-0 sudo[305409]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938596232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.207 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.214 2 DEBUG nova.compute.provider_tree [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:51 compute-0 sudo[305434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:11:51 compute-0 sudo[305434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:51 compute-0 sudo[305434]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.229 2 DEBUG nova.scheduler.client.report [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.250 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.251 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:11:51 compute-0 sudo[305461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:51 compute-0 sudo[305461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:51 compute-0 sudo[305461]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.292 2 INFO nova.virt.libvirt.driver [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Deleting instance files /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a_del
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.294 2 INFO nova.virt.libvirt.driver [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Deletion of /var/lib/nova/instances/cf0c1f82-7b3b-4b62-a766-c12de66a966a_del complete
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.300 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.301 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.325 2 INFO nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.341 2 INFO nova.compute.manager [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.342 2 DEBUG oslo.service.loopingcall [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.342 2 DEBUG nova.compute.manager [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.343 2 DEBUG nova.network.neutron [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:11:51 compute-0 sudo[305486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:11:51 compute-0 sudo[305486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.350 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.424 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.426 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.426 2 INFO nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating image(s)
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.447 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.474 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.496 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.499 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.572 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.574 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.575 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.575 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.597 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.601 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 259 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 11 KiB/s wr, 191 op/s
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.789 2 DEBUG nova.network.neutron [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.815 2 DEBUG nova.network.neutron [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.835 2 INFO nova.compute.manager [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Took 0.49 seconds to deallocate network for instance.
Oct 07 14:11:51 compute-0 sudo[305486]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.852 2 DEBUG nova.policy [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf568df6a8d461a83d287493b393589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de6794f6448744329cf2081eb5b889a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.885 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.886 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:51 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7e7450a4-4d39-4c68-b193-459e943dba15 does not exist
Oct 07 14:11:51 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5d0854ff-2679-441a-93b6-ec9d686eca79 does not exist
Oct 07 14:11:51 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9ebb3e94-507b-4aa7-a165-fe219edca8af does not exist
Oct 07 14:11:51 compute-0 ceph-mon[74295]: osdmap e202: 3 total, 3 up, 3 in
Oct 07 14:11:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1938596232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:11:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:11:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:11:51 compute-0 nova_compute[259550]: 2025-10-07 14:11:51.928 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:51 compute-0 sudo[305636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:51 compute-0 sudo[305636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:51 compute-0 sudo[305636]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.025 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:11:52 compute-0 sudo[305686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:11:52 compute-0 sudo[305686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:52 compute-0 sudo[305686]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.064 2 DEBUG oslo_concurrency.processutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:52 compute-0 sudo[305740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.119 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:52 compute-0 sudo[305740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.120 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:52 compute-0 sudo[305740]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.177 2 DEBUG nova.objects.instance [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:52 compute-0 sudo[305766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:11:52 compute-0 sudo[305766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.288 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.292 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.293 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Ensure instance console log exists: /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.294 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.294 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.294 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.371 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:52 compute-0 podman[305867]: 2025-10-07 14:11:52.524734353 +0000 UTC m=+0.040659400 container create 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:11:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209272733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.559 2 DEBUG oslo_concurrency.processutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:52 compute-0 systemd[1]: Started libpod-conmon-5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515.scope.
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.568 2 DEBUG nova.compute.provider_tree [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.583 2 DEBUG nova.scheduler.client.report [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.600 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:52 compute-0 podman[305867]: 2025-10-07 14:11:52.505812266 +0000 UTC m=+0.021737353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.603 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.611 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.611 2 INFO nova.compute.claims [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:11:52 compute-0 podman[305867]: 2025-10-07 14:11:52.614649186 +0000 UTC m=+0.130574273 container init 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.622 2 INFO nova.scheduler.client.report [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Deleted allocations for instance cf0c1f82-7b3b-4b62-a766-c12de66a966a
Oct 07 14:11:52 compute-0 podman[305867]: 2025-10-07 14:11:52.624780603 +0000 UTC m=+0.140705660 container start 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:11:52 compute-0 podman[305867]: 2025-10-07 14:11:52.628547091 +0000 UTC m=+0.144472168 container attach 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:11:52 compute-0 laughing_hoover[305886]: 167 167
Oct 07 14:11:52 compute-0 systemd[1]: libpod-5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515.scope: Deactivated successfully.
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:11:52 compute-0 podman[305891]: 2025-10-07 14:11:52.682712075 +0000 UTC m=+0.028988693 container died 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.698 2 DEBUG oslo_concurrency.lockutils [None req-01a9f8ef-cd8e-497b-bf9a-52797a92c056 e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "cf0c1f82-7b3b-4b62-a766-c12de66a966a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f2ef4f7a200cd6973d55840c994a17a436bc99070b2bd05ba225129cd3c82c-merged.mount: Deactivated successfully.
Oct 07 14:11:52 compute-0 podman[305891]: 2025-10-07 14:11:52.723449106 +0000 UTC m=+0.069725714 container remove 5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:11:52 compute-0 systemd[1]: libpod-conmon-5560e8a982227761cfc6e53703558a85d827ed446352d8d20aa9266e16df8515.scope: Deactivated successfully.
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.745 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Successfully created port: 51e23698-81ea-413b-922d-5c9081ccd2ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:11:52 compute-0 nova_compute[259550]: 2025-10-07 14:11:52.751 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:52 compute-0 ceph-mon[74295]: pgmap v1407: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 259 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 11 KiB/s wr, 191 op/s
Oct 07 14:11:52 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:11:52 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:11:52 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:11:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1209272733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:52 compute-0 podman[305913]: 2025-10-07 14:11:52.914581301 +0000 UTC m=+0.051355302 container create 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:11:52 compute-0 systemd[1]: Started libpod-conmon-9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e.scope.
Oct 07 14:11:52 compute-0 podman[305913]: 2025-10-07 14:11:52.890262701 +0000 UTC m=+0.027036782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:53 compute-0 podman[305913]: 2025-10-07 14:11:53.015319849 +0000 UTC m=+0.152093870 container init 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:11:53 compute-0 podman[305913]: 2025-10-07 14:11:53.027604341 +0000 UTC m=+0.164378372 container start 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:11:53 compute-0 podman[305913]: 2025-10-07 14:11:53.03174131 +0000 UTC m=+0.168515421 container attach 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:11:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359051045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.229 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.237 2 DEBUG nova.compute.provider_tree [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.254 2 DEBUG nova.scheduler.client.report [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.275 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.276 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.320 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.321 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.344 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.367 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.447 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.448 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.449 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Creating image(s)
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.473 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.500 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.525 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.529 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.570 2 DEBUG nova.policy [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.576 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Successfully updated port: 51e23698-81ea-413b-922d-5c9081ccd2ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.611 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.611 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquired lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.611 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.614 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.615 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "34abf10d-dbb0-41fb-abde-a52be331cc12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.615 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.615 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "34abf10d-dbb0-41fb-abde-a52be331cc12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.615 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.616 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.617 2 INFO nova.compute.manager [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Terminating instance
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.618 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.618 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquired lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.619 2 DEBUG nova.network.neutron [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 259 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 8.5 KiB/s wr, 149 op/s
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.789 2 DEBUG nova.compute.manager [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-changed-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.789 2 DEBUG nova.compute.manager [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Refreshing instance network info cache due to event network-changed-51e23698-81ea-413b-922d-5c9081ccd2ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.790 2 DEBUG oslo_concurrency.lockutils [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.791 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.791 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.791 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.814 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:53 compute-0 nova_compute[259550]: 2025-10-07 14:11:53.818 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/359051045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:54 compute-0 sweet_galois[305948]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:11:54 compute-0 sweet_galois[305948]: --> relative data size: 1.0
Oct 07 14:11:54 compute-0 sweet_galois[305948]: --> All data devices are unavailable
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.185 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:54 compute-0 systemd[1]: libpod-9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e.scope: Deactivated successfully.
Oct 07 14:11:54 compute-0 systemd[1]: libpod-9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e.scope: Consumed 1.041s CPU time.
Oct 07 14:11:54 compute-0 podman[305913]: 2025-10-07 14:11:54.200856251 +0000 UTC m=+1.337630252 container died 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 07 14:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6b65fd10bf32537bf53ffc167554b971f834595f3eff428518115fb6e411dc-merged.mount: Deactivated successfully.
Oct 07 14:11:54 compute-0 podman[305913]: 2025-10-07 14:11:54.265327345 +0000 UTC m=+1.402101336 container remove 9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:11:54 compute-0 systemd[1]: libpod-conmon-9f4cb8a804ef6594ae43fbc2a80c2e513eb3cba2214c079fa385963c6c45776e.scope: Deactivated successfully.
Oct 07 14:11:54 compute-0 sudo[305766]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:54 compute-0 podman[306095]: 2025-10-07 14:11:54.317161478 +0000 UTC m=+0.081726349 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:11:54 compute-0 podman[306089]: 2025-10-07 14:11:54.326731609 +0000 UTC m=+0.085498758 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.330 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:11:54 compute-0 sudo[306156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:54 compute-0 sudo[306156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:54 compute-0 sudo[306156]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.447 2 DEBUG nova.objects.instance [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:54 compute-0 sudo[306199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:11:54 compute-0 sudo[306199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:54 compute-0 sudo[306199]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:54 compute-0 sudo[306242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:54 compute-0 sudo[306242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.516 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.516 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Ensure instance console log exists: /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.517 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.517 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.517 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:54 compute-0 sudo[306242]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:54 compute-0 sudo[306267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:11:54 compute-0 sudo[306267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.754 2 DEBUG nova.network.neutron [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:54 compute-0 nova_compute[259550]: 2025-10-07 14:11:54.783 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:54 compute-0 ceph-mon[74295]: pgmap v1408: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 259 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 8.5 KiB/s wr, 149 op/s
Oct 07 14:11:54 compute-0 podman[306331]: 2025-10-07 14:11:54.967099982 +0000 UTC m=+0.054553675 container create eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:11:55 compute-0 systemd[1]: Started libpod-conmon-eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633.scope.
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:54.935178463 +0000 UTC m=+0.022632176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:55.073241872 +0000 UTC m=+0.160695585 container init eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.078 2 DEBUG nova.network.neutron [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:55.083902172 +0000 UTC m=+0.171355865 container start eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:55.088148104 +0000 UTC m=+0.175601797 container attach eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:11:55 compute-0 xenodochial_pasteur[306348]: 167 167
Oct 07 14:11:55 compute-0 systemd[1]: libpod-eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633.scope: Deactivated successfully.
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:55.092394255 +0000 UTC m=+0.179847968 container died eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.099 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Releasing lock "refresh_cache-34abf10d-dbb0-41fb-abde-a52be331cc12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.102 2 DEBUG nova.compute.manager [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:11:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8791f4238537c6409e747e309f26ebd2801eee839d70cd016a950b67460b50-merged.mount: Deactivated successfully.
Oct 07 14:11:55 compute-0 podman[306331]: 2025-10-07 14:11:55.146816736 +0000 UTC m=+0.234270419 container remove eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:11:55 compute-0 systemd[1]: libpod-conmon-eef1250e0fd340e5d78d8202e0f0a33744bd4c62cd68e4cdc1d8119378a78633.scope: Deactivated successfully.
Oct 07 14:11:55 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 07 14:11:55 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 14.268s CPU time.
Oct 07 14:11:55 compute-0 systemd-machined[214580]: Machine qemu-41-instance-00000025 terminated.
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.334 2 INFO nova.virt.libvirt.driver [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance destroyed successfully.
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.335 2 DEBUG nova.objects.instance [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lazy-loading 'resources' on Instance uuid 34abf10d-dbb0-41fb-abde-a52be331cc12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:55 compute-0 podman[306373]: 2025-10-07 14:11:55.339969603 +0000 UTC m=+0.048202417 container create 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.370 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Successfully created port: 61102ef2-439c-478d-a441-fe00e54b7ff7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:11:55 compute-0 systemd[1]: Started libpod-conmon-432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431.scope.
Oct 07 14:11:55 compute-0 podman[306373]: 2025-10-07 14:11:55.318384186 +0000 UTC m=+0.026616980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5fdb498c7f677349d3e04d4ec817a9dd1a876f411594faa3b00922628e011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5fdb498c7f677349d3e04d4ec817a9dd1a876f411594faa3b00922628e011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5fdb498c7f677349d3e04d4ec817a9dd1a876f411594faa3b00922628e011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b5fdb498c7f677349d3e04d4ec817a9dd1a876f411594faa3b00922628e011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:55 compute-0 podman[306373]: 2025-10-07 14:11:55.461522639 +0000 UTC m=+0.169755443 container init 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:11:55 compute-0 podman[306373]: 2025-10-07 14:11:55.479783108 +0000 UTC m=+0.188015882 container start 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:11:55 compute-0 podman[306373]: 2025-10-07 14:11:55.486736891 +0000 UTC m=+0.194969715 container attach 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.575 2 DEBUG nova.network.neutron [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Updating instance_info_cache with network_info: [{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.599 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Releasing lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.600 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance network_info: |[{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.600 2 DEBUG oslo_concurrency.lockutils [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.601 2 DEBUG nova.network.neutron [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Refreshing network info cache for port 51e23698-81ea-413b-922d-5c9081ccd2ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.603 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start _get_guest_xml network_info=[{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.609 2 WARNING nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.619 2 DEBUG nova.virt.libvirt.host [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.620 2 DEBUG nova.virt.libvirt.host [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.624 2 DEBUG nova.virt.libvirt.host [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.625 2 DEBUG nova.virt.libvirt.host [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.625 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.625 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.626 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.626 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.626 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.626 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.626 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.627 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.627 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.627 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.627 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.627 2 DEBUG nova.virt.hardware [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.630 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 178 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 4.4 MiB/s wr, 260 op/s
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.859 2 INFO nova.virt.libvirt.driver [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Deleting instance files /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12_del
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.861 2 INFO nova.virt.libvirt.driver [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Deletion of /var/lib/nova/instances/34abf10d-dbb0-41fb-abde-a52be331cc12_del complete
Oct 07 14:11:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:11:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 07 14:11:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 07 14:11:55 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.934 2 INFO nova.compute.manager [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.935 2 DEBUG oslo.service.loopingcall [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.935 2 DEBUG nova.compute.manager [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:11:55 compute-0 nova_compute[259550]: 2025-10-07 14:11:55.936 2 DEBUG nova.network.neutron [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.039 2 DEBUG nova.network.neutron [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.061 2 DEBUG nova.network.neutron [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.082 2 INFO nova.compute.manager [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Took 0.15 seconds to deallocate network for instance.
Oct 07 14:11:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977354769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.134 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.135 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.138 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.167 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.172 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.277 2 DEBUG oslo_concurrency.processutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]: {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     "0": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "devices": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "/dev/loop3"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             ],
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_name": "ceph_lv0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_size": "21470642176",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "name": "ceph_lv0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "tags": {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.crush_device_class": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.encrypted": "0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_id": "0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.vdo": "0"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             },
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "vg_name": "ceph_vg0"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         }
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     ],
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     "1": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "devices": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "/dev/loop4"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             ],
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_name": "ceph_lv1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_size": "21470642176",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "name": "ceph_lv1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "tags": {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.crush_device_class": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.encrypted": "0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_id": "1",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.vdo": "0"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             },
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "vg_name": "ceph_vg1"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         }
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     ],
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     "2": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "devices": [
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "/dev/loop5"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             ],
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_name": "ceph_lv2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_size": "21470642176",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "name": "ceph_lv2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "tags": {
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.crush_device_class": "",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.encrypted": "0",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osd_id": "2",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:                 "ceph.vdo": "0"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             },
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "type": "block",
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:             "vg_name": "ceph_vg2"
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:         }
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]:     ]
Oct 07 14:11:56 compute-0 condescending_dubinsky[306409]: }
Oct 07 14:11:56 compute-0 systemd[1]: libpod-432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431.scope: Deactivated successfully.
Oct 07 14:11:56 compute-0 podman[306373]: 2025-10-07 14:11:56.351230405 +0000 UTC m=+1.059463189 container died 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 14:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b5fdb498c7f677349d3e04d4ec817a9dd1a876f411594faa3b00922628e011-merged.mount: Deactivated successfully.
Oct 07 14:11:56 compute-0 podman[306373]: 2025-10-07 14:11:56.418638996 +0000 UTC m=+1.126871760 container remove 432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dubinsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.453 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Successfully updated port: 61102ef2-439c-478d-a441-fe00e54b7ff7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:11:56 compute-0 systemd[1]: libpod-conmon-432937724dfcbd18c6479988873148e0a29a8dcfd3f6941545d6d2f902ec2431.scope: Deactivated successfully.
Oct 07 14:11:56 compute-0 sudo[306267]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.485 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.486 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.486 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:11:56 compute-0 sudo[306511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:56 compute-0 sudo[306511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:56 compute-0 sudo[306511]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521856292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 sudo[306536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:11:56 compute-0 sudo[306536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:56 compute-0 sudo[306536]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.638 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.641 2 DEBUG nova.virt.libvirt.vif [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:51Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.641 2 DEBUG nova.network.os_vif_util [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.642 2 DEBUG nova.network.os_vif_util [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.643 2 DEBUG nova.objects.instance [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.661 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <uuid>d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</uuid>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <name>instance-00000028</name>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1971655449</nova:name>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:55</nova:creationTime>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <nova:port uuid="51e23698-81ea-413b-922d-5c9081ccd2ad">
Oct 07 14:11:56 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="serial">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="uuid">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk">
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config">
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:56 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:5e:93:42"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <target dev="tap51e23698-81"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log" append="off"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:56 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:56 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:56 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:56 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:56 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.669 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Preparing to wait for external event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.670 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.670 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.671 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.672 2 DEBUG nova.virt.libvirt.vif [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:51Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.672 2 DEBUG nova.network.os_vif_util [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.673 2 DEBUG nova.network.os_vif_util [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.674 2 DEBUG os_vif [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:56 compute-0 sudo[306563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:56 compute-0 sudo[306563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51e23698-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51e23698-81, col_values=(('external_ids', {'iface-id': '51e23698-81ea-413b-922d-5c9081ccd2ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:93:42', 'vm-uuid': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:56 compute-0 sudo[306563]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:56 compute-0 NetworkManager[44949]: <info>  [1759846316.6861] manager: (tap51e23698-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.698 2 INFO os_vif [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.700 2 DEBUG nova.network.neutron [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Updated VIF entry in instance network info cache for port 51e23698-81ea-413b-922d-5c9081ccd2ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.700 2 DEBUG nova.network.neutron [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Updating instance_info_cache with network_info: [{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.711 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.716 2 DEBUG oslo_concurrency.lockutils [req-21d5e89d-c04f-4fed-8937-031a5f41290e req-8458b992-bc68-4c5e-93b6-d3ce5895c18e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:56 compute-0 sudo[306589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:11:56 compute-0 sudo[306589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.750 2 DEBUG nova.compute.manager [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.751 2 DEBUG nova.compute.manager [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing instance network info cache due to event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.751 2 DEBUG oslo_concurrency.lockutils [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.757 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.757 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.757 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:5e:93:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.758 2 INFO nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Using config drive
Oct 07 14:11:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:11:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221106298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.782 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.789 2 DEBUG oslo_concurrency.processutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.796 2 DEBUG nova.compute.provider_tree [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.811 2 DEBUG nova.scheduler.client.report [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.830 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.868 2 INFO nova.scheduler.client.report [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Deleted allocations for instance 34abf10d-dbb0-41fb-abde-a52be331cc12
Oct 07 14:11:56 compute-0 ceph-mon[74295]: pgmap v1409: 305 pgs: 305 active+clean; 178 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 4.4 MiB/s wr, 260 op/s
Oct 07 14:11:56 compute-0 ceph-mon[74295]: osdmap e203: 3 total, 3 up, 3 in
Oct 07 14:11:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1977354769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/521856292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1221106298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:11:56 compute-0 nova_compute[259550]: 2025-10-07 14:11:56.933 2 DEBUG oslo_concurrency.lockutils [None req-efa00f1a-47ba-4c4d-900c-8496c1709b7d e3b16cce26de4b00b3d15ac4efd418ca cf7686fa46ac42eb816008a28a8b964e - - default default] Lock "34abf10d-dbb0-41fb-abde-a52be331cc12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.106771145 +0000 UTC m=+0.044590413 container create 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:11:57 compute-0 systemd[1]: Started libpod-conmon-728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207.scope.
Oct 07 14:11:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.089011508 +0000 UTC m=+0.026830806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.199278186 +0000 UTC m=+0.137097474 container init 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.209849204 +0000 UTC m=+0.147668472 container start 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.21387724 +0000 UTC m=+0.151696538 container attach 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:11:57 compute-0 happy_mclaren[306691]: 167 167
Oct 07 14:11:57 compute-0 systemd[1]: libpod-728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207.scope: Deactivated successfully.
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.218909262 +0000 UTC m=+0.156728540 container died 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-000702c789c0fd3b0efaa15a96a37008725eafe24fc71d4a157d9d2de6e7f394-merged.mount: Deactivated successfully.
Oct 07 14:11:57 compute-0 podman[306675]: 2025-10-07 14:11:57.260965238 +0000 UTC m=+0.198784506 container remove 728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:11:57 compute-0 systemd[1]: libpod-conmon-728552328ae81d3c10c3f5b4555cab4af49e0c962675bb89012c0f4ce3154207.scope: Deactivated successfully.
Oct 07 14:11:57 compute-0 podman[306714]: 2025-10-07 14:11:57.445143999 +0000 UTC m=+0.052639845 container create 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:11:57 compute-0 systemd[1]: Started libpod-conmon-58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520.scope.
Oct 07 14:11:57 compute-0 podman[306714]: 2025-10-07 14:11:57.419132165 +0000 UTC m=+0.026628031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:11:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865cc1966d803cda19909c196e0219a73d3ebba0cafe5f270ab139e2cdf36604/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865cc1966d803cda19909c196e0219a73d3ebba0cafe5f270ab139e2cdf36604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865cc1966d803cda19909c196e0219a73d3ebba0cafe5f270ab139e2cdf36604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865cc1966d803cda19909c196e0219a73d3ebba0cafe5f270ab139e2cdf36604/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:57 compute-0 podman[306714]: 2025-10-07 14:11:57.554543665 +0000 UTC m=+0.162039521 container init 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:11:57 compute-0 podman[306714]: 2025-10-07 14:11:57.566052197 +0000 UTC m=+0.173548053 container start 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:11:57 compute-0 podman[306714]: 2025-10-07 14:11:57.57376842 +0000 UTC m=+0.181264476 container attach 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:11:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 178 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 4.2 MiB/s wr, 163 op/s
Oct 07 14:11:57 compute-0 nova_compute[259550]: 2025-10-07 14:11:57.795 2 INFO nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating config drive at /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config
Oct 07 14:11:57 compute-0 nova_compute[259550]: 2025-10-07 14:11:57.802 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt1ymjqj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:57 compute-0 nova_compute[259550]: 2025-10-07 14:11:57.953 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt1ymjqj" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:57 compute-0 nova_compute[259550]: 2025-10-07 14:11:57.986 2 DEBUG nova.storage.rbd_utils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:57 compute-0 nova_compute[259550]: 2025-10-07 14:11:57.990 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.175 2 DEBUG oslo_concurrency.processutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.176 2 INFO nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting local config drive /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config because it was imported into RBD.
Oct 07 14:11:58 compute-0 kernel: tap51e23698-81: entered promiscuous mode
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.2548] manager: (tap51e23698-81): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct 07 14:11:58 compute-0 ovn_controller[151684]: 2025-10-07T14:11:58Z|00265|binding|INFO|Claiming lport 51e23698-81ea-413b-922d-5c9081ccd2ad for this chassis.
Oct 07 14:11:58 compute-0 ovn_controller[151684]: 2025-10-07T14:11:58Z|00266|binding|INFO|51e23698-81ea-413b-922d-5c9081ccd2ad: Claiming fa:16:3e:5e:93:42 10.100.0.13
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.321 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:11:58 compute-0 systemd-udevd[306790]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:58 compute-0 systemd-machined[214580]: New machine qemu-44-instance-00000028.
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.341 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7517491-142a-4daa-a38d-e347519a38ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.342 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.3441] device (tap51e23698-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.3450] device (tap51e23698-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.346 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.347 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79ee5577-dd43-4c11-9e98-bada02f4101e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.348 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed14be69-0adc-498c-985c-4bfeec127dc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.361 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[645fa8ed-c855-4e09-bc5b-3254cdd7f06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 ovn_controller[151684]: 2025-10-07T14:11:58Z|00267|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad ovn-installed in OVS
Oct 07 14:11:58 compute-0 ovn_controller[151684]: 2025-10-07T14:11:58Z|00268|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad up in Southbound
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de0711d1-4aa4-4e42-b31f-6887b5a60890]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.426 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb2c1eb-583e-492d-bbd6-ed56af82f836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.4335] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Oct 07 14:11:58 compute-0 systemd-udevd[306796]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.432 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de807d1a-7b9a-4310-a4b0-d0d72996cb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.474 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1a5d1f-40f2-47cc-84c1-4f7a7f09248f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.479 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d70f6a49-8ae8-47c0-aa59-845764f3efff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.5044] device (tapd2cb8ca0-10): carrier: link connected
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[51ba8d36-c77e-4343-a649-9ecaa6f3d88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.528 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcdbdd9-78ac-4f84-a2ee-d47062a29fee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690206, 'reachable_time': 40259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306841, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[452afef6-8ae3-4d87-8099-159993ffe697]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690206, 'tstamp': 690206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306843, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.565 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca296f01-6ee1-4242-9d65-3c6c5d434d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690206, 'reachable_time': 40259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306846, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.595 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68ca06f7-bd98-4c65-90a8-839ae9cbd83f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]: {
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_id": 2,
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "type": "bluestore"
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     },
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_id": 1,
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "type": "bluestore"
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     },
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_id": 0,
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:         "type": "bluestore"
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]:     }
Oct 07 14:11:58 compute-0 elastic_goldwasser[306731]: }
Oct 07 14:11:58 compute-0 systemd[1]: libpod-58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520.scope: Deactivated successfully.
Oct 07 14:11:58 compute-0 systemd[1]: libpod-58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520.scope: Consumed 1.024s CPU time.
Oct 07 14:11:58 compute-0 podman[306714]: 2025-10-07 14:11:58.66028767 +0000 UTC m=+1.267783546 container died 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7678b95-390b-4cf6-bddf-5a98f4789271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.664 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.664 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.664 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:58 compute-0 NetworkManager[44949]: <info>  [1759846318.6671] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.677 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 ovn_controller[151684]: 2025-10-07T14:11:58Z|00269|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.696 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.699 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5034de-911a-4674-8f18-d2d24ba28c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.705 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:11:58.706 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-865cc1966d803cda19909c196e0219a73d3ebba0cafe5f270ab139e2cdf36604-merged.mount: Deactivated successfully.
Oct 07 14:11:58 compute-0 podman[306714]: 2025-10-07 14:11:58.767251192 +0000 UTC m=+1.374747038 container remove 58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:11:58 compute-0 systemd[1]: libpod-conmon-58f29fa6d2ba028197ef63e8798076ee6e9274b7a967d25cb45655230c20c520.scope: Deactivated successfully.
Oct 07 14:11:58 compute-0 sudo[306589]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:11:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.826 2 DEBUG nova.network.neutron [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:11:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e95f08b-8de1-45be-9ecd-77d341eb010e does not exist
Oct 07 14:11:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8cf004a8-c1f1-4149-9796-6f9fe8cf4be7 does not exist
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.855 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.855 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance network_info: |[{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.856 2 DEBUG oslo_concurrency.lockutils [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.856 2 DEBUG nova.network.neutron [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.861 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Start _get_guest_xml network_info=[{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.874 2 WARNING nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.888 2 DEBUG nova.virt.libvirt.host [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.890 2 DEBUG nova.virt.libvirt.host [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.895 2 DEBUG nova.virt.libvirt.host [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.896 2 DEBUG nova.virt.libvirt.host [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.896 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.897 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.897 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.898 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.898 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.898 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.899 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.899 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.899 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.900 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.900 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.900 2 DEBUG nova.virt.hardware [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:11:58 compute-0 nova_compute[259550]: 2025-10-07 14:11:58.904 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:58 compute-0 sudo[306874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:11:58 compute-0 ceph-mon[74295]: pgmap v1411: 305 pgs: 305 active+clean; 178 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 4.2 MiB/s wr, 163 op/s
Oct 07 14:11:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:11:58 compute-0 sudo[306874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:58 compute-0 sudo[306874]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:58 compute-0 sudo[306899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:11:58 compute-0 sudo[306899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:11:58 compute-0 sudo[306899]: pam_unix(sudo:session): session closed for user root
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.056 2 DEBUG nova.compute.manager [req-42b2b814-1928-49c8-bb95-408b7905170c req-02b0e4b7-e3f0-4db2-a2a8-42a90073d299 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.058 2 DEBUG oslo_concurrency.lockutils [req-42b2b814-1928-49c8-bb95-408b7905170c req-02b0e4b7-e3f0-4db2-a2a8-42a90073d299 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.058 2 DEBUG oslo_concurrency.lockutils [req-42b2b814-1928-49c8-bb95-408b7905170c req-02b0e4b7-e3f0-4db2-a2a8-42a90073d299 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.059 2 DEBUG oslo_concurrency.lockutils [req-42b2b814-1928-49c8-bb95-408b7905170c req-02b0e4b7-e3f0-4db2-a2a8-42a90073d299 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.059 2 DEBUG nova.compute.manager [req-42b2b814-1928-49c8-bb95-408b7905170c req-02b0e4b7-e3f0-4db2-a2a8-42a90073d299 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Processing event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:11:59 compute-0 podman[306947]: 2025-10-07 14:11:59.159187444 +0000 UTC m=+0.121029513 container create abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:11:59 compute-0 podman[306947]: 2025-10-07 14:11:59.069008843 +0000 UTC m=+0.030850942 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:11:59 compute-0 systemd[1]: Started libpod-conmon-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be.scope.
Oct 07 14:11:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1832c0cf068be4050196c94b34b8807b44bf95e594cb825be8b96ba74a72be4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:11:59 compute-0 podman[306947]: 2025-10-07 14:11:59.282602758 +0000 UTC m=+0.244444867 container init abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:11:59 compute-0 podman[306947]: 2025-10-07 14:11:59.292517089 +0000 UTC m=+0.254359208 container start abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:11:59 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : New worker (306987) forked
Oct 07 14:11:59 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : Loading success.
Oct 07 14:11:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352932399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.405 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.428 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.433 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:11:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 139 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 4.9 MiB/s wr, 160 op/s
Oct 07 14:11:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:11:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117432790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.904 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.906 2 DEBUG nova.virt.libvirt.vif [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2085542773',display_name='tempest-DeleteServersTestJSON-server-2085542773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2085542773',id=41,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-2g13mdnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:53Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=9197df26-c585-4b8f-8ed6-695ee8e233a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.907 2 DEBUG nova.network.os_vif_util [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.908 2 DEBUG nova.network.os_vif_util [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.910 2 DEBUG nova.objects.instance [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:11:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2352932399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1117432790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.926 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <uuid>9197df26-c585-4b8f-8ed6-695ee8e233a8</uuid>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <name>instance-00000029</name>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersTestJSON-server-2085542773</nova:name>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:11:58</nova:creationTime>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <nova:port uuid="61102ef2-439c-478d-a441-fe00e54b7ff7">
Oct 07 14:11:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <system>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="serial">9197df26-c585-4b8f-8ed6-695ee8e233a8</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="uuid">9197df26-c585-4b8f-8ed6-695ee8e233a8</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </system>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <os>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </os>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <features>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </features>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9197df26-c585-4b8f-8ed6-695ee8e233a8_disk">
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config">
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:11:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6b:88:da"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <target dev="tap61102ef2-43"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/console.log" append="off"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <video>
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </video>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:11:59 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:11:59 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:11:59 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:11:59 compute-0 nova_compute[259550]: </domain>
Oct 07 14:11:59 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.932 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Preparing to wait for external event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.933 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.933 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.933 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.934 2 DEBUG nova.virt.libvirt.vif [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2085542773',display_name='tempest-DeleteServersTestJSON-server-2085542773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2085542773',id=41,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-2g13mdnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:11:53Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=9197df26-c585-4b8f-8ed6-695ee8e233a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.935 2 DEBUG nova.network.os_vif_util [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.935 2 DEBUG nova.network.os_vif_util [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.936 2 DEBUG os_vif [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61102ef2-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61102ef2-43, col_values=(('external_ids', {'iface-id': '61102ef2-439c-478d-a441-fe00e54b7ff7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:88:da', 'vm-uuid': '9197df26-c585-4b8f-8ed6-695ee8e233a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:11:59 compute-0 NetworkManager[44949]: <info>  [1759846319.9893] manager: (tap61102ef2-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:11:59 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:11:59.999 2 INFO os_vif [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43')
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:00.024 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:00.026 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:00.044 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.149 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.149 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.149 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:6b:88:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.150 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Using config drive
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.182 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.204 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846305.2025857, 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.205 2 INFO nova.compute.manager [-] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] VM Stopped (Lifecycle Event)
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.225 2 DEBUG nova.compute.manager [None req-e4fc1735-a45c-401a-9ae9-38067e4b8c30 - - - - - -] [instance: 7ad2bf20-35a5-4eb9-bc1c-425e5f6e5c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.735 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7352803, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.737 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Started (Lifecycle Event)
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.739 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.744 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.748 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance spawned successfully.
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.749 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.760 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.770 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.775 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.776 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.776 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.777 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.777 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.778 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.806 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.806 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7354438, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.807 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Paused (Lifecycle Event)
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.839 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.843 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7433336, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.843 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Resumed (Lifecycle Event)
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.852 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Creating config drive at /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.857 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczr9cuan execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 07 14:12:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 07 14:12:00 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.904 2 INFO nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 9.48 seconds to spawn the instance on the hypervisor.
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.905 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.906 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:00 compute-0 ceph-mon[74295]: pgmap v1412: 305 pgs: 305 active+clean; 139 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 4.9 MiB/s wr, 160 op/s
Oct 07 14:12:00 compute-0 ceph-mon[74295]: osdmap e204: 3 total, 3 up, 3 in
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:00 compute-0 nova_compute[259550]: 2025-10-07 14:12:00.960 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.000 2 INFO nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 10.46 seconds to build instance.
Oct 07 14:12:01 compute-0 rsyslogd[1004]: imjournal: 2832 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.004 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczr9cuan" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.038 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.044 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.084 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.277 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.278 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deleting local config drive /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config because it was imported into RBD.
Oct 07 14:12:01 compute-0 kernel: tap61102ef2-43: entered promiscuous mode
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.3675] manager: (tap61102ef2-43): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct 07 14:12:01 compute-0 ovn_controller[151684]: 2025-10-07T14:12:01Z|00270|binding|INFO|Claiming lport 61102ef2-439c-478d-a441-fe00e54b7ff7 for this chassis.
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_controller[151684]: 2025-10-07T14:12:01Z|00271|binding|INFO|61102ef2-439c-478d-a441-fe00e54b7ff7: Claiming fa:16:3e:6b:88:da 10.100.0.3
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.377 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:88:da 10.100.0.3'], port_security=['fa:16:3e:6b:88:da 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9197df26-c585-4b8f-8ed6-695ee8e233a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=61102ef2-439c-478d-a441-fe00e54b7ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.379 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 61102ef2-439c-478d-a441-fe00e54b7ff7 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.381 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:01 compute-0 systemd-udevd[307154]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:01 compute-0 ovn_controller[151684]: 2025-10-07T14:12:01Z|00272|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 ovn-installed in OVS
Oct 07 14:12:01 compute-0 ovn_controller[151684]: 2025-10-07T14:12:01Z|00273|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 up in Southbound
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.397 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5346226-27f0-429e-8438-393bc950b11d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.398 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.400 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52a99fbf-ca22-4544-b90e-200d92d0fbd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.403 2 DEBUG nova.network.neutron [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updated VIF entry in instance network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.403 2 DEBUG nova.network.neutron [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.402 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2fd85b-7017-459c-b6be-679061280c13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.4154] device (tap61102ef2-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.4166] device (tap61102ef2-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:12:01 compute-0 systemd-machined[214580]: New machine qemu-45-instance-00000029.
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.427 2 DEBUG oslo_concurrency.lockutils [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c35f82-0d05-41c6-b16a-48d8f3ce53c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4ca715-54a7-409f-8d59-d01c7426a289]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.481 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fc7723-ac6e-4a5f-be8e-437d011a2765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1df86b74-0c9c-4ea9-8373-54f8e5eeca6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.4885] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.525 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19867c17-bdb3-489f-b1c3-ff9646bdd1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.529 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[36a657a4-bc72-44b0-9808-67c36ba73f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.5575] device (tap8accac57-a0): carrier: link connected
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.566 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8e8db6-70b8-4505-b1d8-e94d94bc7813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.584 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f6bd02-4834-4b1e-9b34-86e94a1b80f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690512, 'reachable_time': 44706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307189, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[361785b6-d0dd-4970-adf8-4d1f171804e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690512, 'tstamp': 690512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307190, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de234898-1845-4cbb-a6d7-9103182d0b4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690512, 'reachable_time': 44706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307191, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1018653c-d832-4620-993a-4cb22669462b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.745 2 DEBUG nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.746 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.746 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.747 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.747 2 DEBUG nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.747 2 WARNING nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state None.
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.756 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e6bb88-44a4-4e24-be47-5de8290b430f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 NetworkManager[44949]: <info>  [1759846321.7611] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 07 14:12:01 compute-0 kernel: tap8accac57-a0: entered promiscuous mode
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_controller[151684]: 2025-10-07T14:12:01Z|00274|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.787 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8f29a6-800e-4202-b589-416bf7a1a26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.789 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:12:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.790 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.850 2 DEBUG nova.compute.manager [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.850 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.851 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.851 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:01 compute-0 nova_compute[259550]: 2025-10-07 14:12:01.852 2 DEBUG nova.compute.manager [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Processing event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:12:01 compute-0 ceph-mon[74295]: pgmap v1414: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 07 14:12:02 compute-0 podman[307265]: 2025-10-07 14:12:02.201431611 +0000 UTC m=+0.023172920 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.446 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.447 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.445502, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.447 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Started (Lifecycle Event)
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.452 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.458 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance spawned successfully.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.458 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:02 compute-0 podman[307265]: 2025-10-07 14:12:02.463590223 +0000 UTC m=+0.285331512 container create 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.468 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.481 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.482 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.483 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.483 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.484 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.485 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.489 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.489 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.4458292, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Paused (Lifecycle Event)
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.517 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:02 compute-0 systemd[1]: Started libpod-conmon-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.523 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.4492142, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.525 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Resumed (Lifecycle Event)
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.551 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.553 2 INFO nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Took 9.11 seconds to spawn the instance on the hypervisor.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.554 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f9a3ddea3ddccc76f392646315e89a823382f99eda108d937fed0f1f005d5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.578 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:02 compute-0 podman[307265]: 2025-10-07 14:12:02.579138279 +0000 UTC m=+0.400879588 container init 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:12:02 compute-0 podman[307265]: 2025-10-07 14:12:02.584820829 +0000 UTC m=+0.406562118 container start 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:12:02 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : New worker (307285) forked
Oct 07 14:12:02 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : Loading success.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.611 2 INFO nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Took 10.26 seconds to build instance.
Oct 07 14:12:02 compute-0 nova_compute[259550]: 2025-10-07 14:12:02.627 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 07 14:12:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 07 14:12:02 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 07 14:12:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 63 op/s
Oct 07 14:12:03 compute-0 ceph-mon[74295]: osdmap e205: 3 total, 3 up, 3 in
Oct 07 14:12:03 compute-0 ceph-mon[74295]: pgmap v1416: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 63 op/s
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.309 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.309 2 INFO nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shelving
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.331 2 DEBUG nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.346 2 DEBUG nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.346 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.348 2 WARNING nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving.
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.923 2 INFO nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Rebuilding instance
Oct 07 14:12:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 07 14:12:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 07 14:12:04 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 07 14:12:04 compute-0 nova_compute[259550]: 2025-10-07 14:12:04.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.081 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.096 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.137 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_requests' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.148 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.162 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.171 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.181 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.185 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:12:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 56 KiB/s wr, 412 op/s
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.811 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846310.8107893, cf0c1f82-7b3b-4b62-a766-c12de66a966a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.813 2 INFO nova.compute.manager [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] VM Stopped (Lifecycle Event)
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.836 2 DEBUG nova.compute.manager [None req-784f1500-68bd-4d55-93f2-77be0e9e5cc2 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:05 compute-0 nova_compute[259550]: 2025-10-07 14:12:05.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:05 compute-0 ceph-mon[74295]: osdmap e206: 3 total, 3 up, 3 in
Oct 07 14:12:05 compute-0 ceph-mon[74295]: pgmap v1418: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 56 KiB/s wr, 412 op/s
Oct 07 14:12:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:07.030 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:07 compute-0 podman[307294]: 2025-10-07 14:12:07.129211721 +0000 UTC m=+0.105566365 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 07 14:12:07 compute-0 podman[307295]: 2025-10-07 14:12:07.1348832 +0000 UTC m=+0.111505492 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:12:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 28 KiB/s wr, 337 op/s
Oct 07 14:12:08 compute-0 ceph-mon[74295]: pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 28 KiB/s wr, 337 op/s
Oct 07 14:12:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct 07 14:12:09 compute-0 nova_compute[259550]: 2025-10-07 14:12:09.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:10 compute-0 nova_compute[259550]: 2025-10-07 14:12:10.331 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846315.3309925, 34abf10d-dbb0-41fb-abde-a52be331cc12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:10 compute-0 nova_compute[259550]: 2025-10-07 14:12:10.332 2 INFO nova.compute.manager [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] VM Stopped (Lifecycle Event)
Oct 07 14:12:10 compute-0 nova_compute[259550]: 2025-10-07 14:12:10.350 2 DEBUG nova.compute.manager [None req-a36e1fd4-9130-451b-9170-b38ea665e286 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:10 compute-0 ceph-mon[74295]: pgmap v1420: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct 07 14:12:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 07 14:12:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 07 14:12:10 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 07 14:12:10 compute-0 nova_compute[259550]: 2025-10-07 14:12:10.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct 07 14:12:11 compute-0 ceph-mon[74295]: osdmap e207: 3 total, 3 up, 3 in
Oct 07 14:12:12 compute-0 ceph-mon[74295]: pgmap v1422: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct 07 14:12:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 77 op/s
Oct 07 14:12:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:12:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 4982 syncs, 3.27 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 44.80 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 3892 syncs, 2.64 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:12:13 compute-0 ceph-mon[74295]: pgmap v1423: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 77 op/s
Oct 07 14:12:14 compute-0 nova_compute[259550]: 2025-10-07 14:12:14.376 2 DEBUG nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:12:14 compute-0 ovn_controller[151684]: 2025-10-07T14:12:14Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:93:42 10.100.0.13
Oct 07 14:12:14 compute-0 ovn_controller[151684]: 2025-10-07T14:12:14Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:93:42 10.100.0.13
Oct 07 14:12:14 compute-0 nova_compute[259550]: 2025-10-07 14:12:14.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:15 compute-0 ovn_controller[151684]: 2025-10-07T14:12:15Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:88:da 10.100.0.3
Oct 07 14:12:15 compute-0 ovn_controller[151684]: 2025-10-07T14:12:15Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:88:da 10.100.0.3
Oct 07 14:12:15 compute-0 nova_compute[259550]: 2025-10-07 14:12:15.235 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:12:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct 07 14:12:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:15 compute-0 nova_compute[259550]: 2025-10-07 14:12:15.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:16 compute-0 ceph-mon[74295]: pgmap v1424: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct 07 14:12:17 compute-0 kernel: tap51e23698-81 (unregistering): left promiscuous mode
Oct 07 14:12:17 compute-0 NetworkManager[44949]: <info>  [1759846337.5139] device (tap51e23698-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00275|binding|INFO|Releasing lport 51e23698-81ea-413b-922d-5c9081ccd2ad from this chassis (sb_readonly=0)
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00276|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad down in Southbound
Oct 07 14:12:17 compute-0 nova_compute[259550]: 2025-10-07 14:12:17.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00277|binding|INFO|Removing iface tap51e23698-81 ovn-installed in OVS
Oct 07 14:12:17 compute-0 nova_compute[259550]: 2025-10-07 14:12:17.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.566 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.567 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.569 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[658c510d-3b82-4755-a9b3-d374fa30c4f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.571 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore
Oct 07 14:12:17 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 07 14:12:17 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 15.097s CPU time.
Oct 07 14:12:17 compute-0 systemd-machined[214580]: Machine qemu-44-instance-00000028 terminated.
Oct 07 14:12:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct 07 14:12:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : haproxy version is 2.8.14-c23fe91
Oct 07 14:12:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : path to executable is /usr/sbin/haproxy
Oct 07 14:12:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [WARNING]  (306985) : Exiting Master process...
Oct 07 14:12:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [ALERT]    (306985) : Current worker (306987) exited with code 143 (Terminated)
Oct 07 14:12:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [WARNING]  (306985) : All workers exited. Exiting... (0)
Oct 07 14:12:17 compute-0 systemd[1]: libpod-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be.scope: Deactivated successfully.
Oct 07 14:12:17 compute-0 podman[307362]: 2025-10-07 14:12:17.752916842 +0000 UTC m=+0.052462780 container died abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 14:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be-userdata-shm.mount: Deactivated successfully.
Oct 07 14:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1832c0cf068be4050196c94b34b8807b44bf95e594cb825be8b96ba74a72be4e-merged.mount: Deactivated successfully.
Oct 07 14:12:17 compute-0 podman[307362]: 2025-10-07 14:12:17.805511424 +0000 UTC m=+0.105057372 container cleanup abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:12:17 compute-0 systemd[1]: libpod-conmon-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be.scope: Deactivated successfully.
Oct 07 14:12:17 compute-0 podman[307402]: 2025-10-07 14:12:17.873598074 +0000 UTC m=+0.045843416 container remove abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.879 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74d041a4-75fa-4b3a-bfcc-7be1d8cbcee1]: (4, ('Tue Oct  7 02:12:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be)\nabcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be\nTue Oct  7 02:12:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be)\nabcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.881 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2d98d0-410e-4322-a2bf-f7d00d7a8a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.884 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:17 compute-0 nova_compute[259550]: 2025-10-07 14:12:17.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:17 compute-0 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct 07 14:12:17 compute-0 nova_compute[259550]: 2025-10-07 14:12:17.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.913 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91fd8d1a-f3c2-4337-a9e1-3cdfdff7debb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.939 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b6a5e0-170e-4f35-91e8-89c23bc0cf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[364b834a-c69c-4eae-a1b0-df8d7c227660]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77521962-2a58-49c4-88f3-853c5d22a036]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690198, 'reachable_time': 29804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307420, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.967 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:12:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.968 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d538371-432e-4c8d-baa5-a68a890caf9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:17 compute-0 kernel: tap61102ef2-43 (unregistering): left promiscuous mode
Oct 07 14:12:17 compute-0 NetworkManager[44949]: <info>  [1759846337.9728] device (tap61102ef2-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00278|binding|INFO|Releasing lport 61102ef2-439c-478d-a441-fe00e54b7ff7 from this chassis (sb_readonly=0)
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00279|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 down in Southbound
Oct 07 14:12:17 compute-0 ovn_controller[151684]: 2025-10-07T14:12:17Z|00280|binding|INFO|Removing iface tap61102ef2-43 ovn-installed in OVS
Oct 07 14:12:17 compute-0 nova_compute[259550]: 2025-10-07 14:12:17.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.002 2 DEBUG nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.003 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.003 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.004 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.004 2 DEBUG nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.004 2 WARNING nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state rebuilding.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.026 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:88:da 10.100.0.3'], port_security=['fa:16:3e:6b:88:da 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9197df26-c585-4b8f-8ed6-695ee8e233a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=61102ef2-439c-478d-a441-fe00e54b7ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.027 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 61102ef2-439c-478d-a441-fe00e54b7ff7 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.028 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2cad69-2a23-4eda-a217-0d7b68eaaa47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.029 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore
Oct 07 14:12:18 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 07 14:12:18 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 13.408s CPU time.
Oct 07 14:12:18 compute-0 systemd-machined[214580]: Machine qemu-45-instance-00000029 terminated.
Oct 07 14:12:18 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : haproxy version is 2.8.14-c23fe91
Oct 07 14:12:18 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : path to executable is /usr/sbin/haproxy
Oct 07 14:12:18 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [WARNING]  (307283) : Exiting Master process...
Oct 07 14:12:18 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [ALERT]    (307283) : Current worker (307285) exited with code 143 (Terminated)
Oct 07 14:12:18 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [WARNING]  (307283) : All workers exited. Exiting... (0)
Oct 07 14:12:18 compute-0 systemd[1]: libpod-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope: Deactivated successfully.
Oct 07 14:12:18 compute-0 podman[307445]: 2025-10-07 14:12:18.169241695 +0000 UTC m=+0.047753716 container died 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:12:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:12:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 74K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 6001 syncs, 3.16 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.99 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4529 syncs, 2.58 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da-userdata-shm.mount: Deactivated successfully.
Oct 07 14:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3f9a3ddea3ddccc76f392646315e89a823382f99eda108d937fed0f1f005d5d-merged.mount: Deactivated successfully.
Oct 07 14:12:18 compute-0 podman[307445]: 2025-10-07 14:12:18.209782991 +0000 UTC m=+0.088295012 container cleanup 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:12:18 compute-0 NetworkManager[44949]: <info>  [1759846338.2143] manager: (tap61102ef2-43): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct 07 14:12:18 compute-0 systemd[1]: libpod-conmon-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope: Deactivated successfully.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.252 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance shutdown successfully after 13 seconds.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.258 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.267 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.268 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},
tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:04Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.269 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.269 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.270 2 DEBUG os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51e23698-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.281 2 INFO os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')
Oct 07 14:12:18 compute-0 podman[307481]: 2025-10-07 14:12:18.295783451 +0000 UTC m=+0.054534664 container remove 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.300 2 DEBUG nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.301 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.301 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.302 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.302 2 DEBUG nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.302 2 WARNING nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving.
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.304 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73052105-442b-44fc-a302-8023a3767018]: (4, ('Tue Oct  7 02:12:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da)\n13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da\nTue Oct  7 02:12:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da)\n13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.307 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15120734-33f3-46ce-bad5-6e875aeeb210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.308 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 kernel: tap8accac57-a0: left promiscuous mode
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc6b01b-d251-423f-aec8-7e2e1b640dcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.362 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f295dcc-bf56-44cb-baca-f911567c1592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.364 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dad0d7d-b83e-4556-92c1-f8adb29868df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.379 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b841e9f-1193-49c4-a076-116d4e8db973]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690504, 'reachable_time': 43210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307523, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.381 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:12:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.381 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[abd2916c-5454-4266-ae28-5ef041f28e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.397 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance shutdown successfully after 14 seconds.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.404 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.405 2 DEBUG nova.objects.instance [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.710 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting instance files /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.711 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deletion of /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del complete
Oct 07 14:12:18 compute-0 ceph-mon[74295]: pgmap v1425: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct 07 14:12:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct 07 14:12:18 compute-0 nova_compute[259550]: 2025-10-07 14:12:18.928 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Beginning cold snapshot process
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.126 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.127 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating image(s)
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.147 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.168 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.188 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.191 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.282 2 DEBUG nova.virt.libvirt.imagebackend [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.286 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.287 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.290 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.290 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.310 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.315 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.573 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] creating snapshot(1f97b8b6de8848e0ac87f9a9349cc406) on rbd image(9197df26-c585-4b8f-8ed6-695ee8e233a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:12:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 140 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.2 MiB/s wr, 164 op/s
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.653 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.709 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:12:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 07 14:12:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 07 14:12:19 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.827 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] cloning vms/9197df26-c585-4b8f-8ed6-695ee8e233a8_disk@1f97b8b6de8848e0ac87f9a9349cc406 to images/afce8312-1adb-47e3-b920-51cdbb68b862 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.861 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.862 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Ensure instance console log exists: /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.865 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start _get_guest_xml network_info=[{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.870 2 WARNING nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.881 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.882 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.889 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.919 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:19 compute-0 nova_compute[259550]: 2025-10-07 14:12:19.967 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] flattening images/afce8312-1adb-47e3-b920-51cdbb68b862 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.104 2 DEBUG nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.105 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.107 2 WARNING nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.408 2 DEBUG nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.408 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.409 2 WARNING nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving_image_uploading.
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.422 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] removing snapshot(1f97b8b6de8848e0ac87f9a9349cc406) on rbd image(9197df26-c585-4b8f-8ed6-695ee8e233a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:12:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820408890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.446 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.472 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.477 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:20 compute-0 ceph-mon[74295]: pgmap v1426: 305 pgs: 305 active+clean; 140 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.2 MiB/s wr, 164 op/s
Oct 07 14:12:20 compute-0 ceph-mon[74295]: osdmap e208: 3 total, 3 up, 3 in
Oct 07 14:12:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2820408890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 07 14:12:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 07 14:12:20 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.793 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] creating snapshot(snap) on rbd image(afce8312-1adb-47e3-b920-51cdbb68b862) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:12:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663148886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.969 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.970 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:18Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.970 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.971 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.973 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <uuid>d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</uuid>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <name>instance-00000028</name>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1971655449</nova:name>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:12:19</nova:creationTime>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <nova:port uuid="51e23698-81ea-413b-922d-5c9081ccd2ad">
Oct 07 14:12:20 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <system>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="serial">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="uuid">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </system>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <os>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </os>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <features>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </features>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk">
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config">
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:5e:93:42"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <target dev="tap51e23698-81"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log" append="off"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <video>
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </video>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:12:20 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:12:20 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:12:20 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:12:20 compute-0 nova_compute[259550]: </domain>
Oct 07 14:12:20 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Preparing to wait for external event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.976 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:18Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.976 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.977 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.977 2 DEBUG os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51e23698-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51e23698-81, col_values=(('external_ids', {'iface-id': '51e23698-81ea-413b-922d-5c9081ccd2ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:93:42', 'vm-uuid': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:20 compute-0 NetworkManager[44949]: <info>  [1759846340.9854] manager: (tap51e23698-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:20 compute-0 nova_compute[259550]: 2025-10-07 14:12:20.998 2 INFO os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.192 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.193 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.194 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:5e:93:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.194 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Using config drive
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.218 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.236 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.261 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'keypairs' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.9 MiB/s wr, 299 op/s
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.675 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating config drive at /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.680 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxi1kwsto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 07 14:12:21 compute-0 ceph-mon[74295]: osdmap e209: 3 total, 3 up, 3 in
Oct 07 14:12:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3663148886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 07 14:12:21 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.840 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxi1kwsto" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.875 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:21 compute-0 nova_compute[259550]: 2025-10-07 14:12:21.880 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.081 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.082 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting local config drive /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config because it was imported into RBD.
Oct 07 14:12:22 compute-0 kernel: tap51e23698-81: entered promiscuous mode
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.1527] manager: (tap51e23698-81): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Oct 07 14:12:22 compute-0 ovn_controller[151684]: 2025-10-07T14:12:22Z|00281|binding|INFO|Claiming lport 51e23698-81ea-413b-922d-5c9081ccd2ad for this chassis.
Oct 07 14:12:22 compute-0 ovn_controller[151684]: 2025-10-07T14:12:22Z|00282|binding|INFO|51e23698-81ea-413b-922d-5c9081ccd2ad: Claiming fa:16:3e:5e:93:42 10.100.0.13
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.167 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.168 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.169 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.186 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b15b6325-57ab-474e-8d0d-67b8426cee1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.187 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:12:22 compute-0 ovn_controller[151684]: 2025-10-07T14:12:22Z|00283|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad ovn-installed in OVS
Oct 07 14:12:22 compute-0 ovn_controller[151684]: 2025-10-07T14:12:22Z|00284|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad up in Southbound
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.188 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.188 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41079891-9f26-4f06-af71-140aa9778881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 systemd-udevd[307972]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.189 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c45044fd-17c8-4531-8b43-9f67a54ac01e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 systemd-machined[214580]: New machine qemu-46-instance-00000028.
Oct 07 14:12:22 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.212 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba10a7-3825-4943-b927-0d6d272f6868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.2180] device (tap51e23698-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.2193] device (tap51e23698-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56e162a2-48aa-4d4a-b345-8314e9156ec7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.271 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab236e8-836e-4521-9c57-7a6f8951e707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.2781] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[378641a8-8515-4006-a7ca-a65ab8488460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.322 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfbe253-24b7-4b90-8da7-64b433a6d1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.325 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f08389-bb6f-4bed-a049-bba53a0517ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.3563] device (tapd2cb8ca0-10): carrier: link connected
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.365 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[23d95bda-a3b4-4f22-ad9a-49c150fb9f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.385 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33a6949e-7907-43c5-b8c7-876b428a40a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692592, 'reachable_time': 27735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308009, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.405 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[374b6cf9-9588-4b5b-9d1b-74a37f931943]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692592, 'tstamp': 692592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308010, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.426 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a3fed8-9d1d-4c69-aceb-e564c17a194e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692592, 'reachable_time': 27735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308011, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:12:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 57K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4390 syncs, 3.30 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8519 writes, 33K keys, 8519 commit groups, 1.0 writes per commit group, ingest: 32.79 MB, 0.05 MB/s
                                           Interval WAL: 8519 writes, 3358 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee4a63d-c4c0-474d-87a1-61053116b61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc95339d-130f-44bb-a5c0-76d86dcf8129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:22 compute-0 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct 07 14:12:22 compute-0 NetworkManager[44949]: <info>  [1759846342.5852] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:22 compute-0 ovn_controller[151684]: 2025-10-07T14:12:22Z|00285|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.630 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:12:22
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[160ef450-817d-4222-ba47-2763b7d7c66a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.636 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:12:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.636 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:22 compute-0 ceph-mon[74295]: pgmap v1429: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.9 MiB/s wr, 299 op/s
Oct 07 14:12:22 compute-0 ceph-mon[74295]: osdmap e210: 3 total, 3 up, 3 in
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:12:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.972 2 DEBUG nova.compute.manager [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:22 compute-0 nova_compute[259550]: 2025-10-07 14:12:22.974 2 DEBUG nova.compute.manager [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Processing event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:12:23 compute-0 podman[308047]: 2025-10-07 14:12:23.092908316 +0000 UTC m=+0.071285164 container create 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:12:23 compute-0 podman[308047]: 2025-10-07 14:12:23.047982445 +0000 UTC m=+0.026359323 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:12:23 compute-0 systemd[1]: Started libpod-conmon-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope.
Oct 07 14:12:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4b4a21acf9e777111cfeaa2eb1f82dd98dbafde04206345d29037f6c6bf1ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:12:23 compute-0 podman[308047]: 2025-10-07 14:12:23.21172763 +0000 UTC m=+0.190104508 container init 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:12:23 compute-0 podman[308047]: 2025-10-07 14:12:23.218799935 +0000 UTC m=+0.197176783 container start 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.225 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Snapshot image upload complete
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.226 2 DEBUG nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : New worker (308087) forked
Oct 07 14:12:23 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : Loading success.
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.289 2 INFO nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shelve offloading
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.304 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.305 2 DEBUG nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.309 2 DEBUG nova.network.neutron [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:12:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.798 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.798 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.7974691, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.798 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Started (Lifecycle Event)
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.801 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.805 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.810 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance spawned successfully.
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.810 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.828 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.832 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.841 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.842 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.843 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.843 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.844 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.844 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.854 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.80283, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.854 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Paused (Lifecycle Event)
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.885 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.889 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.8048258, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.889 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Resumed (Lifecycle Event)
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.912 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.920 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.984 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.985 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:23 compute-0 nova_compute[259550]: 2025-10-07 14:12:23.985 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:12:24 compute-0 nova_compute[259550]: 2025-10-07 14:12:24.041 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 14:12:24 compute-0 ceph-mon[74295]: pgmap v1431: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Oct 07 14:12:24 compute-0 nova_compute[259550]: 2025-10-07 14:12:24.990 2 DEBUG nova.network.neutron [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.019 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:25 compute-0 podman[308122]: 2025-10-07 14:12:25.097106379 +0000 UTC m=+0.080071686 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:12:25 compute-0 podman[308123]: 2025-10-07 14:12:25.097555621 +0000 UTC m=+0.074163031 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.598 2 DEBUG nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.599 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.599 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.600 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.600 2 DEBUG nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.601 2 WARNING nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state None.
Oct 07 14:12:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 323 op/s
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.663 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.664 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.665 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.666 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.666 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.668 2 INFO nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Terminating instance
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.670 2 DEBUG nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:12:25 compute-0 kernel: tap51e23698-81 (unregistering): left promiscuous mode
Oct 07 14:12:25 compute-0 NetworkManager[44949]: <info>  [1759846345.7291] device (tap51e23698-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:12:25 compute-0 ovn_controller[151684]: 2025-10-07T14:12:25Z|00286|binding|INFO|Releasing lport 51e23698-81ea-413b-922d-5c9081ccd2ad from this chassis (sb_readonly=0)
Oct 07 14:12:25 compute-0 ovn_controller[151684]: 2025-10-07T14:12:25Z|00287|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad down in Southbound
Oct 07 14:12:25 compute-0 ovn_controller[151684]: 2025-10-07T14:12:25Z|00288|binding|INFO|Removing iface tap51e23698-81 ovn-installed in OVS
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.755 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.756 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:12:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.757 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:12:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a00aa0e-1910-47e9-ade6-c92999e35875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.761 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore
Oct 07 14:12:25 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 07 14:12:25 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 3.401s CPU time.
Oct 07 14:12:25 compute-0 systemd-machined[214580]: Machine qemu-46-instance-00000028 terminated.
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 07 14:12:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 07 14:12:25 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.913 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.914 2 DEBUG nova.objects.instance [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.924 2 DEBUG nova.virt.libvirt.vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:24Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.925 2 DEBUG nova.network.os_vif_util [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.925 2 DEBUG nova.network.os_vif_util [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.926 2 DEBUG os_vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51e23698-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : haproxy version is 2.8.14-c23fe91
Oct 07 14:12:25 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : path to executable is /usr/sbin/haproxy
Oct 07 14:12:25 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [WARNING]  (308083) : Exiting Master process...
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.944 2 INFO os_vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')
Oct 07 14:12:25 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [ALERT]    (308083) : Current worker (308087) exited with code 143 (Terminated)
Oct 07 14:12:25 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [WARNING]  (308083) : All workers exited. Exiting... (0)
Oct 07 14:12:25 compute-0 systemd[1]: libpod-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope: Deactivated successfully.
Oct 07 14:12:25 compute-0 podman[308187]: 2025-10-07 14:12:25.956708124 +0000 UTC m=+0.056496887 container died 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:25 compute-0 nova_compute[259550]: 2025-10-07 14:12:25.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4b4a21acf9e777111cfeaa2eb1f82dd98dbafde04206345d29037f6c6bf1ad-merged.mount: Deactivated successfully.
Oct 07 14:12:26 compute-0 podman[308187]: 2025-10-07 14:12:26.003626827 +0000 UTC m=+0.103415590 container cleanup 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:12:26 compute-0 systemd[1]: libpod-conmon-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope: Deactivated successfully.
Oct 07 14:12:26 compute-0 podman[308244]: 2025-10-07 14:12:26.072272701 +0000 UTC m=+0.043868074 container remove 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.078 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bedbcec3-ded7-4eba-b39f-1d1e45aab58b]: (4, ('Tue Oct  7 02:12:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e)\n96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e\nTue Oct  7 02:12:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e)\n96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fbaafb79-3481-4c51-968b-18ebfd30c1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.081 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:26 compute-0 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbb0100-b25b-4768-badd-cbcb35b00ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.141 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5f913e-9d8c-4e17-963d-22da99dcebcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.152 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10900f6-d974-411b-a854-bf23a05a8b8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.171 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18712b0a-9d57-4bff-9055-132e7487e668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692582, 'reachable_time': 43051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308262, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.174 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:12:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.175 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c7bdee-ad9e-4098-98a8-457a96487cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.369 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.369 2 DEBUG nova.objects.instance [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.384 2 DEBUG nova.virt.libvirt.vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2085542773',display_name='tempest-DeleteServersTestJSON-server-2085542773',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2085542773',id=41,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-2g13mdnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member',shelved_at='2025-10-07T14:12:23.226411',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='afce8312-1adb-47e3-b920-51cdbb68b862'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:19Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=9197df26-c585-4b8f-8ed6-695ee8e233a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.385 2 DEBUG nova.network.os_vif_util [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.386 2 DEBUG nova.network.os_vif_util [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.387 2 DEBUG os_vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61102ef2-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.402 2 INFO nova.virt.libvirt.driver [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting instance files /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.403 2 INFO nova.virt.libvirt.driver [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deletion of /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del complete
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.406 2 INFO os_vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43')
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.577 2 INFO nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.579 2 DEBUG oslo.service.loopingcall [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.579 2 DEBUG nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.580 2 DEBUG nova.network.neutron [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.885 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deleting instance files /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8_del
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.886 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deletion of /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8_del complete
Oct 07 14:12:26 compute-0 ceph-mon[74295]: pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 323 op/s
Oct 07 14:12:26 compute-0 ceph-mon[74295]: osdmap e211: 3 total, 3 up, 3 in
Oct 07 14:12:26 compute-0 nova_compute[259550]: 2025-10-07 14:12:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.212 2 INFO nova.scheduler.client.report [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 9197df26-c585-4b8f-8ed6-695ee8e233a8
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.367 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.368 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.422 2 DEBUG oslo_concurrency.processutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.0 MiB/s wr, 172 op/s
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.763 2 DEBUG nova.compute.manager [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG nova.compute.manager [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing instance network info cache due to event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.765 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602547399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.946 2 DEBUG oslo_concurrency.processutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.957 2 DEBUG nova.compute.provider_tree [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:27 compute-0 nova_compute[259550]: 2025-10-07 14:12:27.983 2 DEBUG nova.network.neutron [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.016 2 DEBUG nova.scheduler.client.report [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.035 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.097 2 INFO nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 1.52 seconds to deallocate network for instance.
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.148 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.295 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.297 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.377 2 DEBUG oslo_concurrency.processutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.448 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637371592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.625 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.776 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.779 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.832 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.834 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4196MB free_disk=59.92162322998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.834 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502396541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.868 2 DEBUG oslo_concurrency.processutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.874 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:12:28 compute-0 nova_compute[259550]: 2025-10-07 14:12:28.880 2 DEBUG nova.compute.provider_tree [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:28 compute-0 ceph-mon[74295]: pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.0 MiB/s wr, 172 op/s
Oct 07 14:12:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3602547399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3637371592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/502396541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.015 2 DEBUG nova.scheduler.client.report [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.080 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.137 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.140 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.183 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updated VIF entry in instance network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.184 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": null, "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap61102ef2-43", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.224 2 INFO nova.scheduler.client.report [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance d7bf9b41-6a09-4d71-8e33-cb428d71b2a8
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.310 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.341 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e86b418-6e7f-4e2e-9146-a847920ed11f has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.341 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.342 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.386 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.545 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 169 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 278 op/s
Oct 07 14:12:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842463122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.848 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.854 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 07 14:12:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 07 14:12:29 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 07 14:12:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2842463122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:29 compute-0 nova_compute[259550]: 2025-10-07 14:12:29.936 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.011 2 DEBUG nova.compute.manager [req-b5e4d3be-5034-421e-b529-a7e34246483f req-2d99fe3d-9400-4ceb-b34e-55cca40ad3eb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-deleted-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.024 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.030 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.031 2 INFO nova.compute.claims [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.350 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006067363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.836 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.845 2 DEBUG nova.compute.provider_tree [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.889 2 DEBUG nova.scheduler.client.report [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.926 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.927 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:12:30 compute-0 ceph-mon[74295]: pgmap v1435: 305 pgs: 305 active+clean; 169 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 278 op/s
Oct 07 14:12:30 compute-0 ceph-mon[74295]: osdmap e212: 3 total, 3 up, 3 in
Oct 07 14:12:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4006067363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.981 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:12:30 compute-0 nova_compute[259550]: 2025-10-07 14:12:30.981 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.043 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.094 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.207 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.211 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.212 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating image(s)
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.234 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.259 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.286 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.290 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.374 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.375 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.376 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.376 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.399 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.403 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 313 op/s
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.778 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.849 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.886 2 DEBUG nova.policy [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf568df6a8d461a83d287493b393589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de6794f6448744329cf2081eb5b889a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:12:31 compute-0 ceph-mon[74295]: pgmap v1437: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 313 op/s
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.974 2 DEBUG nova.objects.instance [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.993 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.993 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ensure instance console log exists: /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:31 compute-0 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0011979217539232923 of space, bias 1.0, pg target 0.3593765261769877 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:12:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:12:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:12:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:12:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:12:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:12:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:12:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:12:32 compute-0 nova_compute[259550]: 2025-10-07 14:12:32.995 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Successfully created port: 75d3896e-b08f-4485-b4d7-dff914242597 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.227 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846338.2266135, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.228 2 INFO nova.compute.manager [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Stopped (Lifecycle Event)
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.248 2 DEBUG nova.compute.manager [None req-365da1a0-58e6-4ab1-bfe0-63033f49af11 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 164 op/s
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.800 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Successfully updated port: 75d3896e-b08f-4485-b4d7-dff914242597 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.816 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.817 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquired lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:33 compute-0 nova_compute[259550]: 2025-10-07 14:12:33.817 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:12:33 compute-0 ceph-mon[74295]: pgmap v1438: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 164 op/s
Oct 07 14:12:34 compute-0 nova_compute[259550]: 2025-10-07 14:12:34.779 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:12:34 compute-0 nova_compute[259550]: 2025-10-07 14:12:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:34 compute-0 nova_compute[259550]: 2025-10-07 14:12:34.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.020 2 DEBUG nova.compute.manager [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-changed-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.021 2 DEBUG nova.compute.manager [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Refreshing instance network info cache due to event network-changed-75d3896e-b08f-4485-b4d7-dff914242597. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.021 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.095 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.096 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.114 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.183 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.184 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.194 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.194 2 INFO nova.compute.claims [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.304 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.519 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.540 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Releasing lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.541 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance network_info: |[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.542 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.542 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Refreshing network info cache for port 75d3896e-b08f-4485-b4d7-dff914242597 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.545 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start _get_guest_xml network_info=[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.551 2 WARNING nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.562 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.562 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.566 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.567 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.567 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.571 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.574 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Oct 07 14:12:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465989475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.772 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.779 2 DEBUG nova.compute.provider_tree [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.798 2 DEBUG nova.scheduler.client.report [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.823 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.824 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.873 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.873 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.892 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:12:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 07 14:12:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 07 14:12:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.917 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.942 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.942 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.964 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:12:35 compute-0 nova_compute[259550]: 2025-10-07 14:12:35.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.027 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.028 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.029 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating image(s)
Oct 07 14:12:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614234729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.069 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.095 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.116 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.120 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.162 2 DEBUG nova.policy [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ba82e1a9f12417391d78758ae9bb83c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f5ee4e560ed4660a6685a086282a370', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.172 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.197 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.202 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.239 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.241 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.242 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.243 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.268 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.272 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.315 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.316 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.325 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.325 2 INFO nova.compute.claims [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.483 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.608 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.680 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] resizing rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:12:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783457451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.712 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Successfully created port: 130e9c49-53e8-495e-ac38-4d3e63f49011 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.720 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:36 compute-0 ceph-mon[74295]: pgmap v1439: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Oct 07 14:12:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1465989475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:36 compute-0 ceph-mon[74295]: osdmap e213: 3 total, 3 up, 3 in
Oct 07 14:12:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2614234729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1783457451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.724 2 DEBUG nova.virt.libvirt.vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:31Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.725 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.726 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.727 2 DEBUG nova.objects.instance [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.745 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <uuid>4e86b418-6e7f-4e2e-9146-a847920ed11f</uuid>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <name>instance-0000002a</name>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1086034539</nova:name>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:12:35</nova:creationTime>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <nova:port uuid="75d3896e-b08f-4485-b4d7-dff914242597">
Oct 07 14:12:36 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <system>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="serial">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="uuid">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </system>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <os>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </os>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <features>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </features>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk">
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config">
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a0:f3:88"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <target dev="tap75d3896e-b0"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log" append="off"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <video>
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </video>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:12:36 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:12:36 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:12:36 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:12:36 compute-0 nova_compute[259550]: </domain>
Oct 07 14:12:36 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.746 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Preparing to wait for external event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.748 2 DEBUG nova.virt.libvirt.vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:31Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.749 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.749 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.750 2 DEBUG os_vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75d3896e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75d3896e-b0, col_values=(('external_ids', {'iface-id': '75d3896e-b08f-4485-b4d7-dff914242597', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f3:88', 'vm-uuid': '4e86b418-6e7f-4e2e-9146-a847920ed11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:36 compute-0 NetworkManager[44949]: <info>  [1759846356.7597] manager: (tap75d3896e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.796 2 INFO os_vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.801 2 DEBUG nova.objects.instance [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'migration_context' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.839 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.839 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Ensure instance console log exists: /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.840 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.840 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.841 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.865 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.866 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.866 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:a0:f3:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.867 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Using config drive
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.893 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:36 compute-0 nova_compute[259550]: 2025-10-07 14:12:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:12:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:12:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437332903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.047 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.054 2 DEBUG nova.compute.provider_tree [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.068 2 DEBUG nova.scheduler.client.report [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.099 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.099 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.153 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.153 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.178 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.202 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.219 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updated VIF entry in instance network info cache for port 75d3896e-b08f-4485-b4d7-dff914242597. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.220 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.264 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.307 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.309 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.309 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating image(s)
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.329 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.353 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.381 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.386 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.430 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating config drive at /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.436 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kg0elkj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.477 2 DEBUG nova.policy [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.482 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Successfully updated port: 130e9c49-53e8-495e-ac38-4d3e63f49011 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.486 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.486 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.487 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.487 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.507 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.511 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.550 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.551 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.551 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG nova.compute.manager [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG nova.compute.manager [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.582 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kg0elkj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.607 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.613 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.706 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:12:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2437332903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.861 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.894 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.895 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting local config drive /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config because it was imported into RBD.
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.941 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:12:37 compute-0 kernel: tap75d3896e-b0: entered promiscuous mode
Oct 07 14:12:37 compute-0 NetworkManager[44949]: <info>  [1759846357.9566] manager: (tap75d3896e-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct 07 14:12:37 compute-0 ovn_controller[151684]: 2025-10-07T14:12:37Z|00289|binding|INFO|Claiming lport 75d3896e-b08f-4485-b4d7-dff914242597 for this chassis.
Oct 07 14:12:37 compute-0 ovn_controller[151684]: 2025-10-07T14:12:37Z|00290|binding|INFO|75d3896e-b08f-4485-b4d7-dff914242597: Claiming fa:16:3e:a0:f3:88 10.100.0.7
Oct 07 14:12:37 compute-0 ovn_controller[151684]: 2025-10-07T14:12:37Z|00291|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 ovn-installed in OVS
Oct 07 14:12:37 compute-0 ovn_controller[151684]: 2025-10-07T14:12:37Z|00292|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 up in Southbound
Oct 07 14:12:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:37.998 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:37 compute-0 nova_compute[259550]: 2025-10-07 14:12:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.000 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:38 compute-0 systemd-udevd[309067]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.015 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a96dd042-d359-4ad7-8b41-c80102392740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.016 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.019 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3bdaba-8353-4263-9369-9909209f7be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 systemd-machined[214580]: New machine qemu-47-instance-0000002a.
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b33304d6-1a9e-4ee3-9093-b947c0ca3975]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct 07 14:12:38 compute-0 NetworkManager[44949]: <info>  [1759846358.0434] device (tap75d3896e-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.042 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5b45562a-edcd-43ad-9ec6-f32964238d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 NetworkManager[44949]: <info>  [1759846358.0450] device (tap75d3896e-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.058 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[896be126-b75c-4cb6-afe6-a1abdda2b944]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 podman[309047]: 2025-10-07 14:12:38.096198067 +0000 UTC m=+0.106237683 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.105 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b05db223-5c58-4851-9f9a-a923db842333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 systemd-udevd[309075]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f285a6-586a-466e-8281-e8c021e1a4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 NetworkManager[44949]: <info>  [1759846358.1204] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Oct 07 14:12:38 compute-0 podman[309051]: 2025-10-07 14:12:38.127439518 +0000 UTC m=+0.134288631 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.147 2 DEBUG nova.objects.instance [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.160 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[69e85286-7e6f-4cce-bbdc-5d623fcad6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.163 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[74ee1f33-0d08-417b-a322-949c094a911d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.162 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Ensure instance console log exists: /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.164 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:38 compute-0 NetworkManager[44949]: <info>  [1759846358.1921] device (tapd2cb8ca0-10): carrier: link connected
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4d76d6de-cf94-450f-b3d2-1b1aca648207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.217 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54bbd03b-b374-437d-8496-da80f01d6f9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694175, 'reachable_time': 41094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309146, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[587225ff-c9b5-49c3-a4be-15acfbdeb94c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694175, 'tstamp': 694175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309147, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d447ab0e-4f2c-457e-b059-21f49cd98080]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694175, 'reachable_time': 41094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309148, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c828c18-f29d-45d3-925d-8b533b61114f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e27e0677-e166-47dc-994b-b9684cc5d435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.365 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.366 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.366 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:38 compute-0 NetworkManager[44949]: <info>  [1759846358.3694] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 07 14:12:38 compute-0 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.371 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:38 compute-0 ovn_controller[151684]: 2025-10-07T14:12:38Z|00293|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.393 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.395 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ec9309-7cd1-4fc4-9c90-8b3d67a3b5ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.395 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:12:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.396 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:12:38 compute-0 ceph-mon[74295]: pgmap v1441: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 07 14:12:38 compute-0 podman[309222]: 2025-10-07 14:12:38.823030623 +0000 UTC m=+0.070384191 container create e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 14:12:38 compute-0 systemd[1]: Started libpod-conmon-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope.
Oct 07 14:12:38 compute-0 podman[309222]: 2025-10-07 14:12:38.786098482 +0000 UTC m=+0.033452100 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:12:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd8bb89be17d12218ba71ac6eeda91b03262ee72f2eb23233a59ffdba88634a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:12:38 compute-0 podman[309222]: 2025-10-07 14:12:38.936325301 +0000 UTC m=+0.183678909 container init e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:12:38 compute-0 podman[309222]: 2025-10-07 14:12:38.942814272 +0000 UTC m=+0.190167840 container start e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.955 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Successfully created port: 23002fad-2420-4b68-bfd7-2d90f8b5df6d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:12:38 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : New worker (309243) forked
Oct 07 14:12:38 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : Loading success.
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.984 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846358.9837368, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:38 compute-0 nova_compute[259550]: 2025-10-07 14:12:38.985 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Started (Lifecycle Event)
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.005 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.011 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846358.9840682, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.012 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Paused (Lifecycle Event)
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.030 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.035 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:39 compute-0 nova_compute[259550]: 2025-10-07 14:12:39.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 140 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.5 MiB/s wr, 153 op/s
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.025 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.046 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance network_info: |[{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.050 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start _get_guest_xml network_info=[{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.054 2 WARNING nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.060 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.060 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.069 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.070 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.076 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.189 2 DEBUG nova.compute.manager [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.191 2 DEBUG nova.compute.manager [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Processing event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.191 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.195 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846360.1951532, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.195 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Resumed (Lifecycle Event)
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.198 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.207 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance spawned successfully.
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.208 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.214 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.226 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.228 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.228 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.239 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.282 2 INFO nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 9.07 seconds to spawn the instance on the hypervisor.
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.283 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.353 2 INFO nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 11.29 seconds to build instance.
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.374 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813252651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.552 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.573 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.578 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.624 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Successfully updated port: 23002fad-2420-4b68-bfd7-2d90f8b5df6d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.638 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.639 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.640 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.722 2 DEBUG nova.compute.manager [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-changed-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.723 2 DEBUG nova.compute.manager [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Refreshing instance network info cache due to event network-changed-23002fad-2420-4b68-bfd7-2d90f8b5df6d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.724 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:40 compute-0 ceph-mon[74295]: pgmap v1442: 305 pgs: 305 active+clean; 140 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.5 MiB/s wr, 153 op/s
Oct 07 14:12:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3813252651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.847 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:12:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.912 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846345.9105232, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.912 2 INFO nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Stopped (Lifecycle Event)
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.937 2 DEBUG nova.compute.manager [None req-fbc6f91f-5906-4f38-8ac6-41eb4102435c - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:40 compute-0 nova_compute[259550]: 2025-10-07 14:12:40.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814303154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.090 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.093 2 DEBUG nova.virt.libvirt.vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:35Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.094 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.095 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.096 2 DEBUG nova.objects.instance [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.115 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <uuid>867307d6-0b3f-4a3e-9dc4-a05221e2f080</uuid>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <name>instance-0000002b</name>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:name>tempest-SecurityGroupsTestJSON-server-84480482</nova:name>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:12:40</nova:creationTime>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <nova:port uuid="130e9c49-53e8-495e-ac38-4d3e63f49011">
Oct 07 14:12:41 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <system>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="serial">867307d6-0b3f-4a3e-9dc4-a05221e2f080</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="uuid">867307d6-0b3f-4a3e-9dc4-a05221e2f080</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </system>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <os>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </os>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <features>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </features>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk">
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config">
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:03:73:59"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <target dev="tap130e9c49-53"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/console.log" append="off"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <video>
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </video>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:12:41 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:12:41 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:12:41 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:12:41 compute-0 nova_compute[259550]: </domain>
Oct 07 14:12:41 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Preparing to wait for external event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.124 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.125 2 DEBUG nova.virt.libvirt.vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:35Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.125 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.126 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.127 2 DEBUG os_vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.135 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap130e9c49-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.135 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap130e9c49-53, col_values=(('external_ids', {'iface-id': '130e9c49-53e8-495e-ac38-4d3e63f49011', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:73:59', 'vm-uuid': '867307d6-0b3f-4a3e-9dc4-a05221e2f080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:41 compute-0 NetworkManager[44949]: <info>  [1759846361.1384] manager: (tap130e9c49-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.145 2 INFO os_vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53')
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.224 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.225 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.225 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No VIF found with MAC fa:16:3e:03:73:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.226 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Using config drive
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.248 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.813 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.814 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2814303154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:41 compute-0 nova_compute[259550]: 2025-10-07 14:12:41.837 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.092 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating config drive at /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.100 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmja815un execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.249 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmja815un" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.273 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.277 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.314 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.331 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.332 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance network_info: |[{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.332 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.333 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Refreshing network info cache for port 23002fad-2420-4b68-bfd7-2d90f8b5df6d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.336 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start _get_guest_xml network_info=[{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.343 2 WARNING nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.350 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.351 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.354 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.355 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.355 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.359 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.362 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.451 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.452 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deleting local config drive /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config because it was imported into RBD.
Oct 07 14:12:42 compute-0 NetworkManager[44949]: <info>  [1759846362.5086] manager: (tap130e9c49-53): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct 07 14:12:42 compute-0 kernel: tap130e9c49-53: entered promiscuous mode
Oct 07 14:12:42 compute-0 ovn_controller[151684]: 2025-10-07T14:12:42Z|00294|binding|INFO|Claiming lport 130e9c49-53e8-495e-ac38-4d3e63f49011 for this chassis.
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:42 compute-0 ovn_controller[151684]: 2025-10-07T14:12:42Z|00295|binding|INFO|130e9c49-53e8-495e-ac38-4d3e63f49011: Claiming fa:16:3e:03:73:59 10.100.0.5
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.609 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:73:59 10.100.0.5'], port_security=['fa:16:3e:03:73:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '867307d6-0b3f-4a3e-9dc4-a05221e2f080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=130e9c49-53e8-495e-ac38-4d3e63f49011) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.610 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 130e9c49-53e8-495e-ac38-4d3e63f49011 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.612 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8619153-4e10-4746-a3a8-3a673d7940b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.626 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71870f0f-c1 in ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:12:42 compute-0 systemd-machined[214580]: New machine qemu-48-instance-0000002b.
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.629 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71870f0f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.630 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc04bae-0e59-4224-a1d7-a82680d1ebf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b910d64-6c68-484d-a51c-a117e0b41e62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 systemd-udevd[309408]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:42 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.647 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c029d2d3-fa43-468a-adf7-75d2bdc53674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 NetworkManager[44949]: <info>  [1759846362.6516] device (tap130e9c49-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:12:42 compute-0 NetworkManager[44949]: <info>  [1759846362.6527] device (tap130e9c49-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.676 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaf2188-e59f-49a8-b58b-91c7f5758372]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_controller[151684]: 2025-10-07T14:12:42Z|00296|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 ovn-installed in OVS
Oct 07 14:12:42 compute-0 ovn_controller[151684]: 2025-10-07T14:12:42Z|00297|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 up in Southbound
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.708 2 DEBUG nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.710 2 WARNING nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.713 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3eeac796-3a58-4e23-bcb8-41149ffdd70f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbc2c01-4212-4da8-93ac-87167256b71b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 NetworkManager[44949]: <info>  [1759846362.7202] manager: (tap71870f0f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct 07 14:12:42 compute-0 systemd-udevd[309412]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.768 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0cea11-4c7b-4383-b094-0c74c40c1ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.772 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4df21070-83d8-44bf-be3b-a6a12801fdac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 NetworkManager[44949]: <info>  [1759846362.8021] device (tap71870f0f-c0): carrier: link connected
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.810 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a53702-a771-4934-bf80-8edcaf425a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ceph-mon[74295]: pgmap v1443: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75130151-e75d-48f6-a925-979966ea4936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 24274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309441, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5678df53-6c23-4015-a20e-980f1aa66fcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:d112'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694636, 'tstamp': 694636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309442, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.883 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6955afc1-93c7-4694-ab55-c59134d601f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 24274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309443, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.892 2 DEBUG nova.compute.manager [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.893 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.893 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.894 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.894 2 DEBUG nova.compute.manager [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Processing event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:12:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.920 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4115224-a949-46fa-9c3c-3a2f3962be3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141100417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:42 compute-0 nova_compute[259550]: 2025-10-07 14:12:42.974 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.008 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.010 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7edf2aef-dca2-4083-a557-54105ec9f262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.012 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 NetworkManager[44949]: <info>  [1759846363.0158] manager: (tap71870f0f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.014 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:43 compute-0 kernel: tap71870f0f-c0: entered promiscuous mode
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.020 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 ovn_controller[151684]: 2025-10-07T14:12:43Z|00298|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.024 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adf94ed1-4271-4b77-9177-4cac7355b365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.027 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:12:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.029 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'env', 'PROCESS_TAG=haproxy-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71870f0f-c94f-4d32-8df4-00da4d6d4129.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:43 compute-0 podman[309557]: 2025-10-07 14:12:43.462401162 +0000 UTC m=+0.072334153 container create 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:12:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:12:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739791835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:43 compute-0 podman[309557]: 2025-10-07 14:12:43.421834165 +0000 UTC m=+0.031767186 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.521 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:43 compute-0 systemd[1]: Started libpod-conmon-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.524 2 DEBUG nova.virt.libvirt.vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-18712
82594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:37Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.525 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.525 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.527 2 DEBUG nova.objects.instance [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820a27a5171c3bf28772eb4c1b067f6a72a5b80227d7b45bd095d8022b7260b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.552 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.554 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.55407, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Started (Lifecycle Event)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.557 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.561 2 INFO nova.virt.libvirt.driver [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance spawned successfully.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.562 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:43 compute-0 podman[309557]: 2025-10-07 14:12:43.573003029 +0000 UTC m=+0.182936040 container init 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:12:43 compute-0 podman[309557]: 2025-10-07 14:12:43.578589666 +0000 UTC m=+0.188522657 container start 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:12:43 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : New worker (309581) forked
Oct 07 14:12:43 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : Loading success.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.628 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <uuid>4fef229d-c42d-43ac-a3ff-527ca68d3796</uuid>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <name>instance-0000002c</name>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersTestJSON-server-1001011532</nova:name>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:12:42</nova:creationTime>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <nova:port uuid="23002fad-2420-4b68-bfd7-2d90f8b5df6d">
Oct 07 14:12:43 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <system>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="serial">4fef229d-c42d-43ac-a3ff-527ca68d3796</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="uuid">4fef229d-c42d-43ac-a3ff-527ca68d3796</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </system>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <os>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </os>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <features>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </features>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fef229d-c42d-43ac-a3ff-527ca68d3796_disk">
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config">
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:12:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:4c:78:88"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <target dev="tap23002fad-24"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/console.log" append="off"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <video>
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </video>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:12:43 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:12:43 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:12:43 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:12:43 compute-0 nova_compute[259550]: </domain>
Oct 07 14:12:43 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.629 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Preparing to wait for external event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.virt.libvirt.vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:37Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG os_vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23002fad-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23002fad-24, col_values=(('external_ids', {'iface-id': '23002fad-2420-4b68-bfd7-2d90f8b5df6d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:78:88', 'vm-uuid': '4fef229d-c42d-43ac-a3ff-527ca68d3796'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:43 compute-0 NetworkManager[44949]: <info>  [1759846363.6387] manager: (tap23002fad-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.646 2 INFO os_vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24')
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.671 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.675 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.676 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.676 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.677 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.684 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.685 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.727 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.729 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.554585, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.729 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Paused (Lifecycle Event)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.755 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.759 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.760 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.760 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:4c:78:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.762 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Using config drive
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.782 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.788 2 INFO nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 7.76 seconds to spawn the instance on the hypervisor.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.789 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.792 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.5568259, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.793 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Resumed (Lifecycle Event)
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.809 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updated VIF entry in instance network info cache for port 23002fad-2420-4b68-bfd7-2d90f8b5df6d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.810 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/141100417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/739791835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.839 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.848 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.873 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.891 2 INFO nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 8.73 seconds to build instance.
Oct 07 14:12:43 compute-0 nova_compute[259550]: 2025-10-07 14:12:43.909 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.354 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating config drive at /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.359 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef6p6s81 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.503 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef6p6s81" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.530 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.534 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.665 2 INFO nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Rebuilding instance
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.695 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.695 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deleting local config drive /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config because it was imported into RBD.
Oct 07 14:12:44 compute-0 kernel: tap23002fad-24: entered promiscuous mode
Oct 07 14:12:44 compute-0 NetworkManager[44949]: <info>  [1759846364.7389] manager: (tap23002fad-24): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Oct 07 14:12:44 compute-0 systemd-udevd[309437]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:44 compute-0 ovn_controller[151684]: 2025-10-07T14:12:44Z|00299|binding|INFO|Claiming lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d for this chassis.
Oct 07 14:12:44 compute-0 ovn_controller[151684]: 2025-10-07T14:12:44Z|00300|binding|INFO|23002fad-2420-4b68-bfd7-2d90f8b5df6d: Claiming fa:16:3e:4c:78:88 10.100.0.8
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.751 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.753 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct 07 14:12:44 compute-0 NetworkManager[44949]: <info>  [1759846364.7560] device (tap23002fad-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.756 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:44 compute-0 NetworkManager[44949]: <info>  [1759846364.7572] device (tap23002fad-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:12:44 compute-0 ovn_controller[151684]: 2025-10-07T14:12:44Z|00301|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d ovn-installed in OVS
Oct 07 14:12:44 compute-0 ovn_controller[151684]: 2025-10-07T14:12:44Z|00302|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d up in Southbound
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8c43d-b178-4230-9812-80f169eae22f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.771 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:12:44 compute-0 nova_compute[259550]: 2025-10-07 14:12:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.775 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4421c561-617d-4ba3-92e5-4a38dffde359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.777 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f749f85-ce38-48a3-98b5-1959863e43da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 systemd-machined[214580]: New machine qemu-49-instance-0000002c.
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.799 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8b25a-ef03-4f81-acb5-1d696fa33bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9156fe84-3ac0-4036-acf7-30c8f6d31abf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ceph-mon[74295]: pgmap v1444: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.863 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71f5d012-01f1-4c93-829c-725b3f5e2c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.869 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82f4f805-fe60-47ca-9dcc-8aa1b2ee7009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 NetworkManager[44949]: <info>  [1759846364.8704] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.911 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1367d6c3-7f46-4ef4-9572-92ce269743d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.918 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[122c11b5-7080-42a4-9d46-2b2b6e0ece6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 NetworkManager[44949]: <info>  [1759846364.9448] device (tap8accac57-a0): carrier: link connected
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.952 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c88f78c-4b14-4b72-b8a1-a23e292337bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c647058-2d74-4c86-a501-9439a23861a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694850, 'reachable_time': 16785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309680, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.994 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c464c7b9-8b97-4362-9ecd-de8e86a2d10b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694850, 'tstamp': 694850}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309681, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.015 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c0be2d-96d3-4bb9-bb3a-344b6d10d0f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694850, 'reachable_time': 16785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309682, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.025 2 DEBUG nova.compute.manager [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.027 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.028 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.028 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.029 2 DEBUG nova.compute.manager [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Processing event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.052 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3a5360-9b06-461e-9a83-eed219c93137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.092 2 DEBUG nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.092 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.093 2 WARNING nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received unexpected event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with vm_state active and task_state None.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.095 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.107 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.125 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aca2c0fb-39c1-4075-bf80-8bca0131925c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.127 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.127 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.128 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:45 compute-0 NetworkManager[44949]: <info>  [1759846365.1305] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct 07 14:12:45 compute-0 kernel: tap8accac57-a0: entered promiscuous mode
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.133 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:45 compute-0 ovn_controller[151684]: 2025-10-07T14:12:45Z|00303|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.155 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.156 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb04cd4-4d2d-4367-beb6-8d0e75f918e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.157 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:12:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.158 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.175 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.188 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.201 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.219 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.234 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.242 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:12:45 compute-0 podman[309755]: 2025-10-07 14:12:45.534186859 +0000 UTC m=+0.052654084 container create fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:12:45 compute-0 systemd[1]: Started libpod-conmon-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope.
Oct 07 14:12:45 compute-0 podman[309755]: 2025-10-07 14:12:45.505161637 +0000 UTC m=+0.023628892 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:12:45 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5c37de559ab2f97b903232e4ec93e0e34fbf169fac94209f83b9bd008f0ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:12:45 compute-0 podman[309755]: 2025-10-07 14:12:45.634441495 +0000 UTC m=+0.152908750 container init fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:12:45 compute-0 podman[309755]: 2025-10-07 14:12:45.640296818 +0000 UTC m=+0.158764043 container start fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:12:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : New worker (309775) forked
Oct 07 14:12:45 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : Loading success.
Oct 07 14:12:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.717 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7164025, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.717 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Started (Lifecycle Event)
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.720 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.724 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.728 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance spawned successfully.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.729 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.740 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.753 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.757 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.757 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.759 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.784 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.784 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7166505, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.785 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Paused (Lifecycle Event)
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.849 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7229667, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Resumed (Lifecycle Event)
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.859 2 INFO nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 8.55 seconds to spawn the instance on the hypervisor.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.860 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.877 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.881 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:12:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.936 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.963 2 INFO nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 9.80 seconds to build instance.
Oct 07 14:12:45 compute-0 nova_compute[259550]: 2025-10-07 14:12:45.978 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:46 compute-0 nova_compute[259550]: 2025-10-07 14:12:46.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:46 compute-0 ceph-mon[74295]: pgmap v1445: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.176 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.176 2 DEBUG nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.176 2 WARNING nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state active and task_state None.
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.279 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.280 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.280 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.284 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.284 2 DEBUG nova.objects.instance [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'flavor' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.312 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.508 2 DEBUG nova.compute.manager [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG nova.compute.manager [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:47 compute-0 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Oct 07 14:12:48 compute-0 nova_compute[259550]: 2025-10-07 14:12:48.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:48 compute-0 ceph-mon[74295]: pgmap v1446: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.050 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.051 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.085 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.628 2 DEBUG nova.compute.manager [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.628 2 DEBUG nova.compute.manager [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:12:49 compute-0 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:12:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Oct 07 14:12:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:50 compute-0 ceph-mon[74295]: pgmap v1447: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Oct 07 14:12:51 compute-0 nova_compute[259550]: 2025-10-07 14:12:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.7 MiB/s wr, 216 op/s
Oct 07 14:12:51 compute-0 nova_compute[259550]: 2025-10-07 14:12:51.681 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:12:51 compute-0 nova_compute[259550]: 2025-10-07 14:12:51.682 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:12:51 compute-0 nova_compute[259550]: 2025-10-07 14:12:51.697 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:12:52 compute-0 ceph-mon[74295]: pgmap v1448: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.7 MiB/s wr, 216 op/s
Oct 07 14:12:53 compute-0 nova_compute[259550]: 2025-10-07 14:12:53.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 211 op/s
Oct 07 14:12:53 compute-0 ovn_controller[151684]: 2025-10-07T14:12:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:f3:88 10.100.0.7
Oct 07 14:12:53 compute-0 ovn_controller[151684]: 2025-10-07T14:12:53Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:f3:88 10.100.0.7
Oct 07 14:12:55 compute-0 ceph-mon[74295]: pgmap v1449: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 211 op/s
Oct 07 14:12:55 compute-0 nova_compute[259550]: 2025-10-07 14:12:55.294 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:12:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Oct 07 14:12:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:12:56 compute-0 ceph-mon[74295]: pgmap v1450: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Oct 07 14:12:56 compute-0 nova_compute[259550]: 2025-10-07 14:12:56.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:56 compute-0 podman[309786]: 2025-10-07 14:12:56.095554101 +0000 UTC m=+0.074335265 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid)
Oct 07 14:12:56 compute-0 podman[309785]: 2025-10-07 14:12:56.101002205 +0000 UTC m=+0.084422220 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:12:56 compute-0 ovn_controller[151684]: 2025-10-07T14:12:56Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:73:59 10.100.0.5
Oct 07 14:12:56 compute-0 ovn_controller[151684]: 2025-10-07T14:12:56Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:73:59 10.100.0.5
Oct 07 14:12:57 compute-0 nova_compute[259550]: 2025-10-07 14:12:57.355 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:12:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 07 14:12:58 compute-0 kernel: tap75d3896e-b0 (unregistering): left promiscuous mode
Oct 07 14:12:58 compute-0 NetworkManager[44949]: <info>  [1759846378.2251] device (tap75d3896e-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:12:58 compute-0 ovn_controller[151684]: 2025-10-07T14:12:58Z|00304|binding|INFO|Releasing lport 75d3896e-b08f-4485-b4d7-dff914242597 from this chassis (sb_readonly=0)
Oct 07 14:12:58 compute-0 ovn_controller[151684]: 2025-10-07T14:12:58Z|00305|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 down in Southbound
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 ovn_controller[151684]: 2025-10-07T14:12:58Z|00306|binding|INFO|Removing iface tap75d3896e-b0 ovn-installed in OVS
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.248 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.250 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.251 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a5d5bd-fa7c-42d3-a54e-ddd74fe7a0a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.255 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 07 14:12:58 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 13.308s CPU time.
Oct 07 14:12:58 compute-0 systemd-machined[214580]: Machine qemu-47-instance-0000002a terminated.
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : haproxy version is 2.8.14-c23fe91
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : path to executable is /usr/sbin/haproxy
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : Exiting Master process...
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : Exiting Master process...
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [ALERT]    (309241) : Current worker (309243) exited with code 143 (Terminated)
Oct 07 14:12:58 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : All workers exited. Exiting... (0)
Oct 07 14:12:58 compute-0 systemd[1]: libpod-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope: Deactivated successfully.
Oct 07 14:12:58 compute-0 podman[309844]: 2025-10-07 14:12:58.429445699 +0000 UTC m=+0.057676546 container died e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33-userdata-shm.mount: Deactivated successfully.
Oct 07 14:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd8bb89be17d12218ba71ac6eeda91b03262ee72f2eb23233a59ffdba88634a-merged.mount: Deactivated successfully.
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 podman[309844]: 2025-10-07 14:12:58.485994746 +0000 UTC m=+0.114225583 container cleanup e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.492 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance shutdown successfully after 13 seconds.
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.499 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.
Oct 07 14:12:58 compute-0 systemd[1]: libpod-conmon-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope: Deactivated successfully.
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.505 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.508 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:44Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.509 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.509 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.510 2 DEBUG os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75d3896e-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.521 2 INFO os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')
Oct 07 14:12:58 compute-0 podman[309882]: 2025-10-07 14:12:58.565860046 +0000 UTC m=+0.050688015 container remove e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.573 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf03bfde-e4be-4ab8-ae6b-b6595ac2d596]: (4, ('Tue Oct  7 02:12:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33)\ne4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33\nTue Oct  7 02:12:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33)\ne4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.577 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7954e2ea-2a13-412d-b415-93f01412d241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.579 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:12:58 compute-0 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 nova_compute[259550]: 2025-10-07 14:12:58.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[709244a7-7e0f-4cf4-80ec-c29eb10f0082]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[526e4d05-997c-490a-9188-7cc7d05edd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.638 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa952128-dcc0-4cbb-848c-c4215653e030]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[164c8782-fc65-4ce8-b7fe-70d299de46cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694165, 'reachable_time': 40405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309916, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.663 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:12:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.663 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc12bc5-a147-4a89-a97d-802285888138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:12:58 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct 07 14:12:58 compute-0 ceph-mon[74295]: pgmap v1451: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 07 14:12:59 compute-0 sudo[309918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.063 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting instance files /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.064 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deletion of /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del complete
Oct 07 14:12:59 compute-0 sudo[309918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:12:59 compute-0 sudo[309918]: pam_unix(sudo:session): session closed for user root
Oct 07 14:12:59 compute-0 sudo[309943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:12:59 compute-0 sudo[309943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:12:59 compute-0 sudo[309943]: pam_unix(sudo:session): session closed for user root
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.186 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.187 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating image(s)
Oct 07 14:12:59 compute-0 sudo[309968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:12:59 compute-0 sudo[309968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:12:59 compute-0 sudo[309968]: pam_unix(sudo:session): session closed for user root
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.212 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.248 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:59 compute-0 sudo[310011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.279 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:59 compute-0 sudo[310011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.287 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:59 compute-0 ovn_controller[151684]: 2025-10-07T14:12:59Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:78:88 10.100.0.8
Oct 07 14:12:59 compute-0 ovn_controller[151684]: 2025-10-07T14:12:59Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:78:88 10.100.0.8
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.411 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.414 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.415 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.415 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.439 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.444 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 205 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.0 MiB/s wr, 261 op/s
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.757 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.831 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:12:59 compute-0 sudo[310011]: pam_unix(sudo:session): session closed for user root
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:12:59 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7b69cba8-b224-43b1-95fa-04df2af0c8e6 does not exist
Oct 07 14:12:59 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 124944d5-c723-4cfe-b8e3-5ab9595b6cf1 does not exist
Oct 07 14:12:59 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 909b8cf6-5f90-47da-a60d-3e5dee2a4673 does not exist
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:12:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:12:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.924 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.925 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ensure instance console log exists: /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.925 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.926 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.926 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.930 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start _get_guest_xml network_info=[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.936 2 WARNING nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.946 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.946 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.950 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.950 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:12:59 compute-0 nova_compute[259550]: 2025-10-07 14:12:59.970 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:12:59 compute-0 sudo[310214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:12:59 compute-0 sudo[310214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:12:59 compute-0 sudo[310214]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.006 2 DEBUG nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.006 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.010 2 WARNING nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:13:00 compute-0 sudo[310240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:13:00 compute-0 sudo[310240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:00 compute-0 sudo[310240]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:00 compute-0 sudo[310265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:00 compute-0 sudo[310265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:00 compute-0 sudo[310265]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:00 compute-0 sudo[310309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:13:00 compute-0 sudo[310309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.292 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.293 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.312 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.403 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.404 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.412 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.413 2 INFO nova.compute.claims [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:13:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359051724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.438 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.438 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.451 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.478 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.484 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.540 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.544072924 +0000 UTC m=+0.046861653 container create 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:13:00 compute-0 systemd[1]: Started libpod-conmon-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope.
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.613 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.520463734 +0000 UTC m=+0.023252493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:00 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.665462944 +0000 UTC m=+0.168251693 container init 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.676124155 +0000 UTC m=+0.178912884 container start 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:13:00 compute-0 quizzical_noether[310410]: 167 167
Oct 07 14:13:00 compute-0 systemd[1]: libpod-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope: Deactivated successfully.
Oct 07 14:13:00 compute-0 conmon[310410]: conmon 43352a835a78b233acad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope/container/memory.events
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.68427274 +0000 UTC m=+0.187061469 container attach 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.685042919 +0000 UTC m=+0.187831648 container died 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.697 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-597113cf9638aa70855daf7d775d33103ba228e1ed19fb47cc0f4cfc05b82b54-merged.mount: Deactivated successfully.
Oct 07 14:13:00 compute-0 podman[310393]: 2025-10-07 14:13:00.732205579 +0000 UTC m=+0.234994308 container remove 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:13:00 compute-0 ceph-mon[74295]: pgmap v1452: 305 pgs: 305 active+clean; 205 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.0 MiB/s wr, 261 op/s
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:13:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3359051724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:00 compute-0 systemd[1]: libpod-conmon-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope: Deactivated successfully.
Oct 07 14:13:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:00 compute-0 podman[310472]: 2025-10-07 14:13:00.931739814 +0000 UTC m=+0.048024234 container create 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:13:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657947331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.983 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.988 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:59Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.988 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.990 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:00 compute-0 systemd[1]: Started libpod-conmon-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope.
Oct 07 14:13:00 compute-0 nova_compute[259550]: 2025-10-07 14:13:00.995 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <uuid>4e86b418-6e7f-4e2e-9146-a847920ed11f</uuid>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <name>instance-0000002a</name>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1086034539</nova:name>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:12:59</nova:creationTime>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <nova:port uuid="75d3896e-b08f-4485-b4d7-dff914242597">
Oct 07 14:13:00 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="serial">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="uuid">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk">
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config">
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:00 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a0:f3:88"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <target dev="tap75d3896e-b0"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log" append="off"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:00 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:00 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:00 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:00 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:00 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.001 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Preparing to wait for external event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.002 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.002 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.003 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.004 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:59Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:01 compute-0 podman[310472]: 2025-10-07 14:13:00.911098122 +0000 UTC m=+0.027382542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.004 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.006 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.007 2 DEBUG os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75d3896e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75d3896e-b0, col_values=(('external_ids', {'iface-id': '75d3896e-b08f-4485-b4d7-dff914242597', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f3:88', 'vm-uuid': '4e86b418-6e7f-4e2e-9146-a847920ed11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:01 compute-0 NetworkManager[44949]: <info>  [1759846381.0187] manager: (tap75d3896e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.032 2 INFO os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')
Oct 07 14:13:01 compute-0 podman[310472]: 2025-10-07 14:13:01.05447197 +0000 UTC m=+0.170756420 container init 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:13:01 compute-0 podman[310472]: 2025-10-07 14:13:01.065026928 +0000 UTC m=+0.181311348 container start 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:13:01 compute-0 podman[310472]: 2025-10-07 14:13:01.06966627 +0000 UTC m=+0.185950720 container attach 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.092 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.093 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.093 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:a0:f3:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.093 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Using config drive
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.118 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.137 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.165 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'keypairs' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534977440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.202 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.213 2 DEBUG nova.compute.provider_tree [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.228 2 DEBUG nova.scheduler.client.report [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.248 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.248 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.250 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.259 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.259 2 INFO nova.compute.claims [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.305 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.305 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.326 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.344 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.435 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.439 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.440 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating image(s)
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.462 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.489 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.513 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.518 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.555 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.595 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.598 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.599 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.599 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.623 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.627 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f563ffb7-1ade-4b71-ab68-115322eef141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 249 op/s
Oct 07 14:13:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/657947331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/534977440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.851 2 DEBUG nova.policy [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ba82e1a9f12417391d78758ae9bb83c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f5ee4e560ed4660a6685a086282a370', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.912 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating config drive at /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config
Oct 07 14:13:01 compute-0 nova_compute[259550]: 2025-10-07 14:13:01.919 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2ak3g_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:01 compute-0 anacron[4276]: Job `cron.monthly' started
Oct 07 14:13:01 compute-0 anacron[4276]: Job `cron.monthly' terminated
Oct 07 14:13:01 compute-0 anacron[4276]: Normal exit (3 jobs run)
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.029 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f563ffb7-1ade-4b71-ab68-115322eef141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594847103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.090 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.091 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2ak3g_o" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.117 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.122 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:02 compute-0 zen_cerf[310490]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.162 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] resizing rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:13:02 compute-0 zen_cerf[310490]: --> relative data size: 1.0
Oct 07 14:13:02 compute-0 zen_cerf[310490]: --> All data devices are unavailable
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.199 2 DEBUG nova.compute.manager [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:02 compute-0 systemd[1]: libpod-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Deactivated successfully.
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.199 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:02 compute-0 systemd[1]: libpod-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Consumed 1.049s CPU time.
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG nova.compute.manager [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Processing event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.204 2 DEBUG nova.compute.provider_tree [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.221 2 DEBUG nova.scheduler.client.report [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:02 compute-0 podman[310743]: 2025-10-07 14:13:02.252490301 +0000 UTC m=+0.035039372 container died 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.274 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.275 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.288 2 DEBUG nova.objects.instance [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'migration_context' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716-merged.mount: Deactivated successfully.
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.302 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.302 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Ensure instance console log exists: /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.305 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.306 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting local config drive /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config because it was imported into RBD.
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.325 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.325 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:02 compute-0 podman[310743]: 2025-10-07 14:13:02.34418385 +0000 UTC m=+0.126732901 container remove 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.343 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:13:02 compute-0 systemd[1]: libpod-conmon-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Deactivated successfully.
Oct 07 14:13:02 compute-0 kernel: tap75d3896e-b0: entered promiscuous mode
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.3663] manager: (tap75d3896e-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Oct 07 14:13:02 compute-0 ovn_controller[151684]: 2025-10-07T14:13:02Z|00307|binding|INFO|Claiming lport 75d3896e-b08f-4485-b4d7-dff914242597 for this chassis.
Oct 07 14:13:02 compute-0 ovn_controller[151684]: 2025-10-07T14:13:02Z|00308|binding|INFO|75d3896e-b08f-4485-b4d7-dff914242597: Claiming fa:16:3e:a0:f3:88 10.100.0.7
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.363 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.379 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.380 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.382 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:02 compute-0 ovn_controller[151684]: 2025-10-07T14:13:02Z|00309|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 ovn-installed in OVS
Oct 07 14:13:02 compute-0 ovn_controller[151684]: 2025-10-07T14:13:02Z|00310|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 up in Southbound
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[604aa7d3-49e0-44cf-94f1-91904cfc5846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.395 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.397 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.397 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf4ef29-2715-449e-a751-1d5232f13bc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 sudo[310309]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79024905-b615-4739-aa6c-1b1741060fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 systemd-machined[214580]: New machine qemu-50-instance-0000002a.
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.420 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b4135240-2667-4a90-a4b5-218d1e699c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002a.
Oct 07 14:13:02 compute-0 systemd-udevd[310819]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24333de2-8fe0-4a1c-acc2-f1a336d0649b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.4646] device (tap75d3896e-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.4657] device (tap75d3896e-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.474 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:13:02 compute-0 sudo[310800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.475 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.476 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating image(s)
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.475 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4eb5f8-1eab-4db1-ab58-377b78407c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 sudo[310800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:02 compute-0 systemd-udevd[310831]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.4884] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Oct 07 14:13:02 compute-0 sudo[310800]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46ef57e0-b4ff-42b3-9230-7ef019928e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.537 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1c13b0-af8d-45f1-aa40-7562d15e2358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.543 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3656f1b8-72a1-4575-a464-de31cb1c2a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 sudo[310846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:13:02 compute-0 sudo[310846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:02 compute-0 sudo[310846]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.5773] device (tapd2cb8ca0-10): carrier: link connected
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.578 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.585 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39e950f7-b167-43e2-a1f4-65fcbbc63b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95acf9d1-2cff-45fc-b915-9b45d11dbb82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696614, 'reachable_time': 38955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310931, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34c51176-235b-4234-b87b-43ece3d67019]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696614, 'tstamp': 696614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310957, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.623 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:02 compute-0 sudo[310915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:02 compute-0 sudo[310915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:02 compute-0 sudo[310915]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.634 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f166128f-3e7e-4561-a31f-8dac3de4b1af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696614, 'reachable_time': 38955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310962, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.674 2 DEBUG nova.policy [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.681 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Successfully created port: 4f1564c5-865b-45e8-afe1-7f7c3748c854 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.684 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d5e848-c35d-4747-89dd-bae662c75b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:02 compute-0 sudo[310964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:13:02 compute-0 sudo[310964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.725 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.726 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.727 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.727 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.755 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:02 compute-0 ceph-mon[74295]: pgmap v1453: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 249 op/s
Oct 07 14:13:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3594847103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:02 compute-0 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a53be-32b7-4664-b179-081ac930919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:02 compute-0 NetworkManager[44949]: <info>  [1759846382.7704] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.772 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:02 compute-0 ovn_controller[151684]: 2025-10-07T14:13:02Z|00311|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.775 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.794 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.795 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e7e2aa-4d70-419b-9c80-011acff661f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.796 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:13:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.797 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:13:02 compute-0 nova_compute[259550]: 2025-10-07 14:13:02.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.089513582 +0000 UTC m=+0.046319339 container create 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 07 14:13:03 compute-0 systemd[1]: Started libpod-conmon-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope.
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.128 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.067322569 +0000 UTC m=+0.024128346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.168895808 +0000 UTC m=+0.125701575 container init 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.178581273 +0000 UTC m=+0.135387030 container start 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.18304048 +0000 UTC m=+0.139846237 container attach 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:13:03 compute-0 happy_cerf[311146]: 167 167
Oct 07 14:13:03 compute-0 systemd[1]: libpod-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope: Deactivated successfully.
Oct 07 14:13:03 compute-0 conmon[311146]: conmon 23016269311c43f82bb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope/container/memory.events
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.189290005 +0000 UTC m=+0.146095762 container died 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.205 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-32fd851fe58f29bf7b02f6bc60dc220c420964c1f270a7ab9ad59c64c3e0320b-merged.mount: Deactivated successfully.
Oct 07 14:13:03 compute-0 podman[311116]: 2025-10-07 14:13:03.23019702 +0000 UTC m=+0.187002777 container remove 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:13:03 compute-0 systemd[1]: libpod-conmon-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope: Deactivated successfully.
Oct 07 14:13:03 compute-0 podman[311170]: 2025-10-07 14:13:03.251475129 +0000 UTC m=+0.076095971 container create 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:13:03 compute-0 systemd[1]: Started libpod-conmon-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope.
Oct 07 14:13:03 compute-0 podman[311170]: 2025-10-07 14:13:03.211385606 +0000 UTC m=+0.036006468 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4717fac25a34c640330119c1bfd29f795e79bb034362e253f351ada395c9189e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.352 2 DEBUG nova.objects.instance [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:03 compute-0 podman[311170]: 2025-10-07 14:13:03.354731953 +0000 UTC m=+0.179352795 container init 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:13:03 compute-0 podman[311170]: 2025-10-07 14:13:03.360728391 +0000 UTC m=+0.185349233 container start 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.376 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.377 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Ensure instance console log exists: /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:03 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : New worker (311271) forked
Oct 07 14:13:03 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : Loading success.
Oct 07 14:13:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:03.426 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:13:03 compute-0 podman[311269]: 2025-10-07 14:13:03.444422671 +0000 UTC m=+0.051293870 container create d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:13:03 compute-0 systemd[1]: Started libpod-conmon-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope.
Oct 07 14:13:03 compute-0 podman[311269]: 2025-10-07 14:13:03.421381586 +0000 UTC m=+0.028252815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.555 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.556 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4e86b418-6e7f-4e2e-9146-a847920ed11f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.557 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5547528, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.557 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Started (Lifecycle Event)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.566 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:13:03 compute-0 podman[311269]: 2025-10-07 14:13:03.572123037 +0000 UTC m=+0.178994326 container init d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.574 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance spawned successfully.
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.574 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.578 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.582 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:03 compute-0 podman[311269]: 2025-10-07 14:13:03.583496116 +0000 UTC m=+0.190367315 container start d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 14:13:03 compute-0 podman[311269]: 2025-10-07 14:13:03.587605384 +0000 UTC m=+0.194476633 container attach d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.588 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully created port: f208f539-2cf1-4007-8806-5b4836d43c4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.594 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.596 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.596 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.601 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.601 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5551705, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.602 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Paused (Lifecycle Event)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.633 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.637 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5637846, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Resumed (Lifecycle Event)
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.662 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.663 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.669 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 6.7 MiB/s wr, 219 op/s
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.681 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Successfully updated port: 4f1564c5-865b-45e8-afe1-7f7c3748c854 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.712 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.750 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.752 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.752 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG nova.compute.manager [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG nova.compute.manager [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.813 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:03 compute-0 nova_compute[259550]: 2025-10-07 14:13:03.849 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 WARNING nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.232 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.232 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:04 compute-0 nova_compute[259550]: 2025-10-07 14:13:04.232 2 WARNING nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.
Oct 07 14:13:04 compute-0 angry_golick[311294]: {
Oct 07 14:13:04 compute-0 angry_golick[311294]:     "0": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:         {
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "devices": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "/dev/loop3"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             ],
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_name": "ceph_lv0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_size": "21470642176",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "name": "ceph_lv0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "tags": {
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_name": "ceph",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.crush_device_class": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.encrypted": "0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_id": "0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.vdo": "0"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             },
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "vg_name": "ceph_vg0"
Oct 07 14:13:04 compute-0 angry_golick[311294]:         }
Oct 07 14:13:04 compute-0 angry_golick[311294]:     ],
Oct 07 14:13:04 compute-0 angry_golick[311294]:     "1": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:         {
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "devices": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "/dev/loop4"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             ],
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_name": "ceph_lv1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_size": "21470642176",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "name": "ceph_lv1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "tags": {
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_name": "ceph",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.crush_device_class": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.encrypted": "0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_id": "1",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.vdo": "0"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             },
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "vg_name": "ceph_vg1"
Oct 07 14:13:04 compute-0 angry_golick[311294]:         }
Oct 07 14:13:04 compute-0 angry_golick[311294]:     ],
Oct 07 14:13:04 compute-0 angry_golick[311294]:     "2": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:         {
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "devices": [
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "/dev/loop5"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             ],
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_name": "ceph_lv2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_size": "21470642176",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "name": "ceph_lv2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "tags": {
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.cluster_name": "ceph",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.crush_device_class": "",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.encrypted": "0",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osd_id": "2",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:                 "ceph.vdo": "0"
Oct 07 14:13:04 compute-0 angry_golick[311294]:             },
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "type": "block",
Oct 07 14:13:04 compute-0 angry_golick[311294]:             "vg_name": "ceph_vg2"
Oct 07 14:13:04 compute-0 angry_golick[311294]:         }
Oct 07 14:13:04 compute-0 angry_golick[311294]:     ]
Oct 07 14:13:04 compute-0 angry_golick[311294]: }
Oct 07 14:13:04 compute-0 systemd[1]: libpod-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope: Deactivated successfully.
Oct 07 14:13:04 compute-0 podman[311269]: 2025-10-07 14:13:04.459664527 +0000 UTC m=+1.066535726 container died d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1-merged.mount: Deactivated successfully.
Oct 07 14:13:04 compute-0 podman[311269]: 2025-10-07 14:13:04.534460164 +0000 UTC m=+1.141331363 container remove d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:13:04 compute-0 systemd[1]: libpod-conmon-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope: Deactivated successfully.
Oct 07 14:13:04 compute-0 sudo[310964]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:04 compute-0 sudo[311316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:04 compute-0 sudo[311316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:04 compute-0 sudo[311316]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:04 compute-0 sudo[311341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:13:04 compute-0 sudo[311341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:04 compute-0 sudo[311341]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:04 compute-0 ceph-mon[74295]: pgmap v1454: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 6.7 MiB/s wr, 219 op/s
Oct 07 14:13:04 compute-0 sudo[311366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:04 compute-0 sudo[311366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:04 compute-0 sudo[311366]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:04 compute-0 sudo[311391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:13:04 compute-0 sudo[311391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.233767705 +0000 UTC m=+0.053495957 container create fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:13:05 compute-0 systemd[1]: Started libpod-conmon-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope.
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.211820038 +0000 UTC m=+0.031548310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.339787562 +0000 UTC m=+0.159515844 container init fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.350701088 +0000 UTC m=+0.170429340 container start fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.354452517 +0000 UTC m=+0.174180769 container attach fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:13:05 compute-0 ecstatic_wescoff[311472]: 167 167
Oct 07 14:13:05 compute-0 systemd[1]: libpod-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope: Deactivated successfully.
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.359786917 +0000 UTC m=+0.179515169 container died fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0260dac6d8e0633de804a1b2c6476edbbeb14ccbb1d01272924ce9407e983279-merged.mount: Deactivated successfully.
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.396 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:05 compute-0 podman[311456]: 2025-10-07 14:13:05.405410527 +0000 UTC m=+0.225138779 container remove fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.416 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.416 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance network_info: |[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.417 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.417 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.419 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start _get_guest_xml network_info=[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.432 2 WARNING nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:05 compute-0 systemd[1]: libpod-conmon-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope: Deactivated successfully.
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.438 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.439 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.448 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.452 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.452 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.455 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:05 compute-0 podman[311498]: 2025-10-07 14:13:05.639658624 +0000 UTC m=+0.065055262 container create 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 MiB/s wr, 322 op/s
Oct 07 14:13:05 compute-0 systemd[1]: Started libpod-conmon-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope.
Oct 07 14:13:05 compute-0 podman[311498]: 2025-10-07 14:13:05.612199772 +0000 UTC m=+0.037596450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:13:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:05 compute-0 podman[311498]: 2025-10-07 14:13:05.762202505 +0000 UTC m=+0.187599173 container init 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:13:05 compute-0 podman[311498]: 2025-10-07 14:13:05.771466678 +0000 UTC m=+0.196863326 container start 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:13:05 compute-0 podman[311498]: 2025-10-07 14:13:05.775463924 +0000 UTC m=+0.200860562 container attach 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.848 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully updated port: f208f539-2cf1-4007-8806-5b4836d43c4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.864 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.865 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.865 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467823024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.951 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.975 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:05 compute-0 nova_compute[259550]: 2025-10-07 14:13:05.980 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.014 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.207 2 DEBUG nova.compute.manager [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.208 2 DEBUG nova.compute.manager [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.208 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.357 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.360 2 INFO nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Terminating instance
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.361 2 DEBUG nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:06 compute-0 kernel: tap75d3896e-b0 (unregistering): left promiscuous mode
Oct 07 14:13:06 compute-0 NetworkManager[44949]: <info>  [1759846386.4105] device (tap75d3896e-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094748141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 ovn_controller[151684]: 2025-10-07T14:13:06Z|00312|binding|INFO|Releasing lport 75d3896e-b08f-4485-b4d7-dff914242597 from this chassis (sb_readonly=0)
Oct 07 14:13:06 compute-0 ovn_controller[151684]: 2025-10-07T14:13:06Z|00313|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 down in Southbound
Oct 07 14:13:06 compute-0 ovn_controller[151684]: 2025-10-07T14:13:06Z|00314|binding|INFO|Removing iface tap75d3896e-b0 ovn-installed in OVS
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.488 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.490 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.492 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8032fa-ff01-4277-b79a-9b347a853779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.496 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.508 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.510 2 DEBUG nova.virt.libvirt.vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-
1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:01Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.510 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.511 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.513 2 DEBUG nova.objects.instance [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.528 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <uuid>f563ffb7-1ade-4b71-ab68-115322eef141</uuid>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <name>instance-0000002d</name>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1638787156</nova:name>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:13:05</nova:creationTime>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <nova:port uuid="4f1564c5-865b-45e8-afe1-7f7c3748c854">
Oct 07 14:13:06 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="serial">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="uuid">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk">
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk.config">
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6d:76:5c"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <target dev="tap4f1564c5-86"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log" append="off"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:06 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:06 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Preparing to wait for external event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.530 2 DEBUG nova.virt.libvirt.vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:01Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.531 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.532 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.532 2 DEBUG os_vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:06 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 07 14:13:06 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002a.scope: Consumed 3.731s CPU time.
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f1564c5-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:06 compute-0 systemd-machined[214580]: Machine qemu-50-instance-0000002a terminated.
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f1564c5-86, col_values=(('external_ids', {'iface-id': '4f1564c5-865b-45e8-afe1-7f7c3748c854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:76:5c', 'vm-uuid': 'f563ffb7-1ade-4b71-ab68-115322eef141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 NetworkManager[44949]: <info>  [1759846386.5432] manager: (tap4f1564c5-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.555 2 INFO os_vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.605 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.606 2 DEBUG nova.objects.instance [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.619 2 DEBUG nova.virt.libvirt.vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:03Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG nova.network.os_vif_util [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG nova.network.os_vif_util [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG os_vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75d3896e-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.637 2 INFO os_vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.655 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.660 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.662 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.666 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.667 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.667 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No VIF found with MAC fa:16:3e:6d:76:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.667 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Using config drive
Oct 07 14:13:06 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:06 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:06 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [WARNING]  (311263) : Exiting Master process...
Oct 07 14:13:06 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [ALERT]    (311263) : Current worker (311271) exited with code 143 (Terminated)
Oct 07 14:13:06 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [WARNING]  (311263) : All workers exited. Exiting... (0)
Oct 07 14:13:06 compute-0 systemd[1]: libpod-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope: Deactivated successfully.
Oct 07 14:13:06 compute-0 podman[311630]: 2025-10-07 14:13:06.699662287 +0000 UTC m=+0.059017352 container died 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.702 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.711 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.712 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance network_info: |[{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.717 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.718 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.723 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start _get_guest_xml network_info=[{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.737 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.738 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4717fac25a34c640330119c1bfd29f795e79bb034362e253f351ada395c9189e-merged.mount: Deactivated successfully.
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.756 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.762 2 WARNING nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.768 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.768 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.772 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.778 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:06 compute-0 ceph-mon[74295]: pgmap v1455: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 MiB/s wr, 322 op/s
Oct 07 14:13:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/467823024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2094748141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:06 compute-0 podman[311630]: 2025-10-07 14:13:06.789633081 +0000 UTC m=+0.148988146 container cleanup 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:13:06 compute-0 systemd[1]: libpod-conmon-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope: Deactivated successfully.
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]: {
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_id": 2,
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "type": "bluestore"
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     },
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_id": 1,
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "type": "bluestore"
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     },
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_id": 0,
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:         "type": "bluestore"
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]:     }
Oct 07 14:13:06 compute-0 quizzical_keldysh[311533]: }
Oct 07 14:13:06 compute-0 podman[311709]: 2025-10-07 14:13:06.876525345 +0000 UTC m=+0.056769282 container remove 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.887 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b567cdb-6e11-45f2-9944-754def1ca74d]: (4, ('Tue Oct  7 02:13:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f)\n8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f\nTue Oct  7 02:13:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f)\n8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.890 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6863cc61-c793-4fa2-b600-b0f44876cbe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.892 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:06 compute-0 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 systemd[1]: libpod-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Deactivated successfully.
Oct 07 14:13:06 compute-0 systemd[1]: libpod-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Consumed 1.076s CPU time.
Oct 07 14:13:06 compute-0 podman[311498]: 2025-10-07 14:13:06.917078721 +0000 UTC m=+1.342475399 container died 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.927 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d679649a-13da-47ab-b511-5f5f0779a7c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066-merged.mount: Deactivated successfully.
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.958 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f385d32-0d7e-42f4-b998-9aab082032cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.961 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9f1ba7-754d-4ce6-b8b3-9b3449bcf227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33c3a5c0-4750-48f3-8a86-0646869d2585]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696603, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311764, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.988 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.988 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[36397058-2157-4d8c-aea1-43d3c1c631f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:06 compute-0 podman[311498]: 2025-10-07 14:13:06.991425245 +0000 UTC m=+1.416821893 container remove 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.992 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating config drive at /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config
Oct 07 14:13:06 compute-0 nova_compute[259550]: 2025-10-07 14:13:06.999 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcn7idot execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:07 compute-0 systemd[1]: libpod-conmon-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Deactivated successfully.
Oct 07 14:13:07 compute-0 sudo[311391]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:13:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:13:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:13:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:13:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 146635f0-3f06-409c-b19b-9c06b8ceeda6 does not exist
Oct 07 14:13:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 634f47c4-ba0c-42cf-b799-9c318d6238cc does not exist
Oct 07 14:13:07 compute-0 sudo[311769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:13:07 compute-0 sudo[311769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:07 compute-0 sudo[311769]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.154 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcn7idot" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.189 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.194 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config f563ffb7-1ade-4b71-ab68-115322eef141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:07 compute-0 sudo[311794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:13:07 compute-0 sudo[311794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:13:07 compute-0 sudo[311794]: pam_unix(sudo:session): session closed for user root
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.252 2 INFO nova.virt.libvirt.driver [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting instance files /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.254 2 INFO nova.virt.libvirt.driver [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deletion of /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del complete
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.316 2 INFO nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG oslo.service.loopingcall [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG nova.network.neutron [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.391 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config f563ffb7-1ade-4b71-ab68-115322eef141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.391 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deleting local config drive /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config because it was imported into RBD.
Oct 07 14:13:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848916736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.427 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.453 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:07 compute-0 kernel: tap4f1564c5-86: entered promiscuous mode
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.459 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:07 compute-0 systemd-udevd[311584]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:07 compute-0 NetworkManager[44949]: <info>  [1759846387.4620] manager: (tap4f1564c5-86): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Oct 07 14:13:07 compute-0 ovn_controller[151684]: 2025-10-07T14:13:07Z|00315|binding|INFO|Claiming lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 for this chassis.
Oct 07 14:13:07 compute-0 ovn_controller[151684]: 2025-10-07T14:13:07Z|00316|binding|INFO|4f1564c5-865b-45e8-afe1-7f7c3748c854: Claiming fa:16:3e:6d:76:5c 10.100.0.3
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.471 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.472 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.474 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:13:07 compute-0 NetworkManager[44949]: <info>  [1759846387.4837] device (tap4f1564c5-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:07 compute-0 ovn_controller[151684]: 2025-10-07T14:13:07Z|00317|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 ovn-installed in OVS
Oct 07 14:13:07 compute-0 ovn_controller[151684]: 2025-10-07T14:13:07Z|00318|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 up in Southbound
Oct 07 14:13:07 compute-0 NetworkManager[44949]: <info>  [1759846387.4883] device (tap4f1564c5-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30c49ccf-2f07-4a77-96a8-993ce9a7cfb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 systemd-machined[214580]: New machine qemu-51-instance-0000002d.
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:07 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002d.
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.549 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1038fc-6296-4d6a-8788-191cc6cecb32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[657ad1f0-a87d-4bc8-ae2c-ebca767d776d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.591 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[afebd86e-1a73-478c-9cd1-db5d83e05cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6df1de18-cde9-4748-8d26-3c5a9cf5b339]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311902, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef6c550-784a-4cab-aa24-6945a0f6f015]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311921, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311921, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.645 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.649 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.649 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 9.6 MiB/s wr, 258 op/s
Oct 07 14:13:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760427212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.952 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.954 2 DEBUG nova.virt.libvirt.vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.954 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.955 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.956 2 DEBUG nova.objects.instance [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.991 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <name>instance-0000002e</name>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:13:06</nova:creationTime>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:07 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="serial">b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="uuid">b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk">
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config">
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:07 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:5f:75:d2"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <target dev="tapf208f539-2c"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log" append="off"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:07 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:07 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:07 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:07 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:07 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.992 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Preparing to wait for external event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.992 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG nova.virt.libvirt.vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.994 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.994 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG os_vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:07 compute-0 nova_compute[259550]: 2025-10-07 14:13:07.996 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf208f539-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf208f539-2c, col_values=(('external_ids', {'iface-id': 'f208f539-2cf1-4007-8806-5b4836d43c4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:75:d2', 'vm-uuid': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:08 compute-0 NetworkManager[44949]: <info>  [1759846388.0032] manager: (tapf208f539-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.010 2 INFO os_vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c')
Oct 07 14:13:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:13:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:13:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1848916736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:08 compute-0 ceph-mon[74295]: pgmap v1456: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 9.6 MiB/s wr, 258 op/s
Oct 07 14:13:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/760427212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:5f:75:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.069 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Using config drive
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.094 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.485 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846388.4849563, f563ffb7-1ade-4b71-ab68-115322eef141 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.486 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Started (Lifecycle Event)
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.501 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.507 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846388.4850912, f563ffb7-1ade-4b71-ab68-115322eef141 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.508 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Paused (Lifecycle Event)
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.520 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.523 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.526 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.541 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.775 2 DEBUG nova.network.neutron [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.780 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating config drive at /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.789 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ngyr8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.891 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.892 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.894 2 INFO nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 1.58 seconds to deallocate network for instance.
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.920 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.941 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.941 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.953 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ngyr8u" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.979 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:08 compute-0 nova_compute[259550]: 2025-10-07 14:13:08.983 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:09 compute-0 podman[312007]: 2025-10-07 14:13:09.107863767 +0000 UTC m=+0.083171077 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.132 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 WARNING nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state deleted and task_state None.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Processing event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.136 2 WARNING nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state building and task_state spawning.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.136 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.141 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.142 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846389.1409862, f563ffb7-1ade-4b71-ab68-115322eef141 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.142 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Resumed (Lifecycle Event)
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.148 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance spawned successfully.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.149 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.175 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.175 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deleting local config drive /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config because it was imported into RBD.
Oct 07 14:13:09 compute-0 podman[312009]: 2025-10-07 14:13:09.176615505 +0000 UTC m=+0.151448192 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.190 2 DEBUG oslo_concurrency.processutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:09 compute-0 kernel: tapf208f539-2c: entered promiscuous mode
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.2419] manager: (tapf208f539-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Oct 07 14:13:09 compute-0 ovn_controller[151684]: 2025-10-07T14:13:09Z|00319|binding|INFO|Claiming lport f208f539-2cf1-4007-8806-5b4836d43c4f for this chassis.
Oct 07 14:13:09 compute-0 ovn_controller[151684]: 2025-10-07T14:13:09Z|00320|binding|INFO|f208f539-2cf1-4007-8806-5b4836d43c4f: Claiming fa:16:3e:5f:75:d2 10.100.0.12
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.2597] device (tapf208f539-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.2633] device (tapf208f539-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:75:d2 10.100.0.12'], port_security=['fa:16:3e:5f:75:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8acaacb-1526-452b-a139-17a738541bb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f208f539-2cf1-4007-8806-5b4836d43c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.272 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.273 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.273 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f208f539-2cf1-4007-8806-5b4836d43c4f in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.274 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.275 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:09 compute-0 systemd-machined[214580]: New machine qemu-52-instance-0000002e.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.291 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8ce32e-2ce4-4882-854f-a81d61b96434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.290 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.293 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.294 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78f1d8a4-046e-42f7-b88e-839bdcb18fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76925a83-6dd8-43fc-aa2d-3685962d5949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.309 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6bccdef0-3668-49f1-b1ea-1b9a4bbfe42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.325 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68794ea4-e859-4def-91a5-7d4d939bcc36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002e.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.356 2 INFO nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 7.92 seconds to spawn the instance on the hypervisor.
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.357 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 ovn_controller[151684]: 2025-10-07T14:13:09Z|00321|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f ovn-installed in OVS
Oct 07 14:13:09 compute-0 ovn_controller[151684]: 2025-10-07T14:13:09Z|00322|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f up in Southbound
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eafedfa6-62e2-4b9a-8789-f413389c2b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.3882] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff6714f-3b79-4812-95b9-23e828478f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.430 2 INFO nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 9.07 seconds to build instance.
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.433 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a44ccba-7dfd-4781-abaa-f77546d94865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.444 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b807922d-433b-4934-bb8a-615c2744db55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.447 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.4891] device (tapb1d9f332-f0): carrier: link connected
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.500 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cef718-662c-41f6-967d-fb0cdf8c342e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[532be02d-8224-483a-815d-90bc9110bfff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312136, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.549 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7282e-6c37-4628-8ae2-1959e4fea58a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697305, 'tstamp': 697305}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312137, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3115bfd9-1954-4483-9681-ea118416b68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312138, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.599 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12662821-7cab-4e04-9283-e4d03029df9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.615 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.616 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.634 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.653 2 DEBUG nova.compute.manager [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.653 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG nova.compute.manager [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Processing event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[556a01e6-0ae6-4764-983a-695f413862db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.676 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.676 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 306 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 330 op/s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.677 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 NetworkManager[44949]: <info>  [1759846389.6801] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Oct 07 14:13:09 compute-0 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.683 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:09 compute-0 ovn_controller[151684]: 2025-10-07T14:13:09Z|00323|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.704 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a54aa580-a6be-4146-a12d-a2c7e22ec0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.708 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:13:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.708 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.712 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022203406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.772 2 DEBUG oslo_concurrency.processutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.777 2 DEBUG nova.compute.provider_tree [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.796 2 DEBUG nova.scheduler.client.report [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.813 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.815 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.822 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.822 2 INFO nova.compute.claims [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.838 2 INFO nova.scheduler.client.report [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance 4e86b418-6e7f-4e2e-9146-a847920ed11f
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.908 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:09 compute-0 nova_compute[259550]: 2025-10-07 14:13:09.979 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:10 compute-0 podman[312214]: 2025-10-07 14:13:10.165714973 +0000 UTC m=+0.053436475 container create 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:13:10 compute-0 systemd[1]: Started libpod-conmon-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope.
Oct 07 14:13:10 compute-0 podman[312214]: 2025-10-07 14:13:10.138341064 +0000 UTC m=+0.026062586 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:13:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5f871018c8e56c6586c7466d5febec36105738892f6f280160d66dce07ac2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:10 compute-0 podman[312214]: 2025-10-07 14:13:10.263994547 +0000 UTC m=+0.151716079 container init 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:13:10 compute-0 podman[312214]: 2025-10-07 14:13:10.274172755 +0000 UTC m=+0.161894257 container start 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.285 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.2852232, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.286 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Started (Lifecycle Event)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.290 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.293 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : New worker (312255) forked
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : Loading success.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.302 2 INFO nova.virt.libvirt.driver [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance spawned successfully.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.303 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.309 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.314 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.324 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.325 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.325 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.326 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.327 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.327 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.333 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.333 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.2866583, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.333 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Paused (Lifecycle Event)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.363 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.366 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.293427, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.367 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Resumed (Lifecycle Event)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.388 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.393 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.399 2 INFO nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 7.92 seconds to spawn the instance on the hypervisor.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.399 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.450 2 INFO nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 9.86 seconds to build instance.
Oct 07 14:13:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501691277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.463 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.488 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.494 2 DEBUG nova.compute.provider_tree [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.514 2 DEBUG nova.scheduler.client.report [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.546 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.547 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.585 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.586 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.607 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.647 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:13:10 compute-0 ceph-mon[74295]: pgmap v1457: 305 pgs: 305 active+clean; 306 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 330 op/s
Oct 07 14:13:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1022203406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1501691277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.751 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.755 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.755 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating image(s)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.780 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:10 compute-0 kernel: tap23002fad-24 (unregistering): left promiscuous mode
Oct 07 14:13:10 compute-0 NetworkManager[44949]: <info>  [1759846390.7932] device (tap23002fad-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:10 compute-0 ovn_controller[151684]: 2025-10-07T14:13:10Z|00324|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=0)
Oct 07 14:13:10 compute-0 ovn_controller[151684]: 2025-10-07T14:13:10Z|00325|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down in Southbound
Oct 07 14:13:10 compute-0 ovn_controller[151684]: 2025-10-07T14:13:10Z|00326|binding|INFO|Removing iface tap23002fad-24 ovn-installed in OVS
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.819 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.820 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.822 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:13:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.825 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.826 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3b7c9f-aa9f-472f-a0ac-c84060bbbaf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore
Oct 07 14:13:10 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 07 14:13:10 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 14.115s CPU time.
Oct 07 14:13:10 compute-0 systemd-machined[214580]: Machine qemu-49-instance-0000002c terminated.
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.872 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.877 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.919 2 DEBUG nova.policy [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf568df6a8d461a83d287493b393589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de6794f6448744329cf2081eb5b889a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : Exiting Master process...
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : Exiting Master process...
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [ALERT]    (309773) : Current worker (309775) exited with code 143 (Terminated)
Oct 07 14:13:10 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : All workers exited. Exiting... (0)
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.974 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.975 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.975 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:10 compute-0 nova_compute[259550]: 2025-10-07 14:13:10.976 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:10 compute-0 systemd[1]: libpod-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope: Deactivated successfully.
Oct 07 14:13:10 compute-0 conmon[309769]: conmon fa49b6bc16ac5ac81553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope/container/memory.events
Oct 07 14:13:10 compute-0 podman[312343]: 2025-10-07 14:13:10.983107369 +0000 UTC m=+0.053683822 container died fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.006 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.011 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da5c37de559ab2f97b903232e4ec93e0e34fbf169fac94209f83b9bd008f0ca-merged.mount: Deactivated successfully.
Oct 07 14:13:11 compute-0 podman[312343]: 2025-10-07 14:13:11.024304013 +0000 UTC m=+0.094880466 container cleanup fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:13:11 compute-0 NetworkManager[44949]: <info>  [1759846391.0270] manager: (tap23002fad-24): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Oct 07 14:13:11 compute-0 kernel: tap23002fad-24: entered promiscuous mode
Oct 07 14:13:11 compute-0 systemd-udevd[312293]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00327|binding|INFO|Claiming lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d for this chassis.
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00328|binding|INFO|23002fad-2420-4b68-bfd7-2d90f8b5df6d: Claiming fa:16:3e:4c:78:88 10.100.0.8
Oct 07 14:13:11 compute-0 kernel: tap23002fad-24 (unregistering): left promiscuous mode
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.046 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:11 compute-0 systemd[1]: libpod-conmon-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope: Deactivated successfully.
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00329|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d ovn-installed in OVS
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00330|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d up in Southbound
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00331|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=1)
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00332|if_status|INFO|Dropped 4 log messages in last 131 seconds (most recently, 131 seconds ago) due to excessive rate
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00333|if_status|INFO|Not setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down as sb is readonly
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00334|binding|INFO|Removing iface tap23002fad-24 ovn-installed in OVS
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00335|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=0)
Oct 07 14:13:11 compute-0 ovn_controller[151684]: 2025-10-07T14:13:11Z|00336|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down in Southbound
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.080 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:11 compute-0 podman[312391]: 2025-10-07 14:13:11.119092443 +0000 UTC m=+0.064635039 container remove fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.127 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbcd5cf-1bc5-4146-9710-1ff389edc619]: (4, ('Tue Oct  7 02:13:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e)\nfa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e\nTue Oct  7 02:13:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e)\nfa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1c7f11-33da-43c3-8231-bba9fe9b51fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.130 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:11 compute-0 kernel: tap8accac57-a0: left promiscuous mode
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.205 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb028840-0d39-4567-b736-bee7febc6d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.231 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6b4af6-0216-4874-9b02-0a69298fa2f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.232 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9867a75b-99b3-4cb2-ab7a-c553b37f8d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.250 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e2e4a5-c30d-4f10-b606-33aeee76fca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694842, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312426, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.254 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.255 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[343399e3-24bc-4ebb-adb1-ec31917c6e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.255 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:13:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.260 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.267 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b029e3d0-5a81-465d-8ba0-0607e485caed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.270 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.276 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.277 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3a15ed-8a9f-46b8-bf71-28b9cbd3cb06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.439 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.510 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.551 2 INFO nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance shutdown successfully after 24 seconds.
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.564 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance destroyed successfully.
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.565 2 DEBUG nova.objects.instance [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.582 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.637 2 DEBUG nova.objects.instance [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.656 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.657 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Ensure instance console log exists: /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.671 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 271 op/s
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.735 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Successfully created port: c8a4482e-996f-427f-8071-48124530250e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-deleted-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.811 2 WARNING nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.812 2 WARNING nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.966 2 DEBUG nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.966 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:11 compute-0 nova_compute[259550]: 2025-10-07 14:13:11.967 2 WARNING nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state active and task_state None.
Oct 07 14:13:12 compute-0 NetworkManager[44949]: <info>  [1759846392.6796] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct 07 14:13:12 compute-0 NetworkManager[44949]: <info>  [1759846392.6805] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.722 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.723 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.723 2 INFO nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Rebooting instance
Oct 07 14:13:12 compute-0 ovn_controller[151684]: 2025-10-07T14:13:12Z|00337|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:13:12 compute-0 ovn_controller[151684]: 2025-10-07T14:13:12Z|00338|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.736 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.737 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.737 2 DEBUG nova.network.neutron [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:12 compute-0 ceph-mon[74295]: pgmap v1458: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 271 op/s
Oct 07 14:13:12 compute-0 ovn_controller[151684]: 2025-10-07T14:13:12Z|00339|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:13:12 compute-0 ovn_controller[151684]: 2025-10-07T14:13:12Z|00340|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.839 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Successfully updated port: c8a4482e-996f-427f-8071-48124530250e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquired lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:12 compute-0 nova_compute[259550]: 2025-10-07 14:13:12.987 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:13:13 compute-0 nova_compute[259550]: 2025-10-07 14:13:13.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.007 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.031 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Releasing lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.032 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance network_info: |[{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.034 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start _get_guest_xml network_info=[{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.040 2 WARNING nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.047 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.048 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.051 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.051 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.055 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.057 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.097 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.098 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.098 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.099 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.101 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.102 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.104 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.104 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.104 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.108 2 DEBUG nova.network.neutron [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.130 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.133 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.215 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.216 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.216 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.217 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.217 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.218 2 INFO nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Terminating instance
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.220 2 DEBUG nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.239 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance destroyed successfully.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.240 2 DEBUG nova.objects.instance [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.259 2 DEBUG nova.virt.libvirt.vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:11Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.259 2 DEBUG nova.network.os_vif_util [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.261 2 DEBUG nova.network.os_vif_util [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.261 2 DEBUG os_vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.264 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.264 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23002fad-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.274 2 INFO os_vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24')
Oct 07 14:13:14 compute-0 kernel: tap4f1564c5-86 (unregistering): left promiscuous mode
Oct 07 14:13:14 compute-0 NetworkManager[44949]: <info>  [1759846394.3043] device (tap4f1564c5-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:14 compute-0 ovn_controller[151684]: 2025-10-07T14:13:14Z|00341|binding|INFO|Releasing lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 from this chassis (sb_readonly=0)
Oct 07 14:13:14 compute-0 ovn_controller[151684]: 2025-10-07T14:13:14Z|00342|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 down in Southbound
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 ovn_controller[151684]: 2025-10-07T14:13:14Z|00343|binding|INFO|Removing iface tap4f1564c5-86 ovn-installed in OVS
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.320 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '5', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.340 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef1c248-8e4a-4129-a1f1-4715bd5c1a7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 07 14:13:14 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Consumed 5.999s CPU time.
Oct 07 14:13:14 compute-0 systemd-machined[214580]: Machine qemu-51-instance-0000002d terminated.
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.379 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83aaaead-bb55-4e73-95d4-47bb1901e9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.386 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b2729305-db62-44b0-ae79-fdccef199a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.424 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[68b6117a-e430-421a-a5a0-d40387c916d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4784cdde-ee7e-4240-a4e1-9a15c6149f23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312550, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.485 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c46f0c-567e-4a0c-80b0-298e1797f0cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312553, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312553, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.487 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.490 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance destroyed successfully.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.491 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.495 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.497 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.497 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.503 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.504 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.505 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.505 2 DEBUG os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f1564c5-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.525 2 INFO os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.533 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start _get_guest_xml network_info=[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.538 2 WARNING nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.543 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.544 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:13:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331525263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.546 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.551 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.566 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.598 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.621 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.626 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:14 compute-0 ceph-mon[74295]: pgmap v1459: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct 07 14:13:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3331525263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.909 2 INFO nova.virt.libvirt.driver [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deleting instance files /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796_del
Oct 07 14:13:14 compute-0 nova_compute[259550]: 2025-10-07 14:13:14.914 2 INFO nova.virt.libvirt.driver [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deletion of /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796_del complete
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.020 2 INFO nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.021 2 DEBUG oslo.service.loopingcall [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.021 2 DEBUG nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.022 2 DEBUG nova.network.neutron [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297930814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.057 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.099 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/553660048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.144 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.146 2 DEBUG nova.virt.libvirt.vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:10Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.147 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.148 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.149 2 DEBUG nova.objects.instance [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.165 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <uuid>cb7392fb-c42f-47e9-b661-7cbf3dfe1263</uuid>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <name>instance-0000002f</name>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-738566473</nova:name>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:13:14</nova:creationTime>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:port uuid="c8a4482e-996f-427f-8071-48124530250e">
Oct 07 14:13:15 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="serial">cb7392fb-c42f-47e9-b661-7cbf3dfe1263</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="uuid">cb7392fb-c42f-47e9-b661-7cbf3dfe1263</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:c4:4b:9c"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="tapc8a4482e-99"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/console.log" append="off"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:15 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:15 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.166 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Preparing to wait for external event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.178 2 DEBUG nova.virt.libvirt.vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:10Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.178 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.181 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.181 2 DEBUG os_vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8a4482e-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8a4482e-99, col_values=(('external_ids', {'iface-id': 'c8a4482e-996f-427f-8071-48124530250e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:4b:9c', 'vm-uuid': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 NetworkManager[44949]: <info>  [1759846395.1911] manager: (tapc8a4482e-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.196 2 INFO os_vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99')
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.265 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.266 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.266 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:c4:4b:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.266 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Using config drive
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.283 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389833081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.571 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.572 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.572 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.573 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.574 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.613 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.613 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.641 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <uuid>f563ffb7-1ade-4b71-ab68-115322eef141</uuid>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <name>instance-0000002d</name>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1638787156</nova:name>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:13:14</nova:creationTime>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <nova:port uuid="4f1564c5-865b-45e8-afe1-7f7c3748c854">
Oct 07 14:13:15 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="serial">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="uuid">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk.config">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6d:76:5c"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <target dev="tap4f1564c5-86"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log" append="off"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:15 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:15 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:15 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:15 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:15 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.642 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.642 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.644 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.644 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f1564c5-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f1564c5-86, col_values=(('external_ids', {'iface-id': '4f1564c5-865b-45e8-afe1-7f7c3748c854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:76:5c', 'vm-uuid': 'f563ffb7-1ade-4b71-ab68-115322eef141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 NetworkManager[44949]: <info>  [1759846395.6514] manager: (tap4f1564c5-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.660 2 INFO os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')
Oct 07 14:13:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.9 MiB/s wr, 372 op/s
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.709 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating config drive at /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.715 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_23wc5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:15 compute-0 kernel: tap4f1564c5-86: entered promiscuous mode
Oct 07 14:13:15 compute-0 systemd-udevd[312539]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:15 compute-0 NetworkManager[44949]: <info>  [1759846395.7316] manager: (tap4f1564c5-86): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Oct 07 14:13:15 compute-0 ovn_controller[151684]: 2025-10-07T14:13:15Z|00344|binding|INFO|Claiming lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 for this chassis.
Oct 07 14:13:15 compute-0 ovn_controller[151684]: 2025-10-07T14:13:15Z|00345|binding|INFO|4f1564c5-865b-45e8-afe1-7f7c3748c854: Claiming fa:16:3e:6d:76:5c 10.100.0.3
Oct 07 14:13:15 compute-0 NetworkManager[44949]: <info>  [1759846395.7531] device (tap4f1564c5-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:15 compute-0 NetworkManager[44949]: <info>  [1759846395.7545] device (tap4f1564c5-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.761 2 DEBUG nova.network.neutron [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3297930814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/553660048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2389833081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.764 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:15 compute-0 ovn_controller[151684]: 2025-10-07T14:13:15Z|00346|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 ovn-installed in OVS
Oct 07 14:13:15 compute-0 ovn_controller[151684]: 2025-10-07T14:13:15Z|00347|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 up in Southbound
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.776 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '6', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.782 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.785 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:13:15 compute-0 systemd-machined[214580]: New machine qemu-53-instance-0000002d.
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.797 2 INFO nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 0.78 seconds to deallocate network for instance.
Oct 07 14:13:15 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002d.
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.812 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe77bf8-a841-44c5-bd62-183b9b5ebf1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.844 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.845 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b1254faf-7b79-4614-b053-cb6f86171111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.855 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[716855b3-c5ab-4cd3-ae2e-47f14125af6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.870 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_23wc5i" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.896 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35d33956-b2e9-4015-b57a-53ab0a5baa58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.901 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.912 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.919 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2543e237-67c5-4cad-9bf8-384e4b61b098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312738, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd21eb67-daa0-4c38-a2e2-7907f26a0928]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312740, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312740, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.948 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.977 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.978 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.995 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-changed-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Refreshing instance network info cache due to event network-changed-c8a4482e-996f-427f-8071-48124530250e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.997 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:15 compute-0 nova_compute[259550]: 2025-10-07 14:13:15.997 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Refreshing network info cache for port c8a4482e-996f-427f-8071-48124530250e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.031 2 DEBUG oslo_concurrency.processutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.077 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.078 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deleting local config drive /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config because it was imported into RBD.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 kernel: tapc8a4482e-99: entered promiscuous mode
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.1429] manager: (tapc8a4482e-99): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Oct 07 14:13:16 compute-0 ovn_controller[151684]: 2025-10-07T14:13:16Z|00348|binding|INFO|Claiming lport c8a4482e-996f-427f-8071-48124530250e for this chassis.
Oct 07 14:13:16 compute-0 ovn_controller[151684]: 2025-10-07T14:13:16Z|00349|binding|INFO|c8a4482e-996f-427f-8071-48124530250e: Claiming fa:16:3e:c4:4b:9c 10.100.0.7
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.152 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.154 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.156 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.1592] device (tapc8a4482e-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.159 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.160 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.160 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.161 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.1618] device (tapc8a4482e-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.164 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.164 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:13:16 compute-0 ovn_controller[151684]: 2025-10-07T14:13:16Z|00350|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e ovn-installed in OVS
Oct 07 14:13:16 compute-0 ovn_controller[151684]: 2025-10-07T14:13:16Z|00351|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e up in Southbound
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.174 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a812b60a-27bf-4a85-b60c-fd368b155a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.178 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.181 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0dbec0-bb4a-46ca-bca7-4cb28f170fc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.184 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e59831-b5c3-4f2f-b18a-6c152a501fbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 systemd-machined[214580]: New machine qemu-54-instance-0000002f.
Oct 07 14:13:16 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-0000002f.
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.206 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[54690b49-c4e5-4011-be96-cbf99728e91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.226 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fda2b321-4275-4a16-a284-81c51582685c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.260 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d10d3d0a-ef27-4675-9608-1828fc9d93f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.2698] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afc18324-2445-4450-8e24-d370477878d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.308 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0a20b331-1e61-4636-824b-6a1bba48b7d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.312 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe23f72-a67d-4f81-9ea4-d71d6d983923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.329 2 DEBUG nova.compute.manager [req-c14bde28-50d8-496d-abd4-66b85a6d281a req-7eb371fe-e48b-4789-8004-58814b18e06b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-deleted-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.3401] device (tapd2cb8ca0-10): carrier: link connected
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.349 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[615e251e-ac42-401f-819a-da5047dfb253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.370 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c23e61c6-a045-4e0d-87e1-7b963c139927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697990, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312868, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.392 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81b0e5a2-20e7-4fdf-a64e-d1c432f0c458]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697990, 'tstamp': 697990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312869, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37beefe7-2a5a-4b32-be6a-6b00c02218b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697990, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312870, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f8eba-4e02-44b5-967c-9a7348e61bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.522 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03ff7a10-17f6-4d92-94f3-74257110a362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.524 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.524 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.525 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 NetworkManager[44949]: <info>  [1759846396.5281] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Oct 07 14:13:16 compute-0 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 ovn_controller[151684]: 2025-10-07T14:13:16Z|00352|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145046715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.550 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[459ab3ce-2499-4b34-8df3-2a3a108a7bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.552 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:13:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.554 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.576 2 DEBUG oslo_concurrency.processutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.582 2 DEBUG nova.compute.provider_tree [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.600 2 DEBUG nova.scheduler.client.report [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.620 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.642 2 INFO nova.scheduler.client.report [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 4fef229d-c42d-43ac-a3ff-527ca68d3796
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.680 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for f563ffb7-1ade-4b71-ab68-115322eef141 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.681 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846396.6798258, f563ffb7-1ade-4b71-ab68-115322eef141 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Resumed (Lifecycle Event)
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.700 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.704 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.706 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.709 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.712 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance rebooted successfully.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.712 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.739 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.739 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846396.7039187, f563ffb7-1ade-4b71-ab68-115322eef141 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.740 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Started (Lifecycle Event)
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.763 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.768 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:16 compute-0 nova_compute[259550]: 2025-10-07 14:13:16.770 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:16 compute-0 ceph-mon[74295]: pgmap v1460: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.9 MiB/s wr, 372 op/s
Oct 07 14:13:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3145046715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:16 compute-0 podman[312902]: 2025-10-07 14:13:16.978565324 +0000 UTC m=+0.061353414 container create 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:13:17 compute-0 systemd[1]: Started libpod-conmon-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope.
Oct 07 14:13:17 compute-0 podman[312902]: 2025-10-07 14:13:16.942167637 +0000 UTC m=+0.024955737 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:13:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650a9a6c1f663f200a5eecd27e0eaaee72d722619b67ac6d8e8939c3ec7ae27d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.082 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updated VIF entry in instance network info cache for port c8a4482e-996f-427f-8071-48124530250e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.084 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.098 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:17 compute-0 podman[312902]: 2025-10-07 14:13:17.123453033 +0000 UTC m=+0.206241113 container init 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:13:17 compute-0 podman[312902]: 2025-10-07 14:13:17.131124924 +0000 UTC m=+0.213913004 container start 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:13:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : New worker (312940) forked
Oct 07 14:13:17 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : Loading success.
Oct 07 14:13:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 268 op/s
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.758 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846397.7582183, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.759 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Started (Lifecycle Event)
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.788 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846397.7583191, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.788 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Paused (Lifecycle Event)
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.814 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:17 compute-0 nova_compute[259550]: 2025-10-07 14:13:17.837 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:18 compute-0 ceph-mon[74295]: pgmap v1461: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 268 op/s
Oct 07 14:13:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Oct 07 14:13:20 compute-0 nova_compute[259550]: 2025-10-07 14:13:20.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:20 compute-0 ceph-mon[74295]: pgmap v1462: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Oct 07 14:13:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.601 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846386.6002095, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.602 2 INFO nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Stopped (Lifecycle Event)
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.622 2 DEBUG nova.compute.manager [None req-37e4caa2-fdc4-4fbe-a664-d5d550e681e9 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 292 op/s
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.897 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.897 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Processing event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.900 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.900 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.900 2 WARNING nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received unexpected event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e for instance with vm_state building and task_state spawning.
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.901 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.905 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846401.9049726, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.905 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Resumed (Lifecycle Event)
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.907 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.910 2 INFO nova.virt.libvirt.driver [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance spawned successfully.
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.911 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.929 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.931 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.941 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.941 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.942 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:21 compute-0 nova_compute[259550]: 2025-10-07 14:13:21.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.001 2 INFO nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 11.25 seconds to spawn the instance on the hypervisor.
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.001 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.066 2 INFO nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 12.38 seconds to build instance.
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.085 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:13:22
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'default.rgw.control', '.mgr']
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.756 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.756 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.772 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.854 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.855 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.863 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:13:22 compute-0 nova_compute[259550]: 2025-10-07 14:13:22.863 2 INFO nova.compute.claims [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:13:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:13:22 compute-0 ceph-mon[74295]: pgmap v1463: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 292 op/s
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.114 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.549 2 DEBUG nova.compute.manager [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG nova.compute.manager [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.551 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779217877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.663 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.669 2 DEBUG nova.compute.provider_tree [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.685 2 DEBUG nova.scheduler.client.report [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.708 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.710 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.768 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.769 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:23 compute-0 ovn_controller[151684]: 2025-10-07T14:13:23Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:75:d2 10.100.0.12
Oct 07 14:13:23 compute-0 ovn_controller[151684]: 2025-10-07T14:13:23Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:75:d2 10.100.0.12
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.794 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.820 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.911 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.913 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.914 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating image(s)
Oct 07 14:13:23 compute-0 nova_compute[259550]: 2025-10-07 14:13:23.945 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.003 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2779217877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.029 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.035 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.093 2 DEBUG nova.policy [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.127 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.128 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.128 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.129 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.129 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.131 2 INFO nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Terminating instance
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.132 2 DEBUG nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.134 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.135 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.136 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.136 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.160 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.165 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2c535ed-c6cf-4684-82dc-ae59904e7381_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:24 compute-0 kernel: tap4f1564c5-86 (unregistering): left promiscuous mode
Oct 07 14:13:24 compute-0 NetworkManager[44949]: <info>  [1759846404.3496] device (tap4f1564c5-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:24 compute-0 ovn_controller[151684]: 2025-10-07T14:13:24Z|00353|binding|INFO|Releasing lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 from this chassis (sb_readonly=0)
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 ovn_controller[151684]: 2025-10-07T14:13:24Z|00354|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 down in Southbound
Oct 07 14:13:24 compute-0 ovn_controller[151684]: 2025-10-07T14:13:24Z|00355|binding|INFO|Removing iface tap4f1564c5-86 ovn-installed in OVS
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.377 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2c8064f9-5492-4e10-8ae7-4680563dba64 83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.379 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.381 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.412 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dba74272-74fe-41cf-bbbe-056269ffc8ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 07 14:13:24 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Consumed 8.397s CPU time.
Oct 07 14:13:24 compute-0 systemd-machined[214580]: Machine qemu-53-instance-0000002d terminated.
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eb6982-dec1-4c4d-9411-7f2474a4f624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.450 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7603ba4f-83d8-488a-9130-128da66efcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd12b0c-c68d-4ffd-ae61-c460c3f0a97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.504 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c1f275-1397-44aa-9d08-9be52ebc9569]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313103, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f32ea8c8-2416-45a7-8d3c-8a8128b50daf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313104, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313104, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.521 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.658 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance destroyed successfully.
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.659 2 DEBUG nova.objects.instance [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.679 2 DEBUG nova.virt.libvirt.vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:16Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.680 2 DEBUG nova.network.os_vif_util [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.681 2 DEBUG nova.network.os_vif_util [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.681 2 DEBUG os_vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f1564c5-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:24 compute-0 nova_compute[259550]: 2025-10-07 14:13:24.693 2 INFO os_vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')
Oct 07 14:13:25 compute-0 ceph-mon[74295]: pgmap v1464: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.598 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2c535ed-c6cf-4684-82dc-ae59904e7381_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.652 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.653 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.653 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.654 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.654 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.655 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:13:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.6 MiB/s wr, 374 op/s
Oct 07 14:13:25 compute-0 nova_compute[259550]: 2025-10-07 14:13:25.704 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:13:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.042 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Successfully created port: 98f1c539-b9b9-4abb-89cf-268c353264ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.071 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846391.052923, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.072 2 INFO nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Stopped (Lifecycle Event)
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.090 2 DEBUG nova.compute.manager [None req-7e539d33-6695-4830-9c53-ec794008e86a - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:26 compute-0 ceph-mon[74295]: pgmap v1465: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.6 MiB/s wr, 374 op/s
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.293 2 DEBUG nova.objects.instance [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.311 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.312 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Ensure instance console log exists: /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.554 2 INFO nova.virt.libvirt.driver [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deleting instance files /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141_del
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.555 2 INFO nova.virt.libvirt.driver [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deletion of /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141_del complete
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.604 2 INFO nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 2.47 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.605 2 DEBUG oslo.service.loopingcall [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.606 2 DEBUG nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.606 2 DEBUG nova.network.neutron [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.916 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.917 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.940 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:26 compute-0 nova_compute[259550]: 2025-10-07 14:13:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:27 compute-0 podman[313209]: 2025-10-07 14:13:27.104778878 +0000 UTC m=+0.079306216 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:13:27 compute-0 podman[313208]: 2025-10-07 14:13:27.133183985 +0000 UTC m=+0.109128340 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.261 2 DEBUG nova.network.neutron [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.293 2 INFO nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 0.69 seconds to deallocate network for instance.
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.377 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.377 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.402 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.439 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.440 2 DEBUG nova.compute.provider_tree [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.458 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.481 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.591 2 DEBUG oslo_concurrency.processutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:13:27 compute-0 nova_compute[259550]: 2025-10-07 14:13:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915733219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.067 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Successfully updated port: 98f1c539-b9b9-4abb-89cf-268c353264ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.069 2 DEBUG oslo_concurrency.processutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.076 2 DEBUG nova.compute.provider_tree [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.093 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.116 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.159 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.159 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.160 2 WARNING nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state deleted and task_state None.
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-deleted-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.184 2 INFO nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Deleted allocations for instance f563ffb7-1ade-4b71-ab68-115322eef141
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.211 2 DEBUG nova.compute.manager [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-changed-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.211 2 DEBUG nova.compute.manager [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Refreshing instance network info cache due to event network-changed-98f1c539-b9b9-4abb-89cf-268c353264ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.212 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.239 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.262 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607777912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.616 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.700 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.703 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.710 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.711 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:13:28 compute-0 ceph-mon[74295]: pgmap v1466: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct 07 14:13:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2915733219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2607777912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.931 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.84690475463867GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.932 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:28 compute-0 nova_compute[259550]: 2025-10-07 14:13:28.933 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.008 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 867307d6-0b3f-4a3e-9dc4-a05221e2f080 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cb7392fb-c42f-47e9-b661-7cbf3dfe1263 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance e2c535ed-c6cf-4684-82dc-ae59904e7381 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.016 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.016 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.017 2 INFO nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Terminating instance
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.018 2 DEBUG nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:29 compute-0 kernel: tapc8a4482e-99 (unregistering): left promiscuous mode
Oct 07 14:13:29 compute-0 NetworkManager[44949]: <info>  [1759846409.0948] device (tapc8a4482e-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00356|binding|INFO|Releasing lport c8a4482e-996f-427f-8071-48124530250e from this chassis (sb_readonly=0)
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00357|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e down in Southbound
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00358|binding|INFO|Removing iface tapc8a4482e-99 ovn-installed in OVS
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.129 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.132 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[899678fc-13cc-45f2-8d03-9ee82a4d5760]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.136 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore
Oct 07 14:13:29 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 07 14:13:29 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Consumed 8.464s CPU time.
Oct 07 14:13:29 compute-0 systemd-machined[214580]: Machine qemu-54-instance-0000002f terminated.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.210 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:29 compute-0 kernel: tapc8a4482e-99: entered promiscuous mode
Oct 07 14:13:29 compute-0 systemd-udevd[313292]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:29 compute-0 NetworkManager[44949]: <info>  [1759846409.2444] manager: (tapc8a4482e-99): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00359|binding|INFO|Claiming lport c8a4482e-996f-427f-8071-48124530250e for this chassis.
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00360|binding|INFO|c8a4482e-996f-427f-8071-48124530250e: Claiming fa:16:3e:c4:4b:9c 10.100.0.7
Oct 07 14:13:29 compute-0 kernel: tapc8a4482e-99 (unregistering): left promiscuous mode
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.271 2 INFO nova.virt.libvirt.driver [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance destroyed successfully.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.272 2 DEBUG nova.objects.instance [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:29 compute-0 ovn_controller[151684]: 2025-10-07T14:13:29Z|00361|binding|INFO|Releasing lport c8a4482e-996f-427f-8071-48124530250e from this chassis (sb_readonly=0)
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.287 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.289 2 DEBUG nova.virt.libvirt.vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:27Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.289 2 DEBUG nova.network.os_vif_util [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.290 2 DEBUG nova.network.os_vif_util [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.291 2 DEBUG os_vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8a4482e-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:29 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:29 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:29 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [WARNING]  (312929) : Exiting Master process...
Oct 07 14:13:29 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [ALERT]    (312929) : Current worker (312940) exited with code 143 (Terminated)
Oct 07 14:13:29 compute-0 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [WARNING]  (312929) : All workers exited. Exiting... (0)
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 systemd[1]: libpod-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope: Deactivated successfully.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.307 2 INFO os_vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99')
Oct 07 14:13:29 compute-0 podman[313313]: 2025-10-07 14:13:29.313322181 +0000 UTC m=+0.073315468 container died 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-650a9a6c1f663f200a5eecd27e0eaaee72d722619b67ac6d8e8939c3ec7ae27d-merged.mount: Deactivated successfully.
Oct 07 14:13:29 compute-0 podman[313313]: 2025-10-07 14:13:29.369107547 +0000 UTC m=+0.129100834 container cleanup 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:29 compute-0 systemd[1]: libpod-conmon-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope: Deactivated successfully.
Oct 07 14:13:29 compute-0 podman[313369]: 2025-10-07 14:13:29.45709062 +0000 UTC m=+0.056272151 container remove 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[537c8d9c-a3fb-400e-a52d-9bfc8a6822f7]: (4, ('Tue Oct  7 02:13:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985)\n6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985\nTue Oct  7 02:13:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985)\n6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.466 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c67f2470-43f0-4598-94e8-d151ea670e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.468 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.486 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1adab4-bbfe-4202-b8f0-750e60f5fae6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[968b5973-c742-48b1-b40d-4efcb83e2523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.516 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10f5ce2d-1fbc-489c-b2f4-9cc176bd3939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92b6e708-04c1-4d99-96dd-45e040c19871]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697981, 'reachable_time': 44724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313397, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.538 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.539 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[798cc4fd-1dd8-4c87-a4ec-1fafddd8e1bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.540 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.542 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b47d8f-5b95-458a-867f-440a6a046865]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.544 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.546 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7f81e3-71f2-45f6-9d95-44a70384f013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 303 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Oct 07 14:13:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315139432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.757 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.763 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.772 2 INFO nova.virt.libvirt.driver [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deleting instance files /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_del
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.773 2 INFO nova.virt.libvirt.driver [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deletion of /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_del complete
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.778 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2315139432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.808 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.809 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.813537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409813589, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2555, "num_deletes": 523, "total_data_size": 3402637, "memory_usage": 3459904, "flush_reason": "Manual Compaction"}
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.822 2 INFO nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG oslo.service.loopingcall [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG nova.network.neutron [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409831271, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3338260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27985, "largest_seqno": 30539, "table_properties": {"data_size": 3327064, "index_size": 6794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26751, "raw_average_key_size": 20, "raw_value_size": 3302348, "raw_average_value_size": 2492, "num_data_blocks": 296, "num_entries": 1325, "num_filter_entries": 1325, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846227, "oldest_key_time": 1759846227, "file_creation_time": 1759846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17791 microseconds, and 7149 cpu microseconds.
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.831327) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3338260 bytes OK
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.831355) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833555) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833571) EVENT_LOG_v1 {"time_micros": 1759846409833565, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3390673, prev total WAL file size 3390673, number of live WAL files 2.
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.834799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3260KB)], [62(8566KB)]
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409834882, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12110702, "oldest_snapshot_seqno": -1}
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.875 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5535 keys, 10369275 bytes, temperature: kUnknown
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409904579, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10369275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10327740, "index_size": 26577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138933, "raw_average_key_size": 25, "raw_value_size": 10223725, "raw_average_value_size": 1847, "num_data_blocks": 1087, "num_entries": 5535, "num_filter_entries": 5535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.905 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.904860) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10369275 bytes
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.906534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 6586, records dropped: 1051 output_compression: NoCompression
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.906553) EVENT_LOG_v1 {"time_micros": 1759846409906544, "job": 34, "event": "compaction_finished", "compaction_time_micros": 69782, "compaction_time_cpu_micros": 27007, "output_level": 6, "num_output_files": 1, "total_output_size": 10369275, "num_input_records": 6586, "num_output_records": 5535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.906 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance network_info: |[{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409907222, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.907 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.907 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Refreshing network info cache for port 98f1c539-b9b9-4abb-89cf-268c353264ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409908742, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.834678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.910 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start _get_guest_xml network_info=[{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.915 2 WARNING nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.922 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.923 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.930 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.930 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:13:29 compute-0 nova_compute[259550]: 2025-10-07 14:13:29.937 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422403577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.446 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.475 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.481 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.519 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.520 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.520 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.523 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.523 2 WARNING nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received unexpected event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e for instance with vm_state active and task_state deleting.
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.757 2 DEBUG nova.network.neutron [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.788 2 INFO nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 0.96 seconds to deallocate network for instance.
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.809 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:30 compute-0 ceph-mon[74295]: pgmap v1467: 305 pgs: 305 active+clean; 303 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Oct 07 14:13:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/422403577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.842 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.843 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:13:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838868307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.920 2 DEBUG oslo_concurrency.processutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.968 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.971 2 DEBUG nova.virt.libvirt.vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:23Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.971 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.972 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.974 2 DEBUG nova.objects.instance [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:30 compute-0 nova_compute[259550]: 2025-10-07 14:13:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.004 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <uuid>e2c535ed-c6cf-4684-82dc-ae59904e7381</uuid>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <name>instance-00000030</name>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:name>tempest-DeleteServersTestJSON-server-1601957698</nova:name>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:13:29</nova:creationTime>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <nova:port uuid="98f1c539-b9b9-4abb-89cf-268c353264ed">
Oct 07 14:13:31 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="serial">e2c535ed-c6cf-4684-82dc-ae59904e7381</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="uuid">e2c535ed-c6cf-4684-82dc-ae59904e7381</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2c535ed-c6cf-4684-82dc-ae59904e7381_disk">
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config">
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:13:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:41:a7:57"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <target dev="tap98f1c539-b9"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/console.log" append="off"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:13:31 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:13:31 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:31 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:31 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:31 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.010 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Preparing to wait for external event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.010 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.011 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.011 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.012 2 DEBUG nova.virt.libvirt.vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:23Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.012 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.013 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.013 2 DEBUG os_vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f1c539-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98f1c539-b9, col_values=(('external_ids', {'iface-id': '98f1c539-b9b9-4abb-89cf-268c353264ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:a7:57', 'vm-uuid': 'e2c535ed-c6cf-4684-82dc-ae59904e7381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 NetworkManager[44949]: <info>  [1759846411.0482] manager: (tap98f1c539-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.052 2 INFO os_vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9')
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:41:a7:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.126 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Using config drive
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.146 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169457586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.425 2 DEBUG oslo_concurrency.processutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.430 2 DEBUG nova.compute.provider_tree [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.457 2 DEBUG nova.scheduler.client.report [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.483 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.514 2 INFO nova.scheduler.client.report [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance cb7392fb-c42f-47e9-b661-7cbf3dfe1263
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.537 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating config drive at /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.542 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_0pvskb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.599 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updated VIF entry in instance network info cache for port 98f1c539-b9b9-4abb-89cf-268c353264ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.600 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.602 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.602 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.603 2 DEBUG nova.objects.instance [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.638 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.649 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.682 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_0pvskb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.710 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.716 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2838868307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:13:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1169457586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.907 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.908 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deleting local config drive /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config because it was imported into RBD.
Oct 07 14:13:31 compute-0 kernel: tap98f1c539-b9: entered promiscuous mode
Oct 07 14:13:31 compute-0 NetworkManager[44949]: <info>  [1759846411.9680] manager: (tap98f1c539-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct 07 14:13:31 compute-0 ovn_controller[151684]: 2025-10-07T14:13:31Z|00362|binding|INFO|Claiming lport 98f1c539-b9b9-4abb-89cf-268c353264ed for this chassis.
Oct 07 14:13:31 compute-0 ovn_controller[151684]: 2025-10-07T14:13:31Z|00363|binding|INFO|98f1c539-b9b9-4abb-89cf-268c353264ed: Claiming fa:16:3e:41:a7:57 10.100.0.13
Oct 07 14:13:31 compute-0 nova_compute[259550]: 2025-10-07 14:13:31.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:31 compute-0 NetworkManager[44949]: <info>  [1759846411.9837] device (tap98f1c539-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.984 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:a7:57 10.100.0.13'], port_security=['fa:16:3e:41:a7:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e2c535ed-c6cf-4684-82dc-ae59904e7381', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=98f1c539-b9b9-4abb-89cf-268c353264ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:31 compute-0 NetworkManager[44949]: <info>  [1759846411.9862] device (tap98f1c539-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.986 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 98f1c539-b9b9-4abb-89cf-268c353264ed in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct 07 14:13:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.989 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:13:32 compute-0 ovn_controller[151684]: 2025-10-07T14:13:32Z|00364|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed ovn-installed in OVS
Oct 07 14:13:32 compute-0 ovn_controller[151684]: 2025-10-07T14:13:32Z|00365|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed up in Southbound
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.002 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce51449-e3b2-4b2d-9bc5-ca09c97d63ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.003 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.006 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4babb802-b248-407a-9f49-082794630d46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f937b052-8fb9-4200-b4e7-8ce4cc6a40cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 systemd-machined[214580]: New machine qemu-55-instance-00000030.
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.021 2 DEBUG nova.objects.instance [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:32 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000030.
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.025 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[96e81977-d782-4ee5-8e7a-982be04ec529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.043 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9bb17d-5068-4a18-8a57-53bb6b9b9d55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.043 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.081 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6f2d52-9bd9-4b57-8feb-4d8a6399ee05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.092 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a29574d6-d314-4c93-8c87-bc21f3b62b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 NetworkManager[44949]: <info>  [1759846412.0937] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.147 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71b149be-5149-4d50-bdb9-91119808c0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.151 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61c9ef59-7b17-440a-809f-aacc7babb882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 NetworkManager[44949]: <info>  [1759846412.1856] device (tap8accac57-a0): carrier: link connected
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.191 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85aa8db2-01ad-4150-b046-b24b6315bef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.209 2 DEBUG nova.policy [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d74a38ec-d88e-4fe1-bd33-2b4ae5eab4a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699574, 'reachable_time': 29541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313591, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.229 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3504d6-365e-4cb4-b1db-d2c827c1f7db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699574, 'tstamp': 699574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313592, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.249 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a9eb90-c1c4-4f30-9804-8e688dd3cec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699574, 'reachable_time': 29541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313593, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
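The privsep reply above is a pyroute2-style RTM_NEWLINK message: a dict whose `attrs` key holds `[name, value]` pairs, with nested messages (e.g. `IFLA_LINKINFO`, `IFLA_AF_SPEC`) carrying their own `attrs` lists. A minimal illustrative helper (not the agent's actual code) for pulling values out of such a dump:

```python
# Illustrative only: walk the [name, value] attribute pairs printed in
# the privsep reply above. Nested messages (IFLA_LINKINFO, IFLA_AF_SPEC)
# are themselves dicts with their own 'attrs' list, so the same helper
# can be applied recursively.

def get_attr(msg, name, default=None):
    """Return the first attribute called `name`, or `default`."""
    for key, value in msg.get('attrs', []):
        if key == name:
            return value
    return default

# A trimmed-down copy of the link message from the log:
link = {
    'index': 2,
    'attrs': [
        ['IFLA_IFNAME', 'tap8accac57-a1'],
        ['IFLA_MTU', 1500],
        ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'],
        ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}],
    ],
}

print(get_attr(link, 'IFLA_IFNAME'))   # tap8accac57-a1
print(get_attr(get_attr(link, 'IFLA_LINKINFO'), 'IFLA_INFO_KIND'))  # veth
```

This mirrors the `get_attr` accessor that pyroute2 message objects provide; the agent itself receives these structures across the privsep channel rather than parsing the logged text.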
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e8ea3d-9118-45c2-8a32-08ac3e454d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002005269380412841 of space, bias 1.0, pg target 0.6015808141238523 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:13:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
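The pg_autoscaler lines above are internally consistent: each pool's raw "pg target" is its fraction of cluster capacity times its bias times a factor of 300 (consistent with 3 OSDs at the default `mon_target_pg_per_osd` of 100), and the quantized value rounds that up to a power of two subject to a per-pool floor. A sketch reproducing the logged numbers, under those assumptions (this is not the actual ceph-mgr module code; the 3-OSD count and per-pool `pg_num_min` floors are inferred from the log):

```python
import math

def pg_target(usage_ratio, bias, osd_count=3, target_pg_per_osd=100):
    # Fraction of cluster capacity the pool uses, scaled by its bias and
    # the cluster-wide PG budget (OSD count x target PGs per OSD).
    # 3 x 100 = 300 matches the scaling seen in every log line above.
    return usage_ratio * bias * osd_count * target_pg_per_osd

def quantize(raw, pg_num_min=32):
    # Round up to the next power of two, subject to the pool's floor
    # (the '.mgr' and CephFS metadata pools evidently use lower floors).
    ideal = 2 ** math.ceil(math.log2(raw)) if raw >= 1 else 1
    return max(pg_num_min, ideal)

# Pool 'vms': "using 0.002005269380412841 of space, bias 1.0"
raw = pg_target(0.002005269380412841, 1.0)  # ~0.6015808141238523, as logged
print(quantize(raw))                        # 32, matching "quantized to 32"
```

The same arithmetic reproduces the other lines, e.g. `cephfs.cephfs.meta` with bias 4.0 yields the logged target of about 0.00061 and stays at its floor of 16. Since every quantized value equals the current `pg_num`, the autoscaler makes no changes on this pass.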
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a393b27f-03f3-4a52-b09c-563ff6c05fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.345 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.346 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:32 compute-0 NetworkManager[44949]: <info>  [1759846412.3493] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 07 14:13:32 compute-0 kernel: tap8accac57-a0: entered promiscuous mode
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.354 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:32 compute-0 ovn_controller[151684]: 2025-10-07T14:13:32Z|00366|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.380 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb93c4-8b99-48e2-923e-e37fd72286ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.382 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:13:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.383 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:13:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:13:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:13:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:13:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:13:32 compute-0 podman[313625]: 2025-10-07 14:13:32.793798227 +0000 UTC m=+0.059495025 container create 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:13:32 compute-0 systemd[1]: Started libpod-conmon-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope.
Oct 07 14:13:32 compute-0 ceph-mon[74295]: pgmap v1468: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Oct 07 14:13:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:13:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:13:32 compute-0 podman[313625]: 2025-10-07 14:13:32.761709604 +0000 UTC m=+0.027406422 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:13:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181ed1164f571f18c8510bf11989ae73eb1b54ec91d8a852ede1b73c5ee4b345/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:13:32 compute-0 podman[313625]: 2025-10-07 14:13:32.886050882 +0000 UTC m=+0.151747700 container init 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:13:32 compute-0 podman[313625]: 2025-10-07 14:13:32.89167033 +0000 UTC m=+0.157367128 container start 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:13:32 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : New worker (313688) forked
Oct 07 14:13:32 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : Loading success.
Oct 07 14:13:32 compute-0 nova_compute[259550]: 2025-10-07 14:13:32.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.207 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully created port: f66416c8-d3f2-4dfa-b30b-8505f9a9120c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-deleted-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Processing event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.349 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.349 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.350 2 WARNING nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received unexpected event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with vm_state building and task_state spawning.
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.388 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3875046, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.388 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Started (Lifecycle Event)
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.390 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.393 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.397 2 INFO nova.virt.libvirt.driver [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance spawned successfully.
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.397 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.421 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.424 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.595 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3877447, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Paused (Lifecycle Event)
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.600 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.600 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.601 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.601 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.602 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.602 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.629 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.632 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3930922, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Resumed (Lifecycle Event)
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.668 2 INFO nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 9.76 seconds to spawn the instance on the hypervisor.
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.669 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:13:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.727 2 INFO nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 10.91 seconds to build instance.
Oct 07 14:13:33 compute-0 nova_compute[259550]: 2025-10-07 14:13:33.751 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.087 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully updated port: f66416c8-d3f2-4dfa-b30b-8505f9a9120c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.100 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.101 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.101 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.244 2 DEBUG nova.compute.manager [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.244 2 DEBUG nova.compute.manager [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f66416c8-d3f2-4dfa-b30b-8505f9a9120c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.245 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:34 compute-0 nova_compute[259550]: 2025-10-07 14:13:34.286 2 WARNING nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:13:34 compute-0 ceph-mon[74295]: pgmap v1469: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Oct 07 14:13:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 314 op/s
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.895 2 DEBUG nova.objects.instance [None req-94505fe2-3374-4c4c-ba18-e84125b5239a a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.917 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846415.9173257, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.918 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Paused (Lifecycle Event)
Oct 07 14:13:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.940 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:13:35 compute-0 nova_compute[259550]: 2025-10-07 14:13:35.973 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.251 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.272 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.273 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.274 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f66416c8-d3f2-4dfa-b30b-8505f9a9120c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.277 2 DEBUG nova.virt.libvirt.vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.278 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.278 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG os_vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66416c8-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66416c8-d3, col_values=(('external_ids', {'iface-id': 'f66416c8-d3f2-4dfa-b30b-8505f9a9120c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:a0:0a', 'vm-uuid': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 NetworkManager[44949]: <info>  [1759846416.2878] manager: (tapf66416c8-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.297 2 INFO os_vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.298 2 DEBUG nova.virt.libvirt.vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.298 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.299 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.302 2 DEBUG nova.virt.libvirt.guest [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:47:a0:0a"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <target dev="tapf66416c8-d3"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]: </interface>
Oct 07 14:13:36 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:13:36 compute-0 kernel: tapf66416c8-d3: entered promiscuous mode
Oct 07 14:13:36 compute-0 NetworkManager[44949]: <info>  [1759846416.3162] manager: (tapf66416c8-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00367|binding|INFO|Claiming lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c for this chassis.
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00368|binding|INFO|f66416c8-d3f2-4dfa-b30b-8505f9a9120c: Claiming fa:16:3e:47:a0:0a 10.100.0.14
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.323 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:a0:0a 10.100.0.14'], port_security=['fa:16:3e:47:a0:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f66416c8-d3f2-4dfa-b30b-8505f9a9120c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.325 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.326 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00369|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c ovn-installed in OVS
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00370|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c up in Southbound
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e361fcd-4ea5-45b8-b84e-6749b742c0c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 systemd-udevd[313708]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:13:36 compute-0 NetworkManager[44949]: <info>  [1759846416.3754] device (tapf66416c8-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:13:36 compute-0 NetworkManager[44949]: <info>  [1759846416.3764] device (tapf66416c8-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.390 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a36ad1aa-ef67-472f-a49e-f311fd8f715d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.396 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[43fefce8-1b63-450e-bb66-ef63e2067e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.433 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[480a11bb-6510-4c0b-9ad1-f12c9c9ce070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a451be80-0310-43e5-8936-0d3324441f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313715, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c80b7264-5a5d-4cc1-9819-ce0e5b601108]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697319, 'tstamp': 697319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313716, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697323, 'tstamp': 697323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313716, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.476 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.480 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.481 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.481 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.482 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:36 compute-0 kernel: tap98f1c539-b9 (unregistering): left promiscuous mode
Oct 07 14:13:36 compute-0 NetworkManager[44949]: <info>  [1759846416.5503] device (tap98f1c539-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.560 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.561 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.561 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:5f:75:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.562 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:47:a0:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00371|binding|INFO|Releasing lport 98f1c539-b9b9-4abb-89cf-268c353264ed from this chassis (sb_readonly=0)
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00372|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed down in Southbound
Oct 07 14:13:36 compute-0 ovn_controller[151684]: 2025-10-07T14:13:36Z|00373|binding|INFO|Removing iface tap98f1c539-b9 ovn-installed in OVS
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.581 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:a7:57 10.100.0.13'], port_security=['fa:16:3e:41:a7:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e2c535ed-c6cf-4684-82dc-ae59904e7381', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=98f1c539-b9b9-4abb-89cf-268c353264ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.582 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 98f1c539-b9b9-4abb-89cf-268c353264ed in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.584 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92add2a9-16c0-4da0-9373-e8e9351a6428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.587 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore
Oct 07 14:13:36 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 07 14:13:36 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000030.scope: Consumed 3.934s CPU time.
Oct 07 14:13:36 compute-0 systemd-machined[214580]: Machine qemu-55-instance-00000030 terminated.
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.618 2 DEBUG nova.virt.libvirt.guest [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:36 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct 07 14:13:36 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:13:36 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:36 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:36 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:36 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.687 2 DEBUG nova.compute.manager [None req-94505fe2-3374-4c4c-ba18-e84125b5239a a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.758 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:36 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:36 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:36 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [WARNING]  (313686) : Exiting Master process...
Oct 07 14:13:36 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [ALERT]    (313686) : Current worker (313688) exited with code 143 (Terminated)
Oct 07 14:13:36 compute-0 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [WARNING]  (313686) : All workers exited. Exiting... (0)
Oct 07 14:13:36 compute-0 systemd[1]: libpod-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope: Deactivated successfully.
Oct 07 14:13:36 compute-0 podman[313745]: 2025-10-07 14:13:36.806393121 +0000 UTC m=+0.096038486 container died 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-181ed1164f571f18c8510bf11989ae73eb1b54ec91d8a852ede1b73c5ee4b345-merged.mount: Deactivated successfully.
Oct 07 14:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.904 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.904 2 DEBUG nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.904 2 WARNING nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.
Oct 07 14:13:36 compute-0 ceph-mon[74295]: pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 314 op/s
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:36 compute-0 podman[313745]: 2025-10-07 14:13:36.990084819 +0000 UTC m=+0.279730194 container cleanup 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:13:36 compute-0 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:13:37 compute-0 systemd[1]: libpod-conmon-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope: Deactivated successfully.
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.070 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.070 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.071 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.071 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.072 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.075 2 INFO nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Terminating instance
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.076 2 DEBUG nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:37 compute-0 podman[313776]: 2025-10-07 14:13:37.091108074 +0000 UTC m=+0.077937560 container remove 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58e11822-4d27-45cc-a465-48c47c497e4c]: (4, ('Tue Oct  7 02:13:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147)\n175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147\nTue Oct  7 02:13:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147)\n175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.103 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53e797fc-7aab-4088-90b8-5dd3d34f2544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.105 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 kernel: tap8accac57-a0: left promiscuous mode
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.139 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[023a78f5-c720-4dcf-918d-1a21526b68ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 kernel: tap130e9c49-53 (unregistering): left promiscuous mode
Oct 07 14:13:37 compute-0 NetworkManager[44949]: <info>  [1759846417.1583] device (tap130e9c49-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 ovn_controller[151684]: 2025-10-07T14:13:37Z|00374|binding|INFO|Releasing lport 130e9c49-53e8-495e-ac38-4d3e63f49011 from this chassis (sb_readonly=0)
Oct 07 14:13:37 compute-0 ovn_controller[151684]: 2025-10-07T14:13:37Z|00375|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 down in Southbound
Oct 07 14:13:37 compute-0 ovn_controller[151684]: 2025-10-07T14:13:37Z|00376|binding|INFO|Removing iface tap130e9c49-53 ovn-installed in OVS
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.179 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:73:59 10.100.0.5'], port_security=['fa:16:3e:03:73:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '867307d6-0b3f-4a3e-9dc4-a05221e2f080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '6', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 b548d2bc-ba8b-4cd7-964e-e85cc9260fba f3568815-0490-424c-b408-29e1102ff805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=130e9c49-53e8-495e-ac38-4d3e63f49011) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.180 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d9e869-99d5-4f3d-94ea-b492ee29abb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6a5473-3dee-435e-adb2-92432b23c218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.207 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcae640-6b73-44db-b3c2-5968a064abf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699563, 'reachable_time': 32762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313800, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.212 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.212 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f39e67-cbd9-4742-a2a3-2fd093767df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.213 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 130e9c49-53e8-495e-ac38-4d3e63f49011 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.214 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71870f0f-c94f-4d32-8df4-00da4d6d4129, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[832f6d68-0843-4c11-8ba3-1a2268991714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.216 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 namespace which is not needed anymore
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.216 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:37 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 07 14:13:37 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 14.927s CPU time.
Oct 07 14:13:37 compute-0 systemd-machined[214580]: Machine qemu-48-instance-0000002b terminated.
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.329 2 INFO nova.virt.libvirt.driver [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance destroyed successfully.
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.330 2 DEBUG nova.objects.instance [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.354 2 DEBUG nova.virt.libvirt.vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:43Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.355 2 DEBUG nova.network.os_vif_util [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.355 2 DEBUG nova.network.os_vif_util [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.356 2 DEBUG os_vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap130e9c49-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:37 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:37 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [WARNING]  (309579) : Exiting Master process...
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.369 2 INFO os_vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53')
Oct 07 14:13:37 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [ALERT]    (309579) : Current worker (309581) exited with code 143 (Terminated)
Oct 07 14:13:37 compute-0 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [WARNING]  (309579) : All workers exited. Exiting... (0)
Oct 07 14:13:37 compute-0 systemd[1]: libpod-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope: Deactivated successfully.
Oct 07 14:13:37 compute-0 podman[313824]: 2025-10-07 14:13:37.379423633 +0000 UTC m=+0.056520397 container died 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:13:37 compute-0 ovn_controller[151684]: 2025-10-07T14:13:37Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:a0:0a 10.100.0.14
Oct 07 14:13:37 compute-0 ovn_controller[151684]: 2025-10-07T14:13:37Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:a0:0a 10.100.0.14
Oct 07 14:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-820a27a5171c3bf28772eb4c1b067f6a72a5b80227d7b45bd095d8022b7260b2-merged.mount: Deactivated successfully.
Oct 07 14:13:37 compute-0 podman[313824]: 2025-10-07 14:13:37.490951694 +0000 UTC m=+0.168048448 container cleanup 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:37 compute-0 systemd[1]: libpod-conmon-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope: Deactivated successfully.
Oct 07 14:13:37 compute-0 podman[313880]: 2025-10-07 14:13:37.598900982 +0000 UTC m=+0.077077267 container remove 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67c6f823-21f7-4df3-85db-f49c72ceda19]: (4, ('Tue Oct  7 02:13:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 (14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31)\n14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31\nTue Oct  7 02:13:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 (14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31)\n14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7631fb-42ca-4fdd-95f3-5d5578637520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.610 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 kernel: tap71870f0f-c0: left promiscuous mode
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.648 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac4a1ea-277f-4118-9abc-8cfb0e8caefc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.678 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34b2dbfe-3ad2-40c9-a235-2c4185c24948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dacc1dfa-c92c-46ff-924c-f3f0c14a624b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 192 op/s
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14c719b5-8d48-48a2-9634-7e1dc5cbb261]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694626, 'reachable_time': 19654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313899, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.703 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.704 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9898f63d-39ae-4c1e-9f8d-075b2a5d4225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.843 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.843 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:37 compute-0 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:13:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d71870f0f\x2dc94f\x2d4d32\x2d8df4\x2d00da4d6d4129.mount: Deactivated successfully.
Oct 07 14:13:37 compute-0 ceph-mon[74295]: pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 192 op/s
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.210 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.211 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.214 2 INFO nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Terminating instance
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.215 2 DEBUG nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.222 2 INFO nova.virt.libvirt.driver [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance destroyed successfully.
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.223 2 DEBUG nova.objects.instance [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.247 2 DEBUG nova.virt.libvirt.vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:36Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.247 2 DEBUG nova.network.os_vif_util [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.248 2 DEBUG nova.network.os_vif_util [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.248 2 DEBUG os_vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.250 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f1c539-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.256 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f66416c8-d3f2-4dfa-b30b-8505f9a9120c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.256 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.260 2 INFO os_vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9')
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.276 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.284 2 INFO nova.virt.libvirt.driver [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deleting instance files /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080_del
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.285 2 INFO nova.virt.libvirt.driver [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deletion of /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080_del complete
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.348 2 INFO nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 1.27 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG oslo.service.loopingcall [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG nova.network.neutron [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.481 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.499 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.500 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.500 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.569 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.569 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.593 2 DEBUG nova.objects.instance [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.789 2 DEBUG nova.virt.libvirt.vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.790 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.791 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.797 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.800 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.803 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.803 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:47:a0:0a"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <target dev="tapf66416c8-d3"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]: </interface>
Oct 07 14:13:38 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:13:38 compute-0 ovn_controller[151684]: 2025-10-07T14:13:38Z|00377|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.881 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.886 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <name>instance-0000002e</name>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:38 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:5f:75:d2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target dev='tapf208f539-2c'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:47:a0:0a'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target dev='tapf66416c8-d3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       </target>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/3'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </console>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:13:38 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:38 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:38 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.886 2 INFO nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the persistent domain config.
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.887 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tapf66416c8-d3 with device alias net1 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.887 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:47:a0:0a"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]:   <target dev="tapf66416c8-d3"/>
Oct 07 14:13:38 compute-0 nova_compute[259550]: </interface>
Oct 07 14:13:38 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:38 compute-0 kernel: tapf66416c8-d3 (unregistering): left promiscuous mode
Oct 07 14:13:38 compute-0 NetworkManager[44949]: <info>  [1759846418.9812] device (tapf66416c8-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:38 compute-0 ovn_controller[151684]: 2025-10-07T14:13:38Z|00378|binding|INFO|Releasing lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c from this chassis (sb_readonly=0)
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:38 compute-0 ovn_controller[151684]: 2025-10-07T14:13:38Z|00379|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c down in Southbound
Oct 07 14:13:38 compute-0 ovn_controller[151684]: 2025-10-07T14:13:38Z|00380|binding|INFO|Removing iface tapf66416c8-d3 ovn-installed in OVS
Oct 07 14:13:38 compute-0 nova_compute[259550]: 2025-10-07 14:13:38.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:38.998 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:a0:0a 10.100.0.14'], port_security=['fa:16:3e:47:a0:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f66416c8-d3f2-4dfa-b30b-8505f9a9120c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:38.999 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.013 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846419.0136461, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.015 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tapf66416c8-d3 with device alias net1 for instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.016 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b95962d4-a5a2-46d1-be7e-e2c2a4142f3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.020 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <name>instance-0000002e</name>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:39 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:5f:75:d2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target dev='tapf208f539-2c'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       </target>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/3'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </console>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:39 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.020 2 INFO nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the live domain config.
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.021 2 DEBUG nova.virt.libvirt.vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.021 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.022 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.022 2 DEBUG os_vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66416c8-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.030 2 INFO os_vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.031 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:39 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:39 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:39 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:39 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:39 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.059 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9452725-8bcb-4a47-a9d1-3de4e19ad40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.063 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72fddc2a-73e3-4baa-95e4-ef78559d44e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.112 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ded921f-acd3-4436-b610-a6fbfddcde0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce89472b-bffc-41df-9fc7-baa89398d3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313930, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.150 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[702761d2-9534-4b6a-8e1d-1daf90eddbfc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697319, 'tstamp': 697319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313931, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697323, 'tstamp': 697323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313931, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.153 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.157 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.222 2 DEBUG nova.network.neutron [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.246 2 INFO nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 0.90 seconds to deallocate network for instance.
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.292 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.292 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.387 2 DEBUG oslo_concurrency.processutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.521 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 WARNING nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.525 2 WARNING nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received unexpected event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with vm_state suspended and task_state deleting.
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.580 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.581 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.581 2 DEBUG nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.655 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846404.6546457, f563ffb7-1ade-4b71-ab68-115322eef141 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.656 2 INFO nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Stopped (Lifecycle Event)
Oct 07 14:13:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 170 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 252 op/s
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.693 2 DEBUG nova.compute.manager [None req-828f93e6-25e8-4001-8dfc-2a2028f277e5 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629729109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.868 2 DEBUG oslo_concurrency.processutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.874 2 DEBUG nova.compute.provider_tree [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.893 2 DEBUG nova.scheduler.client.report [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.914 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.923 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.925 2 WARNING nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received unexpected event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with vm_state deleted and task_state None.
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-deleted-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-deleted-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.925 2 INFO nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Neutron deleted interface f66416c8-d3f2-4dfa-b30b-8505f9a9120c; detaching it from the instance and deleting it from the info cache
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.network.neutron [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.941 2 INFO nova.scheduler.client.report [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Deleted allocations for instance 867307d6-0b3f-4a3e-9dc4-a05221e2f080
Oct 07 14:13:39 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.957 2 DEBUG nova.objects.instance [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'system_metadata' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:39.999 2 DEBUG nova.objects.instance [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.014 2 INFO nova.virt.libvirt.driver [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deleting instance files /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381_del
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.015 2 INFO nova.virt.libvirt.driver [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deletion of /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381_del complete
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.056 2 DEBUG nova.virt.libvirt.vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.056 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.057 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.061 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.064 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <name>instance-0000002e</name>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:5f:75:d2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='tapf208f539-2c'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </target>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/3'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </console>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:40 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.065 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.067 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.080 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <name>instance-0000002e</name>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <system>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </system>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <os>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </os>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <features>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </features>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 podman[313954]: 2025-10-07 14:13:40.084199 +0000 UTC m=+0.067963908 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:5f:75:d2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target dev='tapf208f539-2c'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       </target>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/3'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <source path='/dev/pts/3'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </console>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </input>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <video>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </video>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:13:40 compute-0 nova_compute[259550]: </domain>
Oct 07 14:13:40 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.081 2 WARNING nova.virt.libvirt.driver [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Detaching interface fa:16:3e:47:a0:0a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapf66416c8-d3' not found.
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.082 2 DEBUG nova.virt.libvirt.vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.082 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.083 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.083 2 DEBUG os_vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66416c8-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.090 2 INFO os_vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.091 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:13:40</nova:creationTime>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct 07 14:13:40 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:13:40 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:13:40 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:13:40 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:13:40 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.114 2 INFO nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 1.90 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.114 2 DEBUG oslo.service.loopingcall [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.115 2 DEBUG nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:40 compute-0 nova_compute[259550]: 2025-10-07 14:13:40.115 2 DEBUG nova.network.neutron [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:40 compute-0 podman[313955]: 2025-10-07 14:13:40.130233639 +0000 UTC m=+0.108975555 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 07 14:13:40 compute-0 ceph-mon[74295]: pgmap v1472: 305 pgs: 305 active+clean; 170 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 252 op/s
Oct 07 14:13:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1629729109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.939256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846420939347, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 250, "total_data_size": 162314, "memory_usage": 168400, "flush_reason": "Manual Compaction"}
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846420986824, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 160352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30540, "largest_seqno": 30883, "table_properties": {"data_size": 158198, "index_size": 318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5974, "raw_average_key_size": 20, "raw_value_size": 153937, "raw_average_value_size": 523, "num_data_blocks": 14, "num_entries": 294, "num_filter_entries": 294, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846410, "oldest_key_time": 1759846410, "file_creation_time": 1759846420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 47615 microseconds, and 1992 cpu microseconds.
Oct 07 14:13:40 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.986878) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 160352 bytes OK
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.986907) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011424) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011455) EVENT_LOG_v1 {"time_micros": 1759846421011446, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 159968, prev total WAL file size 159968, number of live WAL files 2.
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.012056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(156KB)], [65(10126KB)]
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421012098, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10529627, "oldest_snapshot_seqno": -1}
Oct 07 14:13:41 compute-0 nova_compute[259550]: 2025-10-07 14:13:41.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5322 keys, 7236991 bytes, temperature: kUnknown
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421138254, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7236991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7201732, "index_size": 20877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134722, "raw_average_key_size": 25, "raw_value_size": 7106160, "raw_average_value_size": 1335, "num_data_blocks": 849, "num_entries": 5322, "num_filter_entries": 5322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.138686) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7236991 bytes
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.150981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.4 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(110.8) write-amplify(45.1) OK, records in: 5829, records dropped: 507 output_compression: NoCompression
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.151033) EVENT_LOG_v1 {"time_micros": 1759846421151012, "job": 36, "event": "compaction_finished", "compaction_time_micros": 126286, "compaction_time_cpu_micros": 27464, "output_level": 6, "num_output_files": 1, "total_output_size": 7236991, "num_input_records": 5829, "num_output_records": 5322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421151323, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421152803, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:13:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 35 KiB/s wr, 207 op/s
Oct 07 14:13:41 compute-0 ceph-mon[74295]: pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 35 KiB/s wr, 207 op/s
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.263 2 DEBUG nova.network.neutron [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.281 2 INFO nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 2.17 seconds to deallocate network for instance.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.323 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.323 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.327 2 INFO nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.327 2 DEBUG nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.340 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.368 2 DEBUG nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.370 2 WARNING nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.372 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.398 2 DEBUG oslo_concurrency.processutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.459 2 DEBUG nova.compute.manager [req-51859ba4-b350-4077-9088-190ac5433ed4 req-d5c53f1a-ad91-4792-83c1-731d1ede4182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-deleted-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.557 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.560 2 INFO nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Terminating instance
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.561 2 DEBUG nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:13:42 compute-0 kernel: tapf208f539-2c (unregistering): left promiscuous mode
Oct 07 14:13:42 compute-0 NetworkManager[44949]: <info>  [1759846422.6186] device (tapf208f539-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:13:42 compute-0 ovn_controller[151684]: 2025-10-07T14:13:42Z|00381|binding|INFO|Releasing lport f208f539-2cf1-4007-8806-5b4836d43c4f from this chassis (sb_readonly=0)
Oct 07 14:13:42 compute-0 ovn_controller[151684]: 2025-10-07T14:13:42Z|00382|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f down in Southbound
Oct 07 14:13:42 compute-0 ovn_controller[151684]: 2025-10-07T14:13:42Z|00383|binding|INFO|Removing iface tapf208f539-2c ovn-installed in OVS
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.632 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:75:d2 10.100.0.12'], port_security=['fa:16:3e:5f:75:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8acaacb-1526-452b-a139-17a738541bb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f208f539-2cf1-4007-8806-5b4836d43c4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.634 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f208f539-2cf1-4007-8806-5b4836d43c4f in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.635 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1d9f332-f920-4d6e-8e91-dd13ec334d51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1dfc8d9-66d2-4969-ac3b-348a2b781818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.636 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace which is not needed anymore
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct 07 14:13:42 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Consumed 14.667s CPU time.
Oct 07 14:13:42 compute-0 systemd-machined[214580]: Machine qemu-52-instance-0000002e terminated.
Oct 07 14:13:42 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : haproxy version is 2.8.14-c23fe91
Oct 07 14:13:42 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : path to executable is /usr/sbin/haproxy
Oct 07 14:13:42 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [WARNING]  (312253) : Exiting Master process...
Oct 07 14:13:42 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [ALERT]    (312253) : Current worker (312255) exited with code 143 (Terminated)
Oct 07 14:13:42 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [WARNING]  (312253) : All workers exited. Exiting... (0)
Oct 07 14:13:42 compute-0 systemd[1]: libpod-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope: Deactivated successfully.
Oct 07 14:13:42 compute-0 podman[314043]: 2025-10-07 14:13:42.785503095 +0000 UTC m=+0.053903439 container died 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.807 2 INFO nova.virt.libvirt.driver [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance destroyed successfully.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.808 2 DEBUG nova.objects.instance [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba-userdata-shm.mount: Deactivated successfully.
Oct 07 14:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea5f871018c8e56c6586c7466d5febec36105738892f6f280160d66dce07ac2e-merged.mount: Deactivated successfully.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.835 2 DEBUG nova.virt.libvirt.vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:13:42 compute-0 podman[314043]: 2025-10-07 14:13:42.836151956 +0000 UTC m=+0.104552310 container cleanup 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.835 2 DEBUG nova.network.os_vif_util [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.837 2 DEBUG nova.network.os_vif_util [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.839 2 DEBUG os_vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf208f539-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700400190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.849 2 INFO os_vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c')
Oct 07 14:13:42 compute-0 systemd[1]: libpod-conmon-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope: Deactivated successfully.
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.866 2 DEBUG oslo_concurrency.processutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.872 2 DEBUG nova.compute.provider_tree [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.894 2 DEBUG nova.scheduler.client.report [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.915 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:42 compute-0 podman[314084]: 2025-10-07 14:13:42.929458209 +0000 UTC m=+0.063275914 container remove 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.935 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb587ff-752f-4bab-8d99-5b78dd13743a]: (4, ('Tue Oct  7 02:13:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba)\n51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba\nTue Oct  7 02:13:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba)\n51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74e9daf5-e7dc-4546-b91b-4b2ae127805c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.938 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 kernel: tapb1d9f332-f0: left promiscuous mode
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.944 2 INFO nova.scheduler.client.report [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance e2c535ed-c6cf-4684-82dc-ae59904e7381
Oct 07 14:13:42 compute-0 nova_compute[259550]: 2025-10-07 14:13:42.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1700400190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e88a8acc-01b7-4734-bb47-7f82dc546f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.988 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0141619-bc28-4588-9180-bdd8f4946ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac05583c-0bbe-44f3-b4c1-744f63892c61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e56edc-cb9c-41bf-8f87-dbfd0382ac34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697293, 'reachable_time': 28118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314118, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:43 compute-0 systemd[1]: run-netns-ovnmeta\x2db1d9f332\x2df920\x2d4d6e\x2d8e91\x2ddd13ec334d51.mount: Deactivated successfully.
Oct 07 14:13:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.009 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:13:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.009 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2c444cae-2469-4f4e-85e2-6e3f88b832e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.014 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.300 2 INFO nova.virt.libvirt.driver [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deleting instance files /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_del
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.301 2 INFO nova.virt.libvirt.driver [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deletion of /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_del complete
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.376 2 INFO nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.376 2 DEBUG oslo.service.loopingcall [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.377 2 DEBUG nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:13:43 compute-0 nova_compute[259550]: 2025-10-07 14:13:43.377 2 DEBUG nova.network.neutron [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:13:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 138 op/s
Oct 07 14:13:43 compute-0 ceph-mon[74295]: pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 138 op/s
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.267 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846409.265955, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.268 2 INFO nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Stopped (Lifecycle Event)
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.286 2 DEBUG nova.compute.manager [None req-882c62fe-f037-4849-84c0-f59dfade20b3 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.398 2 DEBUG nova.network.neutron [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.427 2 INFO nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 1.05 seconds to deallocate network for instance.
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.465 2 DEBUG nova.compute.manager [req-bbfa1ad6-ab88-4f8e-8de4-e429eabc95dc req-ba2bce58-d157-4995-9997-9b43f4f38841 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-deleted-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.479 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.480 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.537 2 DEBUG oslo_concurrency.processutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.580 2 WARNING nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state deleted and task_state None.
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.581 2 WARNING nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state deleted and task_state None.
Oct 07 14:13:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218718914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.992 2 DEBUG oslo_concurrency.processutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:44 compute-0 nova_compute[259550]: 2025-10-07 14:13:44.999 2 DEBUG nova.compute.provider_tree [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:45 compute-0 nova_compute[259550]: 2025-10-07 14:13:45.017 2 DEBUG nova.scheduler.client.report [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2218718914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:45 compute-0 nova_compute[259550]: 2025-10-07 14:13:45.043 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:45 compute-0 nova_compute[259550]: 2025-10-07 14:13:45.065 2 INFO nova.scheduler.client.report [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1
Oct 07 14:13:45 compute-0 nova_compute[259550]: 2025-10-07 14:13:45.127 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 176 op/s
Oct 07 14:13:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:45 compute-0 nova_compute[259550]: 2025-10-07 14:13:45.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:46 compute-0 ceph-mon[74295]: pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 176 op/s
Oct 07 14:13:46 compute-0 nova_compute[259550]: 2025-10-07 14:13:46.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:46 compute-0 nova_compute[259550]: 2025-10-07 14:13:46.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct 07 14:13:47 compute-0 nova_compute[259550]: 2025-10-07 14:13:47.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:48 compute-0 ceph-mon[74295]: pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct 07 14:13:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct 07 14:13:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:51 compute-0 ceph-mon[74295]: pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct 07 14:13:51 compute-0 nova_compute[259550]: 2025-10-07 14:13:51.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:51 compute-0 nova_compute[259550]: 2025-10-07 14:13:51.688 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846416.6867077, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:51 compute-0 nova_compute[259550]: 2025-10-07 14:13:51.688 2 INFO nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Stopped (Lifecycle Event)
Oct 07 14:13:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Oct 07 14:13:51 compute-0 nova_compute[259550]: 2025-10-07 14:13:51.711 2 DEBUG nova.compute.manager [None req-e4851b99-bdcb-4014-a776-61fa336244aa - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:52 compute-0 ceph-mon[74295]: pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Oct 07 14:13:52 compute-0 nova_compute[259550]: 2025-10-07 14:13:52.328 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846417.3231442, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:52 compute-0 nova_compute[259550]: 2025-10-07 14:13:52.328 2 INFO nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Stopped (Lifecycle Event)
Oct 07 14:13:52 compute-0 nova_compute[259550]: 2025-10-07 14:13:52.347 2 DEBUG nova.compute.manager [None req-147d87a4-92d7-426a-9e72-c7398a72874f - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:13:52 compute-0 nova_compute[259550]: 2025-10-07 14:13:52.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.481 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.482 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.500 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.571 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.572 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.579 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.580 2 INFO nova.compute.claims [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:13:53 compute-0 nova_compute[259550]: 2025-10-07 14:13:53.681 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct 07 14:13:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:13:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3981732767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.144 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.151 2 DEBUG nova.compute.provider_tree [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.165 2 DEBUG nova.scheduler.client.report [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.183 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.184 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.273 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.274 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.301 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.344 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.472 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.474 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.474 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating image(s)
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.497 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.527 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.556 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.562 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.637 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.638 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.639 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.639 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.663 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.667 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 eb8777f3-5daa-49c7-8994-687012f20453_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:13:54 compute-0 nova_compute[259550]: 2025-10-07 14:13:54.707 2 DEBUG nova.policy [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:13:54 compute-0 ceph-mon[74295]: pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct 07 14:13:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3981732767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:13:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct 07 14:13:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:13:55 compute-0 nova_compute[259550]: 2025-10-07 14:13:55.995 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 eb8777f3-5daa-49c7-8994-687012f20453_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.063 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.165 2 DEBUG nova.objects.instance [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.180 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.181 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Ensure instance console log exists: /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.181 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.182 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:13:56 compute-0 nova_compute[259550]: 2025-10-07 14:13:56.182 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:13:56 compute-0 ceph-mon[74295]: pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct 07 14:13:57 compute-0 nova_compute[259550]: 2025-10-07 14:13:57.566 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: fdcb59f4-9f89-4147-941b-a28bfa0621bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:13:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 47 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 136 KiB/s wr, 3 op/s
Oct 07 14:13:57 compute-0 nova_compute[259550]: 2025-10-07 14:13:57.804 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846422.8031216, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:13:57 compute-0 nova_compute[259550]: 2025-10-07 14:13:57.805 2 INFO nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Stopped (Lifecycle Event)
Oct 07 14:13:57 compute-0 nova_compute[259550]: 2025-10-07 14:13:57.828 2 DEBUG nova.compute.manager [None req-62949155-a9eb-4472-894e-8f5bc3ac401a - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:13:57 compute-0 nova_compute[259550]: 2025-10-07 14:13:57.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:13:58 compute-0 podman[314333]: 2025-10-07 14:13:58.078848012 +0000 UTC m=+0.064695113 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct 07 14:13:58 compute-0 podman[314332]: 2025-10-07 14:13:58.07954796 +0000 UTC m=+0.066433317 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:13:58 compute-0 ceph-mon[74295]: pgmap v1481: 305 pgs: 305 active+clean; 47 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 136 KiB/s wr, 3 op/s
Oct 07 14:13:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 73 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 07 14:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.104 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: fdcb59f4-9f89-4147-941b-a28bfa0621bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.133 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.134 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.134 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.290 2 DEBUG nova.compute.manager [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.290 2 DEBUG nova.compute.manager [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.291 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:00.428 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:14:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:00 compute-0 ceph-mon[74295]: pgmap v1482: 305 pgs: 305 active+clean; 73 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.955 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:01 compute-0 ceph-mon[74295]: pgmap v1483: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.974 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.974 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance network_info: |[{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.975 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.975 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.979 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start _get_guest_xml network_info=[{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.985 2 WARNING nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.995 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:14:01 compute-0 nova_compute[259550]: 2025-10-07 14:14:01.996 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.000 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.000 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.001 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.001 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.002 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.002 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.004 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.004 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.009 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646856935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.478 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.508 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.513 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:02 compute-0 nova_compute[259550]: 2025-10-07 14:14:02.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1646856935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.001 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.003 2 DEBUG nova.virt.libvirt.vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.003 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.004 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.005 2 DEBUG nova.objects.instance [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.023 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <name>instance-00000031</name>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:14:01</nova:creationTime>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:14:03 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="serial">eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="uuid">eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/eb8777f3-5daa-49c7-8994-687012f20453_disk">
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config">
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:65:98:6b"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <target dev="tapfdcb59f4-9f"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log" append="off"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:14:03 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:14:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:14:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:14:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:14:03 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.025 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Preparing to wait for external event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.027 2 DEBUG nova.virt.libvirt.vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.027 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.028 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.029 2 DEBUG os_vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdcb59f4-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdcb59f4-9f, col_values=(('external_ids', {'iface-id': 'fdcb59f4-9f89-4147-941b-a28bfa0621bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:98:6b', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:03 compute-0 NetworkManager[44949]: <info>  [1759846443.0387] manager: (tapfdcb59f4-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.044 2 INFO os_vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f')
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.098 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.099 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.099 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.100 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Using config drive
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.119 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.635 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating config drive at /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.643 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph735y8px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.689 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.690 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.707 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.795 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph735y8px" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.828 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.833 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config eb8777f3-5daa-49c7-8994-687012f20453_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:03.881 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:03 compute-0 nova_compute[259550]: 2025-10-07 14:14:03.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:03.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:14:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1942696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:03 compute-0 ceph-mon[74295]: pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.008 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config eb8777f3-5daa-49c7-8994-687012f20453_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.009 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deleting local config drive /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config because it was imported into RBD.
Oct 07 14:14:04 compute-0 kernel: tapfdcb59f4-9f: entered promiscuous mode
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.0752] manager: (tapfdcb59f4-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Oct 07 14:14:04 compute-0 ovn_controller[151684]: 2025-10-07T14:14:04Z|00384|binding|INFO|Claiming lport fdcb59f4-9f89-4147-941b-a28bfa0621bf for this chassis.
Oct 07 14:14:04 compute-0 ovn_controller[151684]: 2025-10-07T14:14:04Z|00385|binding|INFO|fdcb59f4-9f89-4147-941b-a28bfa0621bf: Claiming fa:16:3e:65:98:6b 10.100.0.13
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.091 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:98:6b 10.100.0.13'], port_security=['fa:16:3e:65:98:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '206b777f-07ac-463f-aac7-dcc9bac5a7aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fdcb59f4-9f89-4147-941b-a28bfa0621bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.092 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fdcb59f4-9f89-4147-941b-a28bfa0621bf in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.094 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dcce0a-3912-4b6f-b15b-c5b1528bb5cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.112 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.115 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99aeea4c-051b-441b-b326-c84d94e92f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.116 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd26903-f461-4e64-a795-c9a98d280a2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 systemd-machined[214580]: New machine qemu-56-instance-00000031.
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.131 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[66fc4cb9-dd5e-406a-95ae-1ee98a46e9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000031.
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47eb35fd-bb19-4e5d-a0e7-3b0e33fc0b53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_controller[151684]: 2025-10-07T14:14:04Z|00386|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf ovn-installed in OVS
Oct 07 14:14:04 compute-0 ovn_controller[151684]: 2025-10-07T14:14:04Z|00387|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf up in Southbound
Oct 07 14:14:04 compute-0 systemd-udevd[314509]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.1834] device (tapfdcb59f4-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.1868] device (tapfdcb59f4-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.193 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[224985a4-7409-41d1-bf37-d8d3cb235ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 systemd-udevd[314513]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.199 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00cb15ba-f1ac-43e7-b41e-ff9297fdfd5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.2004] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.234 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0401777-6bf1-4a29-9b38-faa5cc955966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.237 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[47709525-fee0-4b40-a22f-df8f1a03a664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.2624] device (tapb1d9f332-f0): carrier: link connected
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.270 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a62b831-5b24-4726-bf58-e64758f1063a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.292 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[083fd49e-c354-47bd-9c34-54d762b53780]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314539, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12698e4f-8132-41fe-a18f-b26ea9c59d9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702782, 'tstamp': 702782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314540, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.332 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c07a3c84-2f24-4a8e-8292-b5f2100ebd2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314541, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.371 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[367ef44c-1c3e-41dd-adbe-d0eae43611c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c34c97bc-8baf-4f8a-b031-7b2a05138ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.427 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.427 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.428 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:04 compute-0 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct 07 14:14:04 compute-0 NetworkManager[44949]: <info>  [1759846444.4310] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:04 compute-0 ovn_controller[151684]: 2025-10-07T14:14:04Z|00388|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 nova_compute[259550]: 2025-10-07 14:14:04.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.450 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e70ee2e1-1faf-4846-95ef-91e6666e4f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.452 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:14:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.453 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:14:04 compute-0 podman[314573]: 2025-10-07 14:14:04.829029233 +0000 UTC m=+0.060305476 container create 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:04 compute-0 systemd[1]: Started libpod-conmon-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope.
Oct 07 14:14:04 compute-0 podman[314573]: 2025-10-07 14:14:04.799470546 +0000 UTC m=+0.030746809 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:14:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16461d24c3ac066e546c480d661cadac4a5387b7af213607c5bfba84e3d1e3d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:04 compute-0 podman[314573]: 2025-10-07 14:14:04.930023057 +0000 UTC m=+0.161299300 container init 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:14:04 compute-0 podman[314573]: 2025-10-07 14:14:04.938127861 +0000 UTC m=+0.169404104 container start 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:14:04 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : New worker (314594) forked
Oct 07 14:14:04 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : Loading success.
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.111 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.113 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.129 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.210 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.211 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.222 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.223 2 INFO nova.compute.claims [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.331 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.651 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846445.6508718, eb8777f3-5daa-49c7-8994-687012f20453 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.652 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Started (Lifecycle Event)
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.673 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846445.6510162, eb8777f3-5daa-49c7-8994-687012f20453 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.678 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Paused (Lifecycle Event)
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.697 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.700 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.719 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276450104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.818 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.824 2 DEBUG nova.compute.provider_tree [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.841 2 DEBUG nova.scheduler.client.report [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.868 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.868 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:14:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:05.887 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.907 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.908 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.930 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:14:05 compute-0 nova_compute[259550]: 2025-10-07 14:14:05.951 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:14:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.036 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.037 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.038 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating image(s)
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.059 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.084 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.105 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.109 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.200 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.201 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.202 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.202 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.242 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.246 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:06 compute-0 nova_compute[259550]: 2025-10-07 14:14:06.593 2 DEBUG nova.policy [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:06 compute-0 ceph-mon[74295]: pgmap v1485: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:14:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4276450104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:07 compute-0 sudo[314763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:07 compute-0 sudo[314763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:07 compute-0 sudo[314763]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:07 compute-0 sudo[314788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:14:07 compute-0 sudo[314788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:07 compute-0 sudo[314788]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:07 compute-0 sudo[314813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:07 compute-0 sudo[314813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:07 compute-0 sudo[314813]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:07 compute-0 sudo[314838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:14:07 compute-0 sudo[314838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:14:07 compute-0 nova_compute[259550]: 2025-10-07 14:14:07.735 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:07 compute-0 sudo[314838]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:14:07 compute-0 nova_compute[259550]: 2025-10-07 14:14:07.825 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:14:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:14:08 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.123 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.124 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:08 compute-0 sudo[314938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:08 compute-0 sudo[314938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:08 compute-0 sudo[314938]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.142 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:14:08 compute-0 sudo[314963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:14:08 compute-0 sudo[314963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:08 compute-0 sudo[314963]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.207 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.208 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.215 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.215 2 INFO nova.compute.claims [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:14:08 compute-0 sudo[314988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:08 compute-0 sudo[314988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:08 compute-0 sudo[314988]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:08 compute-0 sudo[315013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:14:08 compute-0 sudo[315013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.354 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.451 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Successfully created port: 432c69dd-fb1b-432b-b867-9fe29716430d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:14:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429720248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.837 2 DEBUG nova.objects.instance [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.853 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Ensure instance console log exists: /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.855 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.858 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.865 2 DEBUG nova.compute.provider_tree [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:08 compute-0 sudo[315013]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.883 2 DEBUG nova.scheduler.client.report [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:08 compute-0 ceph-mon[74295]: pgmap v1486: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:14:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3429720248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.911 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.912 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.962 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.962 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:08 compute-0 nova_compute[259550]: 2025-10-07 14:14:08.992 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.011 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.021 2 DEBUG nova.compute.manager [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.023 2 DEBUG nova.compute.manager [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Processing event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.024 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.029 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.030 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846449.0294657, eb8777f3-5daa-49c7-8994-687012f20453 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Resumed (Lifecycle Event)
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.035 2 INFO nova.virt.libvirt.driver [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance spawned successfully.
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.036 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.068 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.068 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.069 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.069 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.070 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.070 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.078 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c69585fa-af40-416f-b06f-871498756628 does not exist
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b826e68f-987d-44dd-94e2-3b361c4c29b3 does not exist
Oct 07 14:14:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 175fcdcb-6510-4efa-96b2-8185098a148f does not exist
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.129 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:14:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.142 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.143 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.144 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating image(s)
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.167 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:09 compute-0 sudo[315109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:09 compute-0 sudo[315109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:09 compute-0 sudo[315109]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.207 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.248 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.257 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:09 compute-0 sudo[315170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:14:09 compute-0 sudo[315170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:09 compute-0 sudo[315170]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.302 2 DEBUG nova.policy [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fae6fe12a4234cd28439c010bdf3e497', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '304ddc5a55af455ba608d37c37f217aa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.309 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Successfully updated port: 432c69dd-fb1b-432b-b867-9fe29716430d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.314 2 INFO nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 14.84 seconds to spawn the instance on the hypervisor.
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.314 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.317 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.317 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:09 compute-0 sudo[315213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:09 compute-0 sudo[315213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:09 compute-0 sudo[315213]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.343 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.345 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.345 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.346 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.369 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.373 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f95ef1e3-b526-4516-a982-e1e415c5a657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:09 compute-0 sudo[315239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:14:09 compute-0 sudo[315239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.412 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.413 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.413 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.414 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.425 2 INFO nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 15.88 seconds to build instance.
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.457 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.486 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.487 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.493 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.493 2 INFO nova.compute.claims [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG nova.compute.manager [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG nova.compute.manager [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing instance network info cache due to event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.660 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.665 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 119 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.800733537 +0000 UTC m=+0.068676757 container create 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:09 compute-0 systemd[1]: Started libpod-conmon-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope.
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.760600462 +0000 UTC m=+0.028543712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.878 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f95ef1e3-b526-4516-a982-e1e415c5a657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:14:09 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.908032407 +0000 UTC m=+0.175975657 container init 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.918132963 +0000 UTC m=+0.186076183 container start 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.922305622 +0000 UTC m=+0.190248862 container attach 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:09 compute-0 silly_perlman[315376]: 167 167
Oct 07 14:14:09 compute-0 systemd[1]: libpod-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope: Deactivated successfully.
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.927053898 +0000 UTC m=+0.194997128 container died 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76436bcab89d0c24a4362bb4b313e4194d8edd260dde816852289fa66db7bca-merged.mount: Deactivated successfully.
Oct 07 14:14:09 compute-0 podman[315342]: 2025-10-07 14:14:09.971572158 +0000 UTC m=+0.239515378 container remove 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:14:09 compute-0 systemd[1]: libpod-conmon-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope: Deactivated successfully.
Oct 07 14:14:09 compute-0 nova_compute[259550]: 2025-10-07 14:14:09.988 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] resizing rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.098 2 DEBUG nova.objects.instance [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'migration_context' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.115 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Ensure instance console log exists: /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.117 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.160 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Successfully created port: fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:14:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429707016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:10 compute-0 podman[315471]: 2025-10-07 14:14:10.19308661 +0000 UTC m=+0.047847658 container create 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.201 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.219 2 DEBUG nova.compute.provider_tree [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:10 compute-0 systemd[1]: Started libpod-conmon-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope.
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.245 2 DEBUG nova.scheduler.client.report [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.269 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.270 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:14:10 compute-0 podman[315471]: 2025-10-07 14:14:10.173124705 +0000 UTC m=+0.027885773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:10 compute-0 podman[315471]: 2025-10-07 14:14:10.288730174 +0000 UTC m=+0.143491252 container init 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:14:10 compute-0 podman[315471]: 2025-10-07 14:14:10.297678299 +0000 UTC m=+0.152439337 container start 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:14:10 compute-0 podman[315471]: 2025-10-07 14:14:10.301218172 +0000 UTC m=+0.155979240 container attach 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.316 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.317 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.335 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:14:10 compute-0 podman[315488]: 2025-10-07 14:14:10.337709972 +0000 UTC m=+0.094881956 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.355 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:14:10 compute-0 podman[315491]: 2025-10-07 14:14:10.377620451 +0000 UTC m=+0.134283861 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.459 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.460 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.461 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating image(s)
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.487 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.513 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.541 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.546 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.591 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.594 2 DEBUG nova.policy [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed6b624fb80d4d4d9b897c788b614297', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.617 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.617 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance network_info: |[{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.618 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.618 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.623 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start _get_guest_xml network_info=[{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.640 2 WARNING nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.643 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.644 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.645 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.645 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.674 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.679 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6be00afd-ab65-48db-a575-23a285419e60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.735 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.736 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.741 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.742 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.743 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:14:10 compute-0 nova_compute[259550]: 2025-10-07 14:14:10.750 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:10 compute-0 ceph-mon[74295]: pgmap v1487: 305 pgs: 305 active+clean; 119 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Oct 07 14:14:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2429707016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.150 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6be00afd-ab65-48db-a575-23a285419e60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.224 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] resizing rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:14:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111425089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.264 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.285 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.290 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.381 2 DEBUG nova.objects.instance [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.400 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.401 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Ensure instance console log exists: /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.401 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.402 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.403 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:11 compute-0 fervent_tharp[315492]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:14:11 compute-0 fervent_tharp[315492]: --> relative data size: 1.0
Oct 07 14:14:11 compute-0 fervent_tharp[315492]: --> All data devices are unavailable
Oct 07 14:14:11 compute-0 systemd[1]: libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Deactivated successfully.
Oct 07 14:14:11 compute-0 systemd[1]: libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Consumed 1.066s CPU time.
Oct 07 14:14:11 compute-0 conmon[315492]: conmon 37d3c3491ccd6295b1e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope/container/memory.events
Oct 07 14:14:11 compute-0 podman[315471]: 2025-10-07 14:14:11.48582089 +0000 UTC m=+1.340581938 container died 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261-merged.mount: Deactivated successfully.
Oct 07 14:14:11 compute-0 podman[315471]: 2025-10-07 14:14:11.553117109 +0000 UTC m=+1.407878157 container remove 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:14:11 compute-0 systemd[1]: libpod-conmon-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Deactivated successfully.
Oct 07 14:14:11 compute-0 sudo[315239]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:11 compute-0 sudo[315801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:11 compute-0 sudo[315801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:11 compute-0 sudo[315801]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct 07 14:14:11 compute-0 sudo[315826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:14:11 compute-0 sudo[315826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:11 compute-0 sudo[315826]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250958552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.806 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.807 2 DEBUG nova.virt.libvirt.vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.808 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.809 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.810 2 DEBUG nova.objects.instance [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:11 compute-0 sudo[315851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.824 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <uuid>fc163bed-856c-4ea5-9bf3-6989fb1027eb</uuid>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <name>instance-00000032</name>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherA-server-307445853</nova:name>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:14:10</nova:creationTime>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:11 compute-0 sudo[315851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <nova:port uuid="432c69dd-fb1b-432b-b867-9fe29716430d">
Oct 07 14:14:11 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <system>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="serial">fc163bed-856c-4ea5-9bf3-6989fb1027eb</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="uuid">fc163bed-856c-4ea5-9bf3-6989fb1027eb</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </system>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <os>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </os>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <features>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </features>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk">
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config">
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:11 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a6:54:7c"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <target dev="tap432c69dd-fb"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/console.log" append="off"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <video>
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </video>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 sudo[315851]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:14:11 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:14:11 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:14:11 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:14:11 compute-0 nova_compute[259550]: </domain>
Oct 07 14:14:11 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.825 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Preparing to wait for external event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.825 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.826 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.826 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.827 2 DEBUG nova.virt.libvirt.vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.828 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.829 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.830 2 DEBUG os_vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap432c69dd-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap432c69dd-fb, col_values=(('external_ids', {'iface-id': '432c69dd-fb1b-432b-b867-9fe29716430d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:54:7c', 'vm-uuid': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:11 compute-0 NetworkManager[44949]: <info>  [1759846451.8431] manager: (tap432c69dd-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:11 compute-0 nova_compute[259550]: 2025-10-07 14:14:11.851 2 INFO os_vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb')
Oct 07 14:14:11 compute-0 sudo[315878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:14:11 compute-0 sudo[315878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1111425089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3250958552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.027 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.028 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.028 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:a6:54:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.029 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Using config drive
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.054 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.062 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Successfully updated port: fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.088 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.091 2 DEBUG nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.091 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.092 2 WARNING nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with vm_state active and task_state None.
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.273851565 +0000 UTC m=+0.055639324 container create 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.241601237 +0000 UTC m=+0.023389006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:12 compute-0 systemd[1]: Started libpod-conmon-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope.
Oct 07 14:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.397 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.41863437 +0000 UTC m=+0.200422149 container init 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.429450854 +0000 UTC m=+0.211238603 container start 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.436477189 +0000 UTC m=+0.218264938 container attach 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:14:12 compute-0 bold_lalande[315980]: 167 167
Oct 07 14:14:12 compute-0 systemd[1]: libpod-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope: Deactivated successfully.
Oct 07 14:14:12 compute-0 conmon[315980]: conmon 6daa10359846b8b0ba5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope/container/memory.events
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.443577726 +0000 UTC m=+0.225365475 container died 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1728b64334aca8f854e9dceb515aece9ad7b80e898a332c41d48ac1660d10593-merged.mount: Deactivated successfully.
Oct 07 14:14:12 compute-0 podman[315964]: 2025-10-07 14:14:12.487344876 +0000 UTC m=+0.269132625 container remove 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:14:12 compute-0 systemd[1]: libpod-conmon-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope: Deactivated successfully.
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.655 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating config drive at /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.661 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2bl1ps_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:12 compute-0 podman[316004]: 2025-10-07 14:14:12.68456776 +0000 UTC m=+0.059236998 container create a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:14:12 compute-0 systemd[1]: Started libpod-conmon-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope.
Oct 07 14:14:12 compute-0 podman[316004]: 2025-10-07 14:14:12.655897317 +0000 UTC m=+0.030566615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.759 2 DEBUG nova.compute.manager [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-changed-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.759 2 DEBUG nova.compute.manager [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Refreshing instance network info cache due to event network-changed-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.760 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:12 compute-0 podman[316004]: 2025-10-07 14:14:12.801900215 +0000 UTC m=+0.176569483 container init a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:14:12 compute-0 podman[316004]: 2025-10-07 14:14:12.810459919 +0000 UTC m=+0.185129157 container start a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:12 compute-0 podman[316004]: 2025-10-07 14:14:12.814207638 +0000 UTC m=+0.188876896 container attach a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:14:12 compute-0 ceph-mon[74295]: pgmap v1488: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.979 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated VIF entry in instance network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.980 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.984 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2bl1ps_" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:12 compute-0 nova_compute[259550]: 2025-10-07 14:14:12.985 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Successfully created port: e4ca2e16-9053-460a-9aee-cd724306083e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.010 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.013 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.057 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.189 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.1912] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.1927] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.190 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deleting local config drive /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config because it was imported into RBD.
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.2741] manager: (tap432c69dd-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct 07 14:14:13 compute-0 systemd-udevd[316075]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:13 compute-0 systemd-machined[214580]: New machine qemu-57-instance-00000032.
Oct 07 14:14:13 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000032.
Oct 07 14:14:13 compute-0 kernel: tap432c69dd-fb: entered promiscuous mode
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.4108] device (tap432c69dd-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.4134] device (tap432c69dd-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00389|binding|INFO|Claiming lport 432c69dd-fb1b-432b-b867-9fe29716430d for this chassis.
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00390|binding|INFO|432c69dd-fb1b-432b-b867-9fe29716430d: Claiming fa:16:3e:a6:54:7c 10.100.0.7
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00391|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.448 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:54:7c 10.100.0.7'], port_security=['fa:16:3e:a6:54:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a36afbb-3eb8-42d9-b597-25ad85279a69', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=432c69dd-fb1b-432b-b867-9fe29716430d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.449 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 432c69dd-fb1b-432b-b867-9fe29716430d in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.451 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00392|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d ovn-installed in OVS
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00393|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d up in Southbound
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.466 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66561416-181d-42d9-92ba-820c25f96c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.467 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ba9d553-b1 in ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.469 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ba9d553-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[794e375d-b3d1-4499-9cf7-e2ddb9c63905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ce8c8-4664-43ba-a02b-438bb7bb1b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.484 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6dccf0-6e44-4804-b70b-59437e86e6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a867c225-c0f8-4d3d-bf74-171ef649ac19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.547 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe047b6-44dc-494f-9ca1-8187db8ebf93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.5558] manager: (tap7ba9d553-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Oct 07 14:14:13 compute-0 systemd-udevd[316079]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.557 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[741a8992-d2d2-40b1-afa9-3c32acce5637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.600 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93602a8b-96f0-48b8-9858-a6660a6e6e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.604 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[28b34068-e7de-46b5-94ac-a536af9b1b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.6322] device (tap7ba9d553-b0): carrier: link connected
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.639 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d03ae228-d749-4dd8-9a69-8f30269021cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8dac0fb0-fbaf-4daa-b13b-b8eac91152d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316113, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb246f6-5841-4a36-9360-f870e73205ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:7b82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703719, 'tstamp': 703719}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316116, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 768 KiB/s rd, 2.8 MiB/s wr, 83 op/s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.710 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2939db-b707-4e05-a277-36dbc5df31bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316117, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 reverent_carson[316023]: {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     "0": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "devices": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "/dev/loop3"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             ],
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_name": "ceph_lv0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_size": "21470642176",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "name": "ceph_lv0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "tags": {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_name": "ceph",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.crush_device_class": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.encrypted": "0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_id": "0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.vdo": "0"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             },
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "vg_name": "ceph_vg0"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         }
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     ],
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     "1": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "devices": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "/dev/loop4"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             ],
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_name": "ceph_lv1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_size": "21470642176",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "name": "ceph_lv1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "tags": {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_name": "ceph",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.crush_device_class": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.encrypted": "0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_id": "1",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.vdo": "0"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             },
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "vg_name": "ceph_vg1"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         }
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     ],
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     "2": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "devices": [
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "/dev/loop5"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             ],
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_name": "ceph_lv2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_size": "21470642176",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "name": "ceph_lv2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "tags": {
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.cluster_name": "ceph",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.crush_device_class": "",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.encrypted": "0",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osd_id": "2",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:                 "ceph.vdo": "0"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             },
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "type": "block",
Oct 07 14:14:13 compute-0 reverent_carson[316023]:             "vg_name": "ceph_vg2"
Oct 07 14:14:13 compute-0 reverent_carson[316023]:         }
Oct 07 14:14:13 compute-0 reverent_carson[316023]:     ]
Oct 07 14:14:13 compute-0 reverent_carson[316023]: }
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d829443-8c8f-42e9-80ad-6b33ec351581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 systemd[1]: libpod-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope: Deactivated successfully.
Oct 07 14:14:13 compute-0 podman[316004]: 2025-10-07 14:14:13.760143522 +0000 UTC m=+1.134812770 container died a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5-merged.mount: Deactivated successfully.
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.822 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:13 compute-0 podman[316004]: 2025-10-07 14:14:13.837076384 +0000 UTC m=+1.211745622 container remove a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.846 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.847 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance network_info: |[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.848 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.849 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Refreshing network info cache for port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e72c58-6710-4e84-8759-e432eb46d1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.851 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.851 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.852 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.852 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start _get_guest_xml network_info=[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 NetworkManager[44949]: <info>  [1759846453.8550] manager: (tap7ba9d553-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 07 14:14:13 compute-0 kernel: tap7ba9d553-b0: entered promiscuous mode
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 systemd[1]: libpod-conmon-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope: Deactivated successfully.
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.861 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 ovn_controller[151684]: 2025-10-07T14:14:13Z|00394|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:14:13 compute-0 sudo[315878]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.890 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.891 2 WARNING nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3702b8b7-9f22-4b69-b254-94808446ba06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.892 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:14:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.893 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'env', 'PROCESS_TAG=haproxy-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.898 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.900 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.903 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.904 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.905 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.905 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:14:13 compute-0 nova_compute[259550]: 2025-10-07 14:14:13.912 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:13 compute-0 sudo[316177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:13 compute-0 sudo[316177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:13 compute-0 sudo[316177]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:14 compute-0 ceph-mon[74295]: pgmap v1489: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 768 KiB/s rd, 2.8 MiB/s wr, 83 op/s
Oct 07 14:14:14 compute-0 sudo[316207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:14:14 compute-0 sudo[316207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:14 compute-0 sudo[316207]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:14 compute-0 sudo[316232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:14 compute-0 sudo[316232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:14 compute-0 sudo[316232]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:14 compute-0 sudo[316276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:14:14 compute-0 sudo[316276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.194 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.196 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:14 compute-0 podman[316322]: 2025-10-07 14:14:14.33985153 +0000 UTC m=+0.054068781 container create f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:14:14 compute-0 systemd[1]: Started libpod-conmon-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope.
Oct 07 14:14:14 compute-0 podman[316322]: 2025-10-07 14:14:14.308391524 +0000 UTC m=+0.022608785 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:14:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a901171075059a0193ddbd3fb0f9b59df344514fed4a53bc1eb915147968b7ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.433 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846454.433579, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.434 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Started (Lifecycle Event)
Oct 07 14:14:14 compute-0 podman[316322]: 2025-10-07 14:14:14.44710334 +0000 UTC m=+0.161320601 container init f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.453 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:14 compute-0 podman[316322]: 2025-10-07 14:14:14.457981446 +0000 UTC m=+0.172198687 container start f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.458 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846454.4336905, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.459 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Paused (Lifecycle Event)
Oct 07 14:14:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164028786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:14 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : New worker (316368) forked
Oct 07 14:14:14 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : Loading success.
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.486 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.492 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.503 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.529 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.536 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.590 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Successfully updated port: e4ca2e16-9053-460a-9aee-cd724306083e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.592 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.616 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.617 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.617 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.631310752 +0000 UTC m=+0.049412990 container create d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:14 compute-0 systemd[1]: Started libpod-conmon-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope.
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.616150084 +0000 UTC m=+0.034252342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.758206767 +0000 UTC m=+0.176309025 container init d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.768603571 +0000 UTC m=+0.186705809 container start d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.772650037 +0000 UTC m=+0.190752295 container attach d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:14:14 compute-0 quizzical_leakey[316434]: 167 167
Oct 07 14:14:14 compute-0 systemd[1]: libpod-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope: Deactivated successfully.
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.778210313 +0000 UTC m=+0.196312551 container died d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f16f68329b92d251204b4cd4926c0be14aa313a60ceab31ae681ced3c63c5cc-merged.mount: Deactivated successfully.
Oct 07 14:14:14 compute-0 podman[316408]: 2025-10-07 14:14:14.819368875 +0000 UTC m=+0.237471113 container remove d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:14:14 compute-0 systemd[1]: libpod-conmon-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope: Deactivated successfully.
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.832 2 DEBUG nova.compute.manager [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.835 2 DEBUG nova.compute.manager [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing instance network info cache due to event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.835 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:14 compute-0 nova_compute[259550]: 2025-10-07 14:14:14.884 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:14:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3164028786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:15 compute-0 podman[316470]: 2025-10-07 14:14:15.028903843 +0000 UTC m=+0.057555635 container create a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:14:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195307386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:15 compute-0 systemd[1]: Started libpod-conmon-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope.
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.073 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.074 2 DEBUG nova.virt.libvirt.vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:09Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.075 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.075 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.077 2 DEBUG nova.objects.instance [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'pci_devices' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:15 compute-0 podman[316470]: 2025-10-07 14:14:15.006024131 +0000 UTC m=+0.034675973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.094 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <uuid>f95ef1e3-b526-4516-a982-e1e415c5a657</uuid>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <name>instance-00000033</name>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:name>tempest-InstanceActionsTestJSON-server-1758004192</nova:name>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:14:13</nova:creationTime>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:user uuid="fae6fe12a4234cd28439c010bdf3e497">tempest-InstanceActionsTestJSON-550730255-project-member</nova:user>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:project uuid="304ddc5a55af455ba608d37c37f217aa">tempest-InstanceActionsTestJSON-550730255</nova:project>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <nova:port uuid="fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598">
Oct 07 14:14:15 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <system>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="serial">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="uuid">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </system>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <os>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </os>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <features>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </features>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk">
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config">
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:15 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ed:21:7c"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <target dev="tapfa2c6ead-0b"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log" append="off"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <video>
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </video>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:14:15 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:14:15 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:14:15 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:14:15 compute-0 nova_compute[259550]: </domain>
Oct 07 14:14:15 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.095 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Preparing to wait for external event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.095 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.096 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.096 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.097 2 DEBUG nova.virt.libvirt.vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:09Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.097 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.098 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.098 2 DEBUG os_vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.100 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa2c6ead-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa2c6ead-0b, col_values=(('external_ids', {'iface-id': 'fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:21:7c', 'vm-uuid': 'f95ef1e3-b526-4516-a982-e1e415c5a657'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:15 compute-0 NetworkManager[44949]: <info>  [1759846455.1106] manager: (tapfa2c6ead-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.118 2 INFO os_vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')
Oct 07 14:14:15 compute-0 podman[316470]: 2025-10-07 14:14:15.125607255 +0000 UTC m=+0.154259067 container init a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:14:15 compute-0 podman[316470]: 2025-10-07 14:14:15.133691417 +0000 UTC m=+0.162343209 container start a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:15 compute-0 podman[316470]: 2025-10-07 14:14:15.138277858 +0000 UTC m=+0.166929670 container attach a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.190 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.191 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.191 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No VIF found with MAC fa:16:3e:ed:21:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.192 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Using config drive
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.216 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.464 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updated VIF entry in instance network info cache for port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.465 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:15 compute-0 nova_compute[259550]: 2025-10-07 14:14:15.487 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 216 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 163 op/s
Oct 07 14:14:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2195307386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:16 compute-0 ceph-mon[74295]: pgmap v1490: 305 pgs: 305 active+clean; 216 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 163 op/s
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.118 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating config drive at /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.124 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbi7aimn4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]: {
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_id": 2,
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "type": "bluestore"
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     },
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_id": 1,
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "type": "bluestore"
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     },
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_id": 0,
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:         "type": "bluestore"
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]:     }
Oct 07 14:14:16 compute-0 admiring_aryabhata[316488]: }
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.169 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.170 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.199 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.200 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.200 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Processing event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.203 2 WARNING nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state building and task_state spawning.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.204 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:14:16 compute-0 systemd[1]: libpod-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Deactivated successfully.
Oct 07 14:14:16 compute-0 systemd[1]: libpod-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Consumed 1.079s CPU time.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.212 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846456.2114837, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.212 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Resumed (Lifecycle Event)
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.215 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.222 2 INFO nova.virt.libvirt.driver [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance spawned successfully.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.222 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.231 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.236 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.247 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.247 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.248 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.248 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.249 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.249 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:16 compute-0 podman[316545]: 2025-10-07 14:14:16.261277117 +0000 UTC m=+0.024884236 container died a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.277 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbi7aimn4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.278 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399-merged.mount: Deactivated successfully.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.314 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.320 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:16 compute-0 podman[316545]: 2025-10-07 14:14:16.339348028 +0000 UTC m=+0.102955137 container remove a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:14:16 compute-0 systemd[1]: libpod-conmon-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Deactivated successfully.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.375 2 INFO nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 10.34 seconds to spawn the instance on the hypervisor.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.376 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:16 compute-0 sudo[316276]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:14:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:14:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a96fe33-d2d1-4378-8db5-f7acdc8ffef9 does not exist
Oct 07 14:14:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d7175329-fdd1-4e38-b4ba-0b1884d7209e does not exist
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.449 2 INFO nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 11.27 seconds to build instance.
Oct 07 14:14:16 compute-0 sudo[316594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.493 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:16 compute-0 sudo[316594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:16 compute-0 sudo[316594]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.525 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.526 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deleting local config drive /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config because it was imported into RBD.
Oct 07 14:14:16 compute-0 sudo[316622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:14:16 compute-0 sudo[316622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:14:16 compute-0 sudo[316622]: pam_unix(sudo:session): session closed for user root
Oct 07 14:14:16 compute-0 kernel: tapfa2c6ead-0b: entered promiscuous mode
Oct 07 14:14:16 compute-0 NetworkManager[44949]: <info>  [1759846456.6035] manager: (tapfa2c6ead-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Oct 07 14:14:16 compute-0 ovn_controller[151684]: 2025-10-07T14:14:16Z|00395|binding|INFO|Claiming lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for this chassis.
Oct 07 14:14:16 compute-0 ovn_controller[151684]: 2025-10-07T14:14:16Z|00396|binding|INFO|fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598: Claiming fa:16:3e:ed:21:7c 10.100.0.10
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b bound to our chassis
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.625 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:16 compute-0 ovn_controller[151684]: 2025-10-07T14:14:16Z|00397|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 ovn-installed in OVS
Oct 07 14:14:16 compute-0 ovn_controller[151684]: 2025-10-07T14:14:16Z|00398|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 up in Southbound
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaa19f6-5c49-45de-bafe-ab75ca980ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ce7b2cd-51 in ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.645 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ce7b2cd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d91ea4-c6ea-429a-89cd-63f8c64ccd27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f40782e1-7a97-4e9f-b944-00714268ff76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 systemd-udevd[316660]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:16 compute-0 systemd-machined[214580]: New machine qemu-58-instance-00000033.
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.663 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e32de7-7a29-4b04-84b3-54bb9b0bc4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 NetworkManager[44949]: <info>  [1759846456.6702] device (tapfa2c6ead-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:16 compute-0 NetworkManager[44949]: <info>  [1759846456.6715] device (tapfa2c6ead-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:16 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.699 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de4343c7-e3f2-41a7-84c7-057679f1a0b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.738 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cd714b86-a254-413e-ac36-3e0796ad795a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 NetworkManager[44949]: <info>  [1759846456.7461] manager: (tap4ce7b2cd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.744 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7a7aa7-c355-4e00-a9eb-1ec6a345f1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.787 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd1e366-72d6-49bb-a108-584dfef26faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.792 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[63686403-f11d-4364-9edd-f49bd2c0890d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 NetworkManager[44949]: <info>  [1759846456.8229] device (tap4ce7b2cd-50): carrier: link connected
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.836 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8645dfb-761d-4388-8fcc-763c4ade81be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[923d1d48-2fa3-4518-8011-869e76391405]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704038, 'reachable_time': 43082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316693, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.880 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67d5bcd1-c980-4f82-8703-bc3bf93f9a6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:bcce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704038, 'tstamp': 704038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316694, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.903 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed25dd5c-c018-4d01-aef8-61f906156fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704038, 'reachable_time': 43082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316695, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.904 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f900a2-2207-4525-a6b2-8629d8ca00ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance network_info: |[{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.941 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.943 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start _get_guest_xml network_info=[{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.949 2 WARNING nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.954 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.955 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.960 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:14:16 compute-0 nova_compute[259550]: 2025-10-07 14:14:16.967 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.031 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17078243-955f-4352-89b9-b824637038fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.034 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ce7b2cd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:17 compute-0 kernel: tap4ce7b2cd-50: entered promiscuous mode
Oct 07 14:14:17 compute-0 NetworkManager[44949]: <info>  [1759846457.0369] manager: (tap4ce7b2cd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.040 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ce7b2cd-50, col_values=(('external_ids', {'iface-id': '1909b8a8-f403-4e60-bcbd-3a0491b233a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:17 compute-0 ovn_controller[151684]: 2025-10-07T14:14:17Z|00399|binding|INFO|Releasing lport 1909b8a8-f403-4e60-bcbd-3a0491b233a9 from this chassis (sb_readonly=0)
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.060 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.062 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4012384-79ce-402b-9d4f-f7a79614ab07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.063 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:14:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.065 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'env', 'PROCESS_TAG=haproxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:14:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:14:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567838038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.516 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:17 compute-0 podman[316786]: 2025-10-07 14:14:17.518795001 +0000 UTC m=+0.053699623 container create 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.546 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.556 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:17 compute-0 systemd[1]: Started libpod-conmon-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope.
Oct 07 14:14:17 compute-0 podman[316786]: 2025-10-07 14:14:17.489168163 +0000 UTC m=+0.024072825 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:14:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b633774cda3ee70e43501224cdb7411d4938ebe0f67121f0bf818e12aae2f79e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:17 compute-0 podman[316786]: 2025-10-07 14:14:17.605908811 +0000 UTC m=+0.140813443 container init 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:14:17 compute-0 podman[316786]: 2025-10-07 14:14:17.612169216 +0000 UTC m=+0.147073848 container start 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:14:17 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : New worker (316827) forked
Oct 07 14:14:17 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : Loading success.
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.699 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846457.6980968, f95ef1e3-b526-4516-a982-e1e415c5a657 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.700 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Started (Lifecycle Event)
Oct 07 14:14:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 161 op/s
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.737 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.748 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846457.7000487, f95ef1e3-b526-4516-a982-e1e415c5a657 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.749 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Paused (Lifecycle Event)
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.811 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.815 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:17 compute-0 nova_compute[259550]: 2025-10-07 14:14:17.832 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397282575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.040 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.043 2 DEBUG nova.virt.libvirt.vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.045 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.047 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.050 2 DEBUG nova.objects.instance [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.059 2 DEBUG nova.compute.manager [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.060 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.061 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.061 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.062 2 DEBUG nova.compute.manager [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Processing event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.064 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.078 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846458.0767114, f95ef1e3-b526-4516-a982-e1e415c5a657 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.078 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Resumed (Lifecycle Event)
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <uuid>6be00afd-ab65-48db-a575-23a285419e60</uuid>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <name>instance-00000034</name>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestManualDisk-server-749882943</nova:name>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:14:16</nova:creationTime>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:user uuid="ed6b624fb80d4d4d9b897c788b614297">tempest-ServersTestManualDisk-456247214-project-member</nova:user>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:project uuid="daaa10f40e014711ba0819e5a5b251c7">tempest-ServersTestManualDisk-456247214</nova:project>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <nova:port uuid="e4ca2e16-9053-460a-9aee-cd724306083e">
Oct 07 14:14:18 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <system>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="serial">6be00afd-ab65-48db-a575-23a285419e60</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="uuid">6be00afd-ab65-48db-a575-23a285419e60</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </system>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <os>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </os>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <features>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </features>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6be00afd-ab65-48db-a575-23a285419e60_disk">
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6be00afd-ab65-48db-a575-23a285419e60_disk.config">
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:18 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:17:78:e6"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <target dev="tape4ca2e16-90"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/console.log" append="off"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <video>
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </video>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:14:18 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:14:18 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:14:18 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:14:18 compute-0 nova_compute[259550]: </domain>
Oct 07 14:14:18 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Preparing to wait for external event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG nova.virt.libvirt.vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.084 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.084 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.085 2 DEBUG os_vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.087 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4ca2e16-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4ca2e16-90, col_values=(('external_ids', {'iface-id': 'e4ca2e16-9053-460a-9aee-cd724306083e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:78:e6', 'vm-uuid': '6be00afd-ab65-48db-a575-23a285419e60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:18 compute-0 NetworkManager[44949]: <info>  [1759846458.0934] manager: (tape4ca2e16-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.096 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance spawned successfully.
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.096 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.104 2 INFO os_vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90')
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.107 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.133 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.170 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.170 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.171 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No VIF found with MAC fa:16:3e:17:78:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.171 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Using config drive
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.195 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.206 2 INFO nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 9.06 seconds to spawn the instance on the hypervisor.
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.206 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.266 2 INFO nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 10.08 seconds to build instance.
Oct 07 14:14:18 compute-0 nova_compute[259550]: 2025-10-07 14:14:18.284 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3567838038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:18 compute-0 ceph-mon[74295]: pgmap v1491: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 161 op/s
Oct 07 14:14:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/397282575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.080 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating config drive at /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.087 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53luq7d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.248 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53luq7d5" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.280 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.285 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config 6be00afd-ab65-48db-a575-23a285419e60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.488 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config 6be00afd-ab65-48db-a575-23a285419e60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.489 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deleting local config drive /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config because it was imported into RBD.
Oct 07 14:14:19 compute-0 kernel: tape4ca2e16-90: entered promiscuous mode
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.5395] manager: (tape4ca2e16-90): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:19 compute-0 ovn_controller[151684]: 2025-10-07T14:14:19Z|00400|binding|INFO|Claiming lport e4ca2e16-9053-460a-9aee-cd724306083e for this chassis.
Oct 07 14:14:19 compute-0 ovn_controller[151684]: 2025-10-07T14:14:19Z|00401|binding|INFO|e4ca2e16-9053-460a-9aee-cd724306083e: Claiming fa:16:3e:17:78:e6 10.100.0.11
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.552 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:78:e6 10.100.0.11'], port_security=['fa:16:3e:17:78:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6be00afd-ab65-48db-a575-23a285419e60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad649100-f9d9-4b25-b07d-f2fec1bfad18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18c2539a-e2f1-4e57-8c40-f76b6e8c17e8, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e4ca2e16-9053-460a-9aee-cd724306083e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.554 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e4ca2e16-9053-460a-9aee-cd724306083e in datapath d1e4e3b1-c955-4533-8a61-49b547446a5a bound to our chassis
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.559 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d1e4e3b1-c955-4533-8a61-49b547446a5a
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.5650] device (tape4ca2e16-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.5663] device (tape4ca2e16-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:19 compute-0 ovn_controller[151684]: 2025-10-07T14:14:19Z|00402|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e ovn-installed in OVS
Oct 07 14:14:19 compute-0 ovn_controller[151684]: 2025-10-07T14:14:19Z|00403|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e up in Southbound
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c8584687-418f-4daf-8600-f420029030a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.581 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd1e4e3b1-c1 in ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.583 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd1e4e3b1-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de65ed5b-2657-48b6-8e28-6e73e0de54ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1554862-a6a9-4bab-ac05-9bc33c4670ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.602 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c19cc9b2-f436-42cc-aaa9-b17f2f628ba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 systemd-machined[214580]: New machine qemu-59-instance-00000034.
Oct 07 14:14:19 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[72c75a41-34c4-4522-973a-bbe560eb4807]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.677 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8e22c610-1d51-4d27-a626-08baf7e5e36c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.6853] manager: (tapd1e4e3b1-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec423368-e8cc-41c5-9a6d-b3bddc42d5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.711 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updated VIF entry in instance network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.712 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:19 compute-0 systemd-udevd[316945]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.729 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.744 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ba54c48d-b02d-43b8-a9b2-5b624ef69f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b440a99e-9320-4453-9246-bd5cecbb32dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.7935] device (tapd1e4e3b1-c0): carrier: link connected
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.795 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.796 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.796 2 INFO nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Rebooting instance
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.800 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[db91fdf8-1470-48a9-b63a-9fc364796c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.821 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38880ae6-4ce2-4104-aa9f-cedc4444d145]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e4e3b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:64:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704335, 'reachable_time': 19780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316964, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.840 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.841 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.841 2 DEBUG nova.network.neutron [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.843 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecee968-37f5-4c34-ac18-4d35bd5984f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:64c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704335, 'tstamp': 704335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316965, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62aad6da-9ef8-4d2c-a218-5a88e55d6577]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e4e3b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:64:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704335, 'reachable_time': 19780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316966, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.904 2 DEBUG nova.compute.manager [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.904 2 DEBUG nova.compute.manager [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing instance network info cache due to event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eeecfe0c-cd86-46f4-aaa6-e87aad7987aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.987 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdf6f44-3146-43ab-890d-ff51d0ab470e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e4e3b1-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e4e3b1-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:19 compute-0 kernel: tapd1e4e3b1-c0: entered promiscuous mode
Oct 07 14:14:19 compute-0 nova_compute[259550]: 2025-10-07 14:14:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:19 compute-0 NetworkManager[44949]: <info>  [1759846459.9980] manager: (tapd1e4e3b1-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.002 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd1e4e3b1-c0, col_values=(('external_ids', {'iface-id': 'b45f6f84-7ca0-4e1b-b2cb-172425235762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:20 compute-0 ovn_controller[151684]: 2025-10-07T14:14:20Z|00404|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.007 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4afcff-d694-426f-a0ca-18bfb286161f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.010 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d1e4e3b1-c955-4533-8a61-49b547446a5a
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d1e4e3b1-c955-4533-8a61-49b547446a5a
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:14:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.012 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'env', 'PROCESS_TAG=haproxy-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d1e4e3b1-c955-4533-8a61-49b547446a5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.146 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.146 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.147 2 WARNING nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state rebooting_hard.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.149 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.149 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.150 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.151 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.151 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Processing event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:14:20 compute-0 podman[317037]: 2025-10-07 14:14:20.508490445 +0000 UTC m=+0.070239576 container create e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:14:20 compute-0 systemd[1]: Started libpod-conmon-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope.
Oct 07 14:14:20 compute-0 podman[317037]: 2025-10-07 14:14:20.464586375 +0000 UTC m=+0.026335316 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:14:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2cf9d455103e8b5a6ebb536445b2ec9cea0901a75ddb51351064b3aa652499/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.582 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.582082, 6be00afd-ab65-48db-a575-23a285419e60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.584 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Started (Lifecycle Event)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.588 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.593 2 INFO nova.virt.libvirt.driver [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance spawned successfully.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.593 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:14:20 compute-0 podman[317037]: 2025-10-07 14:14:20.6006612 +0000 UTC m=+0.162410141 container init e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:14:20 compute-0 podman[317037]: 2025-10-07 14:14:20.608779944 +0000 UTC m=+0.170528855 container start e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.610 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.619 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.626 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.627 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.628 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.628 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.629 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.629 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:14:20 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : New worker (317057) forked
Oct 07 14:14:20 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : Loading success.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.651 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.652 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.5823793, 6be00afd-ab65-48db-a575-23a285419e60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.652 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Paused (Lifecycle Event)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.676 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.680 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.586493, 6be00afd-ab65-48db-a575-23a285419e60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Resumed (Lifecycle Event)
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.707 2 INFO nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 10.25 seconds to spawn the instance on the hypervisor.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.707 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.709 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.715 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.748 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.773 2 INFO nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 11.30 seconds to build instance.
Oct 07 14:14:20 compute-0 ceph-mon[74295]: pgmap v1492: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Oct 07 14:14:20 compute-0 nova_compute[259550]: 2025-10-07 14:14:20.794 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.283 2 DEBUG nova.network.neutron [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.303 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.306 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:21 compute-0 kernel: tapfa2c6ead-0b (unregistering): left promiscuous mode
Oct 07 14:14:21 compute-0 NetworkManager[44949]: <info>  [1759846461.4696] device (tapfa2c6ead-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:14:21 compute-0 ovn_controller[151684]: 2025-10-07T14:14:21Z|00405|binding|INFO|Releasing lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 from this chassis (sb_readonly=0)
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 ovn_controller[151684]: 2025-10-07T14:14:21Z|00406|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 down in Southbound
Oct 07 14:14:21 compute-0 ovn_controller[151684]: 2025-10-07T14:14:21Z|00407|binding|INFO|Removing iface tapfa2c6ead-0b ovn-installed in OVS
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.491 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.493 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b unbound from our chassis
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.494 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78ccb436-dfb7-4714-803d-f61cab7ac6f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.498 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace which is not needed anymore
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 07 14:14:21 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 4.182s CPU time.
Oct 07 14:14:21 compute-0 systemd-machined[214580]: Machine qemu-58-instance-00000033 terminated.
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : haproxy version is 2.8.14-c23fe91
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : path to executable is /usr/sbin/haproxy
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : Exiting Master process...
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : Exiting Master process...
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [ALERT]    (316825) : Current worker (316827) exited with code 143 (Terminated)
Oct 07 14:14:21 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : All workers exited. Exiting... (0)
Oct 07 14:14:21 compute-0 systemd[1]: libpod-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope: Deactivated successfully.
Oct 07 14:14:21 compute-0 conmon[316820]: conmon 77156c139be9147324e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope/container/memory.events
Oct 07 14:14:21 compute-0 podman[317086]: 2025-10-07 14:14:21.658884685 +0000 UTC m=+0.053912866 container died 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.658 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance destroyed successfully.
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.659 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'resources' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.673 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.674 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.675 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.676 2 DEBUG os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa2c6ead-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03-userdata-shm.mount: Deactivated successfully.
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.695 2 INFO os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')
Oct 07 14:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b633774cda3ee70e43501224cdb7411d4938ebe0f67121f0bf818e12aae2f79e-merged.mount: Deactivated successfully.
Oct 07 14:14:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.714 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start _get_guest_xml network_info=[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:14:21 compute-0 podman[317086]: 2025-10-07 14:14:21.720559894 +0000 UTC m=+0.115588075 container cleanup 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.721 2 WARNING nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.731 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.732 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.735 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.736 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'vcpu_model' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:21 compute-0 systemd[1]: libpod-conmon-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope: Deactivated successfully.
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.756 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:21 compute-0 podman[317124]: 2025-10-07 14:14:21.805909858 +0000 UTC m=+0.059020780 container remove 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6e81ce-9e60-4c04-8ca9-ead4a48e7934]: (4, ('Tue Oct  7 02:14:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03)\n77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03\nTue Oct  7 02:14:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03)\n77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.814 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89f4f8ac-fd49-49ee-ae64-40eea671ef89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 kernel: tap4ce7b2cd-50: left promiscuous mode
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c86af8-ed23-4f77-b94a-601f332b2257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 nova_compute[259550]: 2025-10-07 14:14:21.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c59d42c9-e23b-4de1-89b0-df36d7ecd33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd478458-f614-4194-aeef-df23d2abbbf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[778aef4f-f182-40da-9b1c-8cbfa6aa3b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704029, 'reachable_time': 34026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317139, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d4ce7b2cd\x2d57c6\x2d43a1\x2db302\x2da9e47c2a613b.mount: Deactivated successfully.
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.874 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:14:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.874 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7e419e-f35a-46f9-854a-2c7685e2bd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.015 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated VIF entry in instance network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.016 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.033 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.223 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.224 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.226 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.226 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.227 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received unexpected event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with vm_state active and task_state None.
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.228 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.230 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.230 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.230 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:14:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551195547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.276 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.309 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:22 compute-0 ovn_controller[151684]: 2025-10-07T14:14:22Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:98:6b 10.100.0.13
Oct 07 14:14:22 compute-0 ovn_controller[151684]: 2025-10-07T14:14:22Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:98:6b 10.100.0.13
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:14:22
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.log']
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:22 compute-0 ceph-mon[74295]: pgmap v1493: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Oct 07 14:14:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/551195547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:14:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978359750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.869 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.873 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.874 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.876 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.879 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'pci_devices' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:14:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.907 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <uuid>f95ef1e3-b526-4516-a982-e1e415c5a657</uuid>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <name>instance-00000033</name>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:name>tempest-InstanceActionsTestJSON-server-1758004192</nova:name>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:14:21</nova:creationTime>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:user uuid="fae6fe12a4234cd28439c010bdf3e497">tempest-InstanceActionsTestJSON-550730255-project-member</nova:user>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:project uuid="304ddc5a55af455ba608d37c37f217aa">tempest-InstanceActionsTestJSON-550730255</nova:project>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <nova:port uuid="fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598">
Oct 07 14:14:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <system>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="serial">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="uuid">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </system>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <os>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </os>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <features>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </features>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk">
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config">
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:14:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ed:21:7c"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <target dev="tapfa2c6ead-0b"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log" append="off"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <video>
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </video>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:14:22 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:14:22 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:14:22 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:14:22 compute-0 nova_compute[259550]: </domain>
Oct 07 14:14:22 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.921 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.924 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.937 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.941 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.948 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.954 2 DEBUG os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa2c6ead-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa2c6ead-0b, col_values=(('external_ids', {'iface-id': 'fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:21:7c', 'vm-uuid': 'f95ef1e3-b526-4516-a982-e1e415c5a657'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:22 compute-0 NetworkManager[44949]: <info>  [1759846462.9642] manager: (tapfa2c6ead-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:22 compute-0 nova_compute[259550]: 2025-10-07 14:14:22.969 2 INFO os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')
Oct 07 14:14:23 compute-0 kernel: tapfa2c6ead-0b: entered promiscuous mode
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.0764] manager: (tapfa2c6ead-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 ovn_controller[151684]: 2025-10-07T14:14:23Z|00408|binding|INFO|Claiming lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for this chassis.
Oct 07 14:14:23 compute-0 ovn_controller[151684]: 2025-10-07T14:14:23Z|00409|binding|INFO|fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598: Claiming fa:16:3e:ed:21:7c 10.100.0.10
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.086 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b bound to our chassis
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[073aef83-106a-4710-97ba-5972b97170bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.103 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ce7b2cd-51 in ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.104 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ce7b2cd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.104 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7fee45-dc7e-49f3-bcda-6894def9e6fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67114146-18ad-47c8-8991-4bb1c7581750]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_controller[151684]: 2025-10-07T14:14:23Z|00410|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 ovn-installed in OVS
Oct 07 14:14:23 compute-0 ovn_controller[151684]: 2025-10-07T14:14:23Z|00411|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 up in Southbound
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.122 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f2595379-44a8-4259-b073-70103b10c37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 systemd-machined[214580]: New machine qemu-60-instance-00000033.
Oct 07 14:14:23 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000033.
Oct 07 14:14:23 compute-0 systemd-udevd[317219]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e445436b-6b9d-4d2e-a6ac-58c463d2bf67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.1591] device (tapfa2c6ead-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.1600] device (tapfa2c6ead-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.196 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5217ba52-08b5-4222-822e-7a5884c20155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 systemd-udevd[317222]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.206 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18e72750-8bbf-41a8-95fd-e36abd230e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.2080] manager: (tap4ce7b2cd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.254 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaf166d-f21b-4e8e-b921-cde6bf947e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.257 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[03208380-1f1f-486c-a220-a75c6acaf1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.2801] device (tap4ce7b2cd-50): carrier: link connected
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.285 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ed38bcec-99fb-4365-becc-5067929b4d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.310 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5408251f-265b-427d-baf7-333d2f4ac875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704684, 'reachable_time': 44276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317249, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[613eaadb-86b2-4af2-b9ea-c9688417e5b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:bcce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704684, 'tstamp': 704684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317250, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.346 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc9c30c-8598-4b9f-9697-cb2ba8596869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704684, 'reachable_time': 44276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317251, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[210dc2c0-689a-464d-ad49-1ca725dd8299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.443 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c998a8ea-af2d-4d02-b09c-35c30418aa64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.445 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.445 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ce7b2cd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:23 compute-0 NetworkManager[44949]: <info>  [1759846463.4484] manager: (tap4ce7b2cd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct 07 14:14:23 compute-0 kernel: tap4ce7b2cd-50: entered promiscuous mode
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.451 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ce7b2cd-50, col_values=(('external_ids', {'iface-id': '1909b8a8-f403-4e60-bcbd-3a0491b233a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 ovn_controller[151684]: 2025-10-07T14:14:23Z|00412|binding|INFO|Releasing lport 1909b8a8-f403-4e60-bcbd-3a0491b233a9 from this chassis (sb_readonly=0)
Oct 07 14:14:23 compute-0 nova_compute[259550]: 2025-10-07 14:14:23.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.473 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4e7fc1-3ab0-4161-bec5-a1546207d94c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.475 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:14:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.477 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'env', 'PROCESS_TAG=haproxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:14:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 207 op/s
Oct 07 14:14:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/978359750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:14:23 compute-0 podman[317282]: 2025-10-07 14:14:23.877356399 +0000 UTC m=+0.059582764 container create 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:14:23 compute-0 systemd[1]: Started libpod-conmon-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope.
Oct 07 14:14:23 compute-0 podman[317282]: 2025-10-07 14:14:23.843851164 +0000 UTC m=+0.026077579 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:14:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04179669fb3fb9cd6332de7447c41e0390e38c28e1c7b71bc0b58cfa47c41ae7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:14:23 compute-0 podman[317282]: 2025-10-07 14:14:23.975396039 +0000 UTC m=+0.157622444 container init 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:14:23 compute-0 podman[317282]: 2025-10-07 14:14:23.981893411 +0000 UTC m=+0.164119806 container start 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:14:24 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : New worker (317302) forked
Oct 07 14:14:24 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : Loading success.
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.058 2 DEBUG nova.compute.manager [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.059 2 DEBUG nova.compute.manager [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing instance network info cache due to event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.059 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.060 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.060 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.291 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.292 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.294 2 WARNING nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.294 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.294 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.295 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.295 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.296 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.296 2 WARNING nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:14:24 compute-0 ceph-mon[74295]: pgmap v1494: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 207 op/s
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.966 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for f95ef1e3-b526-4516-a982-e1e415c5a657 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.968 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846464.9658492, f95ef1e3-b526-4516-a982-e1e415c5a657 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.969 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Resumed (Lifecycle Event)
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.971 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.975 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance rebooted successfully.
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.976 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.990 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:24 compute-0 nova_compute[259550]: 2025-10-07 14:14:24.994 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.025 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.026 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846464.9670968, f95ef1e3-b526-4516-a982-e1e415c5a657 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.026 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Started (Lifecycle Event)
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.048 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.051 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:25 compute-0 nova_compute[259550]: 2025-10-07 14:14:25.055 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:14:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 249 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.7 MiB/s wr, 330 op/s
Oct 07 14:14:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:26 compute-0 nova_compute[259550]: 2025-10-07 14:14:26.069 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updated VIF entry in instance network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:26 compute-0 nova_compute[259550]: 2025-10-07 14:14:26.070 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:26 compute-0 nova_compute[259550]: 2025-10-07 14:14:26.089 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:26 compute-0 nova_compute[259550]: 2025-10-07 14:14:26.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:26 compute-0 ceph-mon[74295]: pgmap v1495: 305 pgs: 305 active+clean; 249 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.7 MiB/s wr, 330 op/s
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.059 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.061 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.062 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.062 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.063 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.066 2 INFO nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Terminating instance
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.068 2 DEBUG nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:14:27 compute-0 kernel: tapfa2c6ead-0b (unregistering): left promiscuous mode
Oct 07 14:14:27 compute-0 NetworkManager[44949]: <info>  [1759846467.1264] device (tapfa2c6ead-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:14:27 compute-0 ovn_controller[151684]: 2025-10-07T14:14:27Z|00413|binding|INFO|Releasing lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 from this chassis (sb_readonly=0)
Oct 07 14:14:27 compute-0 ovn_controller[151684]: 2025-10-07T14:14:27Z|00414|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 down in Southbound
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 ovn_controller[151684]: 2025-10-07T14:14:27Z|00415|binding|INFO|Removing iface tapfa2c6ead-0b ovn-installed in OVS
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 07 14:14:27 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Consumed 3.830s CPU time.
Oct 07 14:14:27 compute-0 systemd-machined[214580]: Machine qemu-60-instance-00000033 terminated.
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.238 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.240 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b unbound from our chassis
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.242 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.244 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3ae83f-8b1e-493d-8e5a-06dfcc8339ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.245 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace which is not needed anymore
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.310 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance destroyed successfully.
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.310 2 DEBUG nova.objects.instance [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'resources' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.326 2 DEBUG nova.virt.libvirt.vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:25Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.326 2 DEBUG nova.network.os_vif_util [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.327 2 DEBUG nova.network.os_vif_util [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.327 2 DEBUG os_vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa2c6ead-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.340 2 INFO os_vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')
Oct 07 14:14:27 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : haproxy version is 2.8.14-c23fe91
Oct 07 14:14:27 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : path to executable is /usr/sbin/haproxy
Oct 07 14:14:27 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [WARNING]  (317300) : Exiting Master process...
Oct 07 14:14:27 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [ALERT]    (317300) : Current worker (317302) exited with code 143 (Terminated)
Oct 07 14:14:27 compute-0 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [WARNING]  (317300) : All workers exited. Exiting... (0)
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.451 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.452 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:27 compute-0 systemd[1]: libpod-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope: Deactivated successfully.
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.452 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:14:27 compute-0 podman[317400]: 2025-10-07 14:14:27.457627288 +0000 UTC m=+0.070058881 container died 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9-userdata-shm.mount: Deactivated successfully.
Oct 07 14:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-04179669fb3fb9cd6332de7447c41e0390e38c28e1c7b71bc0b58cfa47c41ae7-merged.mount: Deactivated successfully.
Oct 07 14:14:27 compute-0 podman[317400]: 2025-10-07 14:14:27.590965681 +0000 UTC m=+0.203397274 container cleanup 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:14:27 compute-0 systemd[1]: libpod-conmon-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope: Deactivated successfully.
Oct 07 14:14:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 260 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.4 MiB/s wr, 285 op/s
Oct 07 14:14:27 compute-0 podman[317433]: 2025-10-07 14:14:27.712393899 +0000 UTC m=+0.089372773 container remove 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.726 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c736fc3-0bae-4aa2-9dc0-2a037c6b288d]: (4, ('Tue Oct  7 02:14:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9)\n5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9\nTue Oct  7 02:14:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9)\n5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.729 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[659b73fb-c468-4894-89e1-030b173d2404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:27 compute-0 kernel: tap4ce7b2cd-50: left promiscuous mode
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7847f353-ea22-44e2-8ed7-c7f0acceb660]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7caafb6-bad1-45fc-af11-1c6c0d2eadc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.824 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39d42572-3e14-45c2-8baf-0219dfc2ec09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.846 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ace66bc4-43a5-48ac-88c2-e0f133b7a046]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704674, 'reachable_time': 25137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317448, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d4ce7b2cd\x2d57c6\x2d43a1\x2db302\x2da9e47c2a613b.mount: Deactivated successfully.
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.852 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:14:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.853 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a38774e4-172a-459b-8b4a-b885742c34ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:27 compute-0 nova_compute[259550]: 2025-10-07 14:14:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.011 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.174 2 INFO nova.virt.libvirt.driver [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deleting instance files /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657_del
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.175 2 INFO nova.virt.libvirt.driver [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deletion of /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657_del complete
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.259 2 INFO nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 1.19 seconds to destroy the instance on the hypervisor.
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.260 2 DEBUG oslo.service.loopingcall [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.260 2 DEBUG nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.261 2 DEBUG nova.network.neutron [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:14:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447623833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.503 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:28 compute-0 podman[317473]: 2025-10-07 14:14:28.63497135 +0000 UTC m=+0.069797615 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:14:28 compute-0 podman[317472]: 2025-10-07 14:14:28.646919916 +0000 UTC m=+0.085096449 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.768 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.768 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.771 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.772 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.775 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 nova_compute[259550]: 2025-10-07 14:14:28.775 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:14:28 compute-0 ceph-mon[74295]: pgmap v1496: 305 pgs: 305 active+clean; 260 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.4 MiB/s wr, 285 op/s
Oct 07 14:14:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3447623833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.005 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.006 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3686MB free_disk=59.88011932373047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.224 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance eb8777f3-5daa-49c7-8994-687012f20453 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fc163bed-856c-4ea5-9bf3-6989fb1027eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f95ef1e3-b526-4516-a982-e1e415c5a657 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6be00afd-ab65-48db-a575-23a285419e60 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.226 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.344 2 DEBUG nova.network.neutron [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.370 2 INFO nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 1.11 seconds to deallocate network for instance.
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.382 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.438 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.440 2 DEBUG nova.compute.manager [req-e733b6c7-5640-4761-815a-905bad7f9b1f req-8499bdd9-cddd-4600-a5d6-b6fdfba21874 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-deleted-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.518 2 DEBUG nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.519 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.519 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.520 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.520 2 DEBUG nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.520 2 WARNING nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state deleted and task_state None.
Oct 07 14:14:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 250 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 359 op/s
Oct 07 14:14:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799586118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.885 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.892 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.913 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.935 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.936 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:29 compute-0 nova_compute[259550]: 2025-10-07 14:14:29.937 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.018 2 DEBUG oslo_concurrency.processutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.185 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.186 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.187 2 DEBUG nova.objects.instance [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:30 compute-0 ovn_controller[151684]: 2025-10-07T14:14:30Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:54:7c 10.100.0.7
Oct 07 14:14:30 compute-0 ovn_controller[151684]: 2025-10-07T14:14:30Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:54:7c 10.100.0.7
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.221 2 DEBUG nova.objects.instance [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.326 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020859017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.511 2 DEBUG oslo_concurrency.processutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.518 2 DEBUG nova.compute.provider_tree [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.539 2 DEBUG nova.scheduler.client.report [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.562 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.601 2 DEBUG nova.policy [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.605 2 INFO nova.scheduler.client.report [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Deleted allocations for instance f95ef1e3-b526-4516-a982-e1e415c5a657
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.668 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:30 compute-0 ceph-mon[74295]: pgmap v1497: 305 pgs: 305 active+clean; 250 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 359 op/s
Oct 07 14:14:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2799586118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4020859017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.933 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.933 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.934 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:30 compute-0 nova_compute[259550]: 2025-10-07 14:14:30.934 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:14:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:31 compute-0 nova_compute[259550]: 2025-10-07 14:14:31.182 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: 9a533309-4d4d-4458-9a27-3fe85361ab15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:14:31 compute-0 nova_compute[259550]: 2025-10-07 14:14:31.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 355 op/s
Oct 07 14:14:31 compute-0 nova_compute[259550]: 2025-10-07 14:14:31.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.108 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: 9a533309-4d4d-4458-9a27-3fe85361ab15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.182 2 DEBUG nova.compute.manager [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.182 2 DEBUG nova.compute.manager [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-9a533309-4d4d-4458-9a27-3fe85361ab15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.183 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.294 2 WARNING nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018551317192485512 of space, bias 1.0, pg target 0.5565395157745654 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:14:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:14:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:14:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:14:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:14:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:14:32 compute-0 ceph-mon[74295]: pgmap v1498: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 355 op/s
Oct 07 14:14:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:14:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:14:32 compute-0 nova_compute[259550]: 2025-10-07 14:14:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 293 op/s
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00416|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00417|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00418|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.207 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.228 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.229 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.229 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port 9a533309-4d4d-4458-9a27-3fe85361ab15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.233 2 DEBUG nova.virt.libvirt.vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.233 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.234 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.234 2 DEBUG os_vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a533309-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a533309-4d, col_values=(('external_ids', {'iface-id': '9a533309-4d4d-4458-9a27-3fe85361ab15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:f6:d8', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 NetworkManager[44949]: <info>  [1759846474.2437] manager: (tap9a533309-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.252 2 INFO os_vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.253 2 DEBUG nova.virt.libvirt.vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.253 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.254 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.256 2 DEBUG nova.virt.libvirt.guest [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:2b:f6:d8"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <target dev="tap9a533309-4d"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]: </interface>
Oct 07 14:14:34 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:14:34 compute-0 kernel: tap9a533309-4d: entered promiscuous mode
Oct 07 14:14:34 compute-0 NetworkManager[44949]: <info>  [1759846474.2705] manager: (tap9a533309-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00419|binding|INFO|Claiming lport 9a533309-4d4d-4458-9a27-3fe85361ab15 for this chassis.
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00420|binding|INFO|9a533309-4d4d-4458-9a27-3fe85361ab15: Claiming fa:16:3e:2b:f6:d8 10.100.0.8
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.279 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:f6:d8 10.100.0.8'], port_security=['fa:16:3e:2b:f6:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9a533309-4d4d-4458-9a27-3fe85361ab15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.280 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.281 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00421|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 ovn-installed in OVS
Oct 07 14:14:34 compute-0 ovn_controller[151684]: 2025-10-07T14:14:34Z|00422|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 up in Southbound
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.301 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8cff8175-8382-40f3-8993-232567652005]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 systemd-udevd[317564]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:34 compute-0 NetworkManager[44949]: <info>  [1759846474.3338] device (tap9a533309-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:34 compute-0 NetworkManager[44949]: <info>  [1759846474.3346] device (tap9a533309-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.355 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.357 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eeddfa2a-5c03-4ecb-976e-7e5a14c20a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.356 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.357 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.357 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.362 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f2c37d-e8e1-46ff-8146-dcae0c66e133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.388 2 DEBUG nova.virt.libvirt.guest [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:14:34</nova:creationTime>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:14:34 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct 07 14:14:34 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:14:34 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:14:34 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:14:34 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:14:34 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.399 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[27472cda-29a2-4c5f-9375-921a938f3153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.417 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1024c68d-5442-4fe3-9f9d-c778d9442bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317571, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.441 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92ce3109-53d0-44b5-8134-555f1af42f93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317572, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317572, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.443 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.448 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.603 2 DEBUG nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.603 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:34 compute-0 nova_compute[259550]: 2025-10-07 14:14:34.604 2 WARNING nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.
Oct 07 14:14:34 compute-0 ceph-mon[74295]: pgmap v1499: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 293 op/s
Oct 07 14:14:35 compute-0 ovn_controller[151684]: 2025-10-07T14:14:35Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:78:e6 10.100.0.11
Oct 07 14:14:35 compute-0 ovn_controller[151684]: 2025-10-07T14:14:35Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:78:e6 10.100.0.11
Oct 07 14:14:35 compute-0 ovn_controller[151684]: 2025-10-07T14:14:35Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:f6:d8 10.100.0.8
Oct 07 14:14:35 compute-0 ovn_controller[151684]: 2025-10-07T14:14:35Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:f6:d8 10.100.0.8
Oct 07 14:14:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 258 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.5 MiB/s wr, 349 op/s
Oct 07 14:14:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.343 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.344 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.344 2 DEBUG nova.objects.instance [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.749 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.749 2 DEBUG nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.749 2 WARNING nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.784 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port 9a533309-4d4d-4458-9a27-3fe85361ab15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.784 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:36 compute-0 nova_compute[259550]: 2025-10-07 14:14:36.810 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:36 compute-0 ceph-mon[74295]: pgmap v1500: 305 pgs: 305 active+clean; 258 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.5 MiB/s wr, 349 op/s
Oct 07 14:14:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 241 op/s
Oct 07 14:14:37 compute-0 nova_compute[259550]: 2025-10-07 14:14:37.870 2 DEBUG nova.objects.instance [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:37 compute-0 nova_compute[259550]: 2025-10-07 14:14:37.924 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:37 compute-0 nova_compute[259550]: 2025-10-07 14:14:37.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:37 compute-0 nova_compute[259550]: 2025-10-07 14:14:37.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:14:38 compute-0 nova_compute[259550]: 2025-10-07 14:14:38.021 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:14:38 compute-0 nova_compute[259550]: 2025-10-07 14:14:38.872 2 DEBUG nova.policy [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:38 compute-0 ceph-mon[74295]: pgmap v1501: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 241 op/s
Oct 07 14:14:38 compute-0 nova_compute[259550]: 2025-10-07 14:14:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:14:39 compute-0 nova_compute[259550]: 2025-10-07 14:14:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:39 compute-0 nova_compute[259550]: 2025-10-07 14:14:39.397 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: 3e6d7010-f744-42e8-b831-8a1955357b14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:14:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 217 op/s
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.542 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: 3e6d7010-f744-42e8-b831-8a1955357b14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.640 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.640 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.641 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.812 2 DEBUG nova.compute.manager [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.812 2 DEBUG nova.compute.manager [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-3e6d7010-f744-42e8-b831-8a1955357b14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.813 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:40 compute-0 ceph-mon[74295]: pgmap v1502: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 217 op/s
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.916 2 WARNING nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:40 compute-0 nova_compute[259550]: 2025-10-07 14:14:40.917 2 WARNING nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:41 compute-0 podman[317574]: 2025-10-07 14:14:41.093159007 +0000 UTC m=+0.070438742 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:14:41 compute-0 podman[317575]: 2025-10-07 14:14:41.136413419 +0000 UTC m=+0.114423304 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:14:41 compute-0 nova_compute[259550]: 2025-10-07 14:14:41.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 142 op/s
Oct 07 14:14:42 compute-0 nova_compute[259550]: 2025-10-07 14:14:42.308 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846467.307876, f95ef1e3-b526-4516-a982-e1e415c5a657 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:42 compute-0 nova_compute[259550]: 2025-10-07 14:14:42.309 2 INFO nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Stopped (Lifecycle Event)
Oct 07 14:14:42 compute-0 nova_compute[259550]: 2025-10-07 14:14:42.465 2 DEBUG nova.compute.manager [None req-ad368fdd-13e3-440c-816a-029123bfb55d - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:42 compute-0 ceph-mon[74295]: pgmap v1503: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 142 op/s
Oct 07 14:14:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.567 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.567 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.570 2 INFO nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Terminating instance
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.571 2 DEBUG nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:14:44 compute-0 kernel: tape4ca2e16-90 (unregistering): left promiscuous mode
Oct 07 14:14:44 compute-0 NetworkManager[44949]: <info>  [1759846484.6392] device (tape4ca2e16-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00423|binding|INFO|Releasing lport e4ca2e16-9053-460a-9aee-cd724306083e from this chassis (sb_readonly=0)
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00424|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e down in Southbound
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00425|binding|INFO|Removing iface tape4ca2e16-90 ovn-installed in OVS
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 07 14:14:44 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 13.993s CPU time.
Oct 07 14:14:44 compute-0 systemd-machined[214580]: Machine qemu-59-instance-00000034 terminated.
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00426|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00427|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:14:44 compute-0 ovn_controller[151684]: 2025-10-07T14:14:44Z|00428|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct 07 14:14:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.745 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:78:e6 10.100.0.11'], port_security=['fa:16:3e:17:78:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6be00afd-ab65-48db-a575-23a285419e60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad649100-f9d9-4b25-b07d-f2fec1bfad18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18c2539a-e2f1-4e57-8c40-f76b6e8c17e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e4ca2e16-9053-460a-9aee-cd724306083e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.747 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e4ca2e16-9053-460a-9aee-cd724306083e in datapath d1e4e3b1-c955-4533-8a61-49b547446a5a unbound from our chassis
Oct 07 14:14:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.749 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d1e4e3b1-c955-4533-8a61-49b547446a5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:14:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40c337fb-bdb6-46dd-b298-0de2bfb994c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.751 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a namespace which is not needed anymore
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.814 2 INFO nova.virt.libvirt.driver [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance destroyed successfully.
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.815 2 DEBUG nova.objects.instance [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'resources' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.846 2 DEBUG nova.virt.libvirt.vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.847 2 DEBUG nova.network.os_vif_util [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.847 2 DEBUG nova.network.os_vif_util [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.848 2 DEBUG os_vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ca2e16-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:44 compute-0 nova_compute[259550]: 2025-10-07 14:14:44.859 2 INFO os_vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90')
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : haproxy version is 2.8.14-c23fe91
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : path to executable is /usr/sbin/haproxy
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : Exiting Master process...
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : Exiting Master process...
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [ALERT]    (317055) : Current worker (317057) exited with code 143 (Terminated)
Oct 07 14:14:44 compute-0 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : All workers exited. Exiting... (0)
Oct 07 14:14:44 compute-0 systemd[1]: libpod-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope: Deactivated successfully.
Oct 07 14:14:44 compute-0 podman[317652]: 2025-10-07 14:14:44.92945076 +0000 UTC m=+0.055763355 container died e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:14:44 compute-0 ceph-mon[74295]: pgmap v1504: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct 07 14:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c-userdata-shm.mount: Deactivated successfully.
Oct 07 14:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2cf9d455103e8b5a6ebb536445b2ec9cea0901a75ddb51351064b3aa652499-merged.mount: Deactivated successfully.
Oct 07 14:14:44 compute-0 podman[317652]: 2025-10-07 14:14:44.980445236 +0000 UTC m=+0.106757831 container cleanup e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:14:44 compute-0 systemd[1]: libpod-conmon-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope: Deactivated successfully.
Oct 07 14:14:45 compute-0 podman[317699]: 2025-10-07 14:14:45.060783199 +0000 UTC m=+0.054626764 container remove e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.067 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30d35523-36b0-43ce-96c5-8a008f3bd7ef]: (4, ('Tue Oct  7 02:14:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a (e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c)\ne8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c\nTue Oct  7 02:14:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a (e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c)\ne8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5ed4bf-6aaa-4263-80e8-85b1a0e2050f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.069 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e4e3b1-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:45 compute-0 kernel: tapd1e4e3b1-c0: left promiscuous mode
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8b36c8-dcea-4744-9742-72798750fa42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a36b6d-5b47-4e89-ab96-58c1ed036408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d88520c8-8a2d-498f-88aa-99fd57cd2609]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a3c472-cbd8-468d-9848-92ae47ac0432]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704323, 'reachable_time': 20075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317717, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dd1e4e3b1\x2dc955\x2d4533\x2d8a61\x2d49b547446a5a.mount: Deactivated successfully.
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.136 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:14:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.137 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34e37f86-9dfe-4ba9-ad24-d28179a15d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.221 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.221 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.285 2 INFO nova.virt.libvirt.driver [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deleting instance files /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60_del
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.286 2 INFO nova.virt.libvirt.driver [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deletion of /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60_del complete
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.332 2 INFO nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.332 2 DEBUG oslo.service.loopingcall [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.332 2 DEBUG nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:14:45 compute-0 nova_compute[259550]: 2025-10-07 14:14:45.333 2 DEBUG nova.network.neutron [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:14:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Oct 07 14:14:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.188 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.211 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.213 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.213 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port 3e6d7010-f744-42e8-b831-8a1955357b14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.216 2 DEBUG nova.virt.libvirt.vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.217 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.217 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG os_vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e6d7010-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e6d7010-f7, col_values=(('external_ids', {'iface-id': '3e6d7010-f744-42e8-b831-8a1955357b14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:02:a9', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 NetworkManager[44949]: <info>  [1759846486.2241] manager: (tap3e6d7010-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.230 2 INFO os_vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7')
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.231 2 DEBUG nova.virt.libvirt.vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.231 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.232 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.238 2 DEBUG nova.virt.libvirt.guest [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:04:02:a9"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <target dev="tap3e6d7010-f7"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]: </interface>
Oct 07 14:14:46 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:14:46 compute-0 systemd-udevd[317621]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:14:46 compute-0 kernel: tap3e6d7010-f7: entered promiscuous mode
Oct 07 14:14:46 compute-0 NetworkManager[44949]: <info>  [1759846486.2541] manager: (tap3e6d7010-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Oct 07 14:14:46 compute-0 ovn_controller[151684]: 2025-10-07T14:14:46Z|00429|binding|INFO|Claiming lport 3e6d7010-f744-42e8-b831-8a1955357b14 for this chassis.
Oct 07 14:14:46 compute-0 ovn_controller[151684]: 2025-10-07T14:14:46Z|00430|binding|INFO|3e6d7010-f744-42e8-b831-8a1955357b14: Claiming fa:16:3e:04:02:a9 10.100.0.6
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.264 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:02:a9 10.100.0.6'], port_security=['fa:16:3e:04:02:a9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3e6d7010-f744-42e8-b831-8a1955357b14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.266 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3e6d7010-f744-42e8-b831-8a1955357b14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:14:46 compute-0 NetworkManager[44949]: <info>  [1759846486.2682] device (tap3e6d7010-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:14:46 compute-0 NetworkManager[44949]: <info>  [1759846486.2699] device (tap3e6d7010-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.270 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:14:46 compute-0 ovn_controller[151684]: 2025-10-07T14:14:46Z|00431|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 ovn-installed in OVS
Oct 07 14:14:46 compute-0 ovn_controller[151684]: 2025-10-07T14:14:46Z|00432|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 up in Southbound
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.288 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c629d33-8a7b-4baa-9412-94735b463a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.329 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5501ebf7-7867-4be7-a262-361262e4c430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.333 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e03844cd-7ee0-47be-a4bc-fe5d548c22cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.341 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:02:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.363 2 DEBUG nova.virt.libvirt.guest [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:14:46</nova:creationTime>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:14:46 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct 07 14:14:46 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:14:46 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:14:46 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:14:46 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:14:46 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:14:46 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.374 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[518aea15-4ba6-456f-92ae-39d2e453da09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.394 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.395 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddafa99-9439-41b1-a77d-23e13cdc407f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317731, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9163ab3-2b69-4c0f-a16d-1aa4ac9cffd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317732, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317732, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.417 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 nova_compute[259550]: 2025-10-07 14:14:46.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.421 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:46 compute-0 ceph-mon[74295]: pgmap v1505: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.102 2 DEBUG nova.network.neutron [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.123 2 INFO nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 1.79 seconds to deallocate network for instance.
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.168 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.169 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.198 2 DEBUG nova.compute.manager [req-78e4b816-98a1-4bde-a6f2-86fb97423fc0 req-b5ba95c4-9b4f-4579-916e-8b701d329f06 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-deleted-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.295 2 DEBUG oslo_concurrency.processutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.343 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.344 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received unexpected event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with vm_state deleted and task_state None.
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.346 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state None.
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.347 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state None.
Oct 07 14:14:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 251 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 993 KiB/s wr, 33 op/s
Oct 07 14:14:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:14:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857963059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.753 2 DEBUG oslo_concurrency.processutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.761 2 DEBUG nova.compute.provider_tree [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.778 2 DEBUG nova.scheduler.client.report [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.803 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.834 2 INFO nova.scheduler.client.report [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Deleted allocations for instance 6be00afd-ab65-48db-a575-23a285419e60
Oct 07 14:14:47 compute-0 nova_compute[259550]: 2025-10-07 14:14:47.906 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:14:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/857963059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.035 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.036 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.036 2 DEBUG nova.objects.instance [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.095 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port 3e6d7010-f744-42e8-b831-8a1955357b14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.095 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.115 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.664 2 DEBUG nova.objects.instance [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.677 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:14:48 compute-0 nova_compute[259550]: 2025-10-07 14:14:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:48 compute-0 ceph-mon[74295]: pgmap v1506: 305 pgs: 305 active+clean; 251 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 993 KiB/s wr, 33 op/s
Oct 07 14:14:49 compute-0 ovn_controller[151684]: 2025-10-07T14:14:49Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:02:a9 10.100.0.6
Oct 07 14:14:49 compute-0 ovn_controller[151684]: 2025-10-07T14:14:49Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:02:a9 10.100.0.6
Oct 07 14:14:49 compute-0 nova_compute[259550]: 2025-10-07 14:14:49.316 2 DEBUG nova.policy [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:14:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 120 KiB/s wr, 40 op/s
Oct 07 14:14:50 compute-0 ceph-mon[74295]: pgmap v1507: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 120 KiB/s wr, 40 op/s
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.666 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: be1f37db-0265-418f-bc5a-36bd71615d14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.741 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.741 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.742 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.858 2 DEBUG nova.compute.manager [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.858 2 DEBUG nova.compute.manager [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-be1f37db-0265-418f-bc5a-36bd71615d14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:14:50 compute-0 nova_compute[259550]: 2025-10-07 14:14:50.859 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:14:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:51 compute-0 nova_compute[259550]: 2025-10-07 14:14:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:51 compute-0 nova_compute[259550]: 2025-10-07 14:14:51.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 07 14:14:51 compute-0 nova_compute[259550]: 2025-10-07 14:14:51.877 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:51 compute-0 nova_compute[259550]: 2025-10-07 14:14:51.878 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:51 compute-0 nova_compute[259550]: 2025-10-07 14:14:51.878 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:14:52 compute-0 ceph-mon[74295]: pgmap v1508: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 07 14:14:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 07 14:14:54 compute-0 ceph-mon[74295]: pgmap v1509: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 07 14:14:55 compute-0 ovn_controller[151684]: 2025-10-07T14:14:55Z|00433|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:14:55 compute-0 ovn_controller[151684]: 2025-10-07T14:14:55Z|00434|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:14:55 compute-0 nova_compute[259550]: 2025-10-07 14:14:55.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 21 KiB/s wr, 29 op/s
Oct 07 14:14:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:14:56 compute-0 nova_compute[259550]: 2025-10-07 14:14:56.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:56 compute-0 nova_compute[259550]: 2025-10-07 14:14:56.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:56 compute-0 sshd-session[317756]: error: kex_exchange_identification: read: Connection reset by peer
Oct 07 14:14:56 compute-0 sshd-session[317756]: Connection reset by 45.140.17.97 port 43259
Oct 07 14:14:56 compute-0 ceph-mon[74295]: pgmap v1510: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 21 KiB/s wr, 29 op/s
Oct 07 14:14:57 compute-0 nova_compute[259550]: 2025-10-07 14:14:57.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Oct 07 14:14:58 compute-0 ceph-mon[74295]: pgmap v1511: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Oct 07 14:14:59 compute-0 podman[317758]: 2025-10-07 14:14:59.092953383 +0000 UTC m=+0.068357576 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:14:59 compute-0 podman[317757]: 2025-10-07 14:14:59.092968624 +0000 UTC m=+0.073276937 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.638 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:14:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 24 op/s
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.810 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846484.8086007, 6be00afd-ab65-48db-a575-23a285419e60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.811 2 INFO nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Stopped (Lifecycle Event)
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.920 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.922 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.923 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port be1f37db-0265-418f-bc5a-36bd71615d14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.926 2 DEBUG nova.compute.manager [None req-7832283a-8156-40ef-bf5c-892d37de73a7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.930 2 DEBUG nova.virt.libvirt.vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.931 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.932 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.933 2 DEBUG os_vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe1f37db-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe1f37db-02, col_values=(('external_ids', {'iface-id': 'be1f37db-0265-418f-bc5a-36bd71615d14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:b3:7c', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:59 compute-0 NetworkManager[44949]: <info>  [1759846499.9436] manager: (tapbe1f37db-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.950 2 INFO os_vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02')
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.951 2 DEBUG nova.virt.libvirt.vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.951 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.952 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.956 2 DEBUG nova.virt.libvirt.guest [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:14:59 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:4d:b3:7c"/>
Oct 07 14:14:59 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:14:59 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:14:59 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:14:59 compute-0 nova_compute[259550]:   <target dev="tapbe1f37db-02"/>
Oct 07 14:14:59 compute-0 nova_compute[259550]: </interface>
Oct 07 14:14:59 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:14:59 compute-0 kernel: tapbe1f37db-02: entered promiscuous mode
Oct 07 14:14:59 compute-0 NetworkManager[44949]: <info>  [1759846499.9692] manager: (tapbe1f37db-02): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:14:59 compute-0 ovn_controller[151684]: 2025-10-07T14:14:59Z|00435|binding|INFO|Claiming lport be1f37db-0265-418f-bc5a-36bd71615d14 for this chassis.
Oct 07 14:14:59 compute-0 ovn_controller[151684]: 2025-10-07T14:14:59Z|00436|binding|INFO|be1f37db-0265-418f-bc5a-36bd71615d14: Claiming fa:16:3e:4d:b3:7c 10.100.0.4
Oct 07 14:14:59 compute-0 ovn_controller[151684]: 2025-10-07T14:14:59Z|00437|binding|INFO|Setting lport be1f37db-0265-418f-bc5a-36bd71615d14 ovn-installed in OVS
Oct 07 14:14:59 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:14:59.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:00 compute-0 systemd-udevd[317803]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:00 compute-0 NetworkManager[44949]: <info>  [1759846500.0214] device (tapbe1f37db-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:00 compute-0 NetworkManager[44949]: <info>  [1759846500.0224] device (tapbe1f37db-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.047 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.049 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:00 compute-0 ovn_controller[151684]: 2025-10-07T14:15:00Z|00438|binding|INFO|Setting lport be1f37db-0265-418f-bc5a-36bd71615d14 up in Southbound
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.108 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:b3:7c 10.100.0.4'], port_security=['fa:16:3e:4d:b3:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=be1f37db-0265-418f-bc5a-36bd71615d14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.109 161536 INFO neutron.agent.ovn.metadata.agent [-] Port be1f37db-0265-418f-bc5a-36bd71615d14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.111 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b21653df-3878-45d8-85c8-4f529b7e3552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.171 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9ba95d-528c-4ca4-a352-92f94df57298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.175 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62949bf7-9edf-4a45-8fae-f34a88be3e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.206 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2f55a99b-7484-495c-8a5a-73be8203df6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.226 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f95f8600-13c4-4313-942e-e8cfc637e920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317811, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.246 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a111d49-8ae2-4dbd-996d-a5c2c018650e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317812, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317812, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.248 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.252 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.252 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.253 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.379 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:02:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:4d:b3:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.518 2 DEBUG nova.virt.libvirt.guest [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct 07 14:15:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:00 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:00 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:00 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.562 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.563 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.566 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 12.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.609 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.737 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.738 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.749 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:00 compute-0 nova_compute[259550]: 2025-10-07 14:15:00.750 2 INFO nova.compute.claims [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:00 compute-0 ceph-mon[74295]: pgmap v1512: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 24 op/s
Oct 07 14:15:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.039 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22000158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.511 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.518 2 DEBUG nova.compute.provider_tree [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:01 compute-0 ovn_controller[151684]: 2025-10-07T14:15:01Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:b3:7c 10.100.0.4
Oct 07 14:15:01 compute-0 ovn_controller[151684]: 2025-10-07T14:15:01Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:b3:7c 10.100.0.4
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.597 2 DEBUG nova.scheduler.client.report [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.4 KiB/s wr, 1 op/s
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.759 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.760 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/22000158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.913 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.913 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.943 2 DEBUG nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.944 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.944 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.945 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.945 2 DEBUG nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.945 2 WARNING nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 for instance with vm_state active and task_state None.
Oct 07 14:15:01 compute-0 nova_compute[259550]: 2025-10-07 14:15:01.969 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.006 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.169 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.171 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.171 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating image(s)
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.195 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.221 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.247 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.251 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.334 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.335 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.336 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.336 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.358 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.363 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.486 2 DEBUG nova.policy [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.669 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.752 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.854 2 DEBUG nova.objects.instance [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.890 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.891 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ensure instance console log exists: /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.891 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.892 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:02 compute-0 nova_compute[259550]: 2025-10-07 14:15:02.892 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:02 compute-0 ceph-mon[74295]: pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.4 KiB/s wr, 1 op/s
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.052 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port be1f37db-0265-418f-bc5a-36bd71615d14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.052 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.071 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.167 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.168 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.191 2 DEBUG nova.objects.instance [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.222 2 DEBUG nova.virt.libvirt.vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.223 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.224 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.228 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.230 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.232 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.233 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:2b:f6:d8"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <target dev="tap9a533309-4d"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.238 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.242 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <name>instance-00000031</name>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:65:98:6b'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tapfdcb59f4-9f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:2b:f6:d8'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tap9a533309-4d'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:02:a9'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tap3e6d7010-f7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:4d:b3:7c'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tapbe1f37db-02'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </target>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </console>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:03 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.243 2 INFO nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the persistent domain config.
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.243 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap9a533309-4d with device alias net1 from instance eb8777f3-5daa-49c7-8994-687012f20453 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.243 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:2b:f6:d8"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <target dev="tap9a533309-4d"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.320 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.321 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:03 compute-0 kernel: tap9a533309-4d (unregistering): left promiscuous mode
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.346 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:03 compute-0 NetworkManager[44949]: <info>  [1759846503.3538] device (tap9a533309-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:03 compute-0 ovn_controller[151684]: 2025-10-07T14:15:03Z|00439|binding|INFO|Releasing lport 9a533309-4d4d-4458-9a27-3fe85361ab15 from this chassis (sb_readonly=0)
Oct 07 14:15:03 compute-0 ovn_controller[151684]: 2025-10-07T14:15:03Z|00440|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 down in Southbound
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 ovn_controller[151684]: 2025-10-07T14:15:03Z|00441|binding|INFO|Removing iface tap9a533309-4d ovn-installed in OVS
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.369 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846503.3692205, eb8777f3-5daa-49c7-8994-687012f20453 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.371 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap9a533309-4d with device alias net1 for instance eb8777f3-5daa-49c7-8994-687012f20453 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.371 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.373 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:f6:d8 10.100.0.8'], port_security=['fa:16:3e:2b:f6:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9a533309-4d4d-4458-9a27-3fe85361ab15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.375 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <name>instance-00000031</name>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:65:98:6b'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tapfdcb59f4-9f'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:02:a9'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tap3e6d7010-f7'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:4d:b3:7c'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target dev='tapbe1f37db-02'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='net3'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       </target>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </console>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:03 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.375 2 INFO nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the live domain config.
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.376 2 DEBUG nova.virt.libvirt.vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.376 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.377 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.377 2 DEBUG os_vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a533309-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.389 2 INFO os_vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.390 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:03 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9af96df-ee32-48ee-b1eb-28d1c446127f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.423 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0b58c35e-6fb3-4574-99db-f87c6f62d046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.426 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffb493f-7f67-4574-9a71-5990ab780ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.433 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.434 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.441 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.442 2 INFO nova.compute.claims [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.451 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef746da5-e1eb-4c4b-a9aa-b9ce20cd86d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b78f8a4d-69cd-4aab-a324-2dfa1f6b3949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318012, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20daee53-81d1-40b6-83a5-7f302c88c79d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318013, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318013, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.490 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.515 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Successfully created port: 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.612 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.1 KiB/s wr, 1 op/s
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.936 2 DEBUG nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.936 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:03 compute-0 nova_compute[259550]: 2025-10-07 14:15:03.938 2 WARNING nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.028 2 DEBUG nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.030 2 DEBUG nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.030 2 WARNING nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 for instance with vm_state active and task_state None.
Oct 07 14:15:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132550295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.092 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.099 2 DEBUG nova.compute.provider_tree [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.116 2 DEBUG nova.scheduler.client.report [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.144 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.145 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.189 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.189 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.207 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.233 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.311 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.312 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.312 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating image(s)
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.331 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.355 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.377 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.380 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.454 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.456 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.457 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.458 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.490 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.495 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.804 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.878 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] resizing rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:04 compute-0 ceph-mon[74295]: pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.1 KiB/s wr, 1 op/s
Oct 07 14:15:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3132550295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:04.954 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:04.955 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:15:04 compute-0 nova_compute[259550]: 2025-10-07 14:15:04.986 2 DEBUG nova.objects.instance [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.004 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.004 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Ensure instance console log exists: /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.089 2 DEBUG nova.policy [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b202876689054b5ebeef4c4648b455bf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a497c44829943d787416adb835d66e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.698 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.699 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:05 compute-0 nova_compute[259550]: 2025-10-07 14:15:05.700 2 DEBUG nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 230 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 928 KiB/s wr, 15 op/s
Oct 07 14:15:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.256 2 WARNING nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.256 2 INFO nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface 9a533309-4d4d-4458-9a27-3fe85361ab15; detaching it from the instance and deleting it from the info cache
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.257 2 DEBUG nova.network.neutron [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.276 2 DEBUG nova.objects.instance [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'system_metadata' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.298 2 DEBUG nova.objects.instance [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.317 2 DEBUG nova.virt.libvirt.vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.318 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.319 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.322 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.325 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <name>instance-00000031</name>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:65:98:6b'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tapfdcb59f4-9f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:02:a9'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tap3e6d7010-f7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:4d:b3:7c'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tapbe1f37db-02'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </target>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </console>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:06 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.327 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.330 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <name>instance-00000031</name>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:65:98:6b'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tapfdcb59f4-9f'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:02:a9'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tap3e6d7010-f7'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:4d:b3:7c'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target dev='tapbe1f37db-02'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='net3'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       </target>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </console>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </input>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:15:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:06 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.330 2 WARNING nova.virt.libvirt.driver [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detaching interface fa:16:3e:2b:f6:d8 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9a533309-4d' not found.
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.331 2 DEBUG nova.virt.libvirt.vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.331 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.332 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.333 2 DEBUG os_vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a533309-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.339 2 INFO os_vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.339 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:15:06</nova:creationTime>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct 07 14:15:06 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:15:06 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:15:06 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:15:06 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:15:06 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.643 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Successfully updated port: 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.836 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Successfully created port: 9999835e-253e-4f1f-82c7-59a30f3e1537 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:06 compute-0 ceph-mon[74295]: pgmap v1515: 305 pgs: 305 active+clean; 230 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 928 KiB/s wr, 15 op/s
Oct 07 14:15:06 compute-0 nova_compute[259550]: 2025-10-07 14:15:06.974 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.399 161536 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 4cdb8222-4ae2-4185-90c6-f5e1374079e9 with type ""
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.400 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:b3:7c 10.100.0.4'], port_security=['fa:16:3e:4d:b3:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=be1f37db-0265-418f-bc5a-36bd71615d14) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.402 161536 INFO neutron.agent.ovn.metadata.agent [-] Port be1f37db-0265-418f-bc5a-36bd71615d14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.404 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00442|binding|INFO|Removing iface tapbe1f37db-02 ovn-installed in OVS
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00443|binding|INFO|Removing lport be1f37db-0265-418f-bc5a-36bd71615d14 ovn-installed in OVS
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68814c89-5f15-4f65-afee-44cd6e2d0964]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea3db4e-c2fd-47c6-85f8-31b84bcce89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.460 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b544677-04e0-4736-8ded-81e1329ae6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.489 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dfb984-60ec-4b51-a108-95b46c252a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97cb26e8-ba24-4d84-a31b-b3269b634613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 13, 'rx_bytes': 1126, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 13, 'rx_bytes': 1126, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318207, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.525 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97a58195-3289-4c5d-b26e-fe066b34d8f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.527 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.531 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.531 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.624 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.626 2 INFO nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Terminating instance
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.627 2 DEBUG nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:15:07 compute-0 kernel: tapfdcb59f4-9f (unregistering): left promiscuous mode
Oct 07 14:15:07 compute-0 NetworkManager[44949]: <info>  [1759846507.7324] device (tapfdcb59f4-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 271 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00444|binding|INFO|Releasing lport fdcb59f4-9f89-4147-941b-a28bfa0621bf from this chassis (sb_readonly=0)
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00445|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf down in Southbound
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00446|binding|INFO|Removing iface tapfdcb59f4-9f ovn-installed in OVS
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.747 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:98:6b 10.100.0.13'], port_security=['fa:16:3e:65:98:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '206b777f-07ac-463f-aac7-dcc9bac5a7aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fdcb59f4-9f89-4147-941b-a28bfa0621bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.748 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fdcb59f4-9f89-4147-941b-a28bfa0621bf in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:15:07 compute-0 kernel: tap3e6d7010-f7 (unregistering): left promiscuous mode
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.752 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:07 compute-0 NetworkManager[44949]: <info>  [1759846507.7576] device (tap3e6d7010-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00447|binding|INFO|Releasing lport 3e6d7010-f744-42e8-b831-8a1955357b14 from this chassis (sb_readonly=0)
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00448|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 down in Southbound
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_controller[151684]: 2025-10-07T14:15:07Z|00449|binding|INFO|Removing iface tap3e6d7010-f7 ovn-installed in OVS
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.775 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:02:a9 10.100.0.6'], port_security=['fa:16:3e:04:02:a9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3e6d7010-f744-42e8-b831-8a1955357b14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc76dced-f39f-495a-88ed-76f5eaca257f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 kernel: tapbe1f37db-02 (unregistering): left promiscuous mode
Oct 07 14:15:07 compute-0 NetworkManager[44949]: <info>  [1759846507.8036] device (tapbe1f37db-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.813 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb6c84b-fd0e-4518-84d2-9acb45ac9e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a32588-a538-4a74-927b-062459bf1c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.853 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aa31e42c-1974-4107-b23d-84973e04cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 07 14:15:07 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Consumed 16.730s CPU time.
Oct 07 14:15:07 compute-0 systemd-machined[214580]: Machine qemu-56-instance-00000031 terminated.
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.878 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e951296e-c83c-4ad3-9d4e-5b6e25764699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 15, 'rx_bytes': 1126, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 15, 'rx_bytes': 1126, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318231, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.898 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b8863-3e99-4771-b925-1c2833773e73]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318232, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318232, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.900 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 nova_compute[259550]: 2025-10-07 14:15:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.914 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.914 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.915 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.916 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.916 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3e6d7010-f744-42e8-b831-8a1955357b14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.918 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1d9f332-f920-4d6e-8e91-dd13ec334d51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.919 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6bea6e48-b4ac-44cc-b81b-282249535bfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.920 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace which is not needed anymore
Oct 07 14:15:08 compute-0 NetworkManager[44949]: <info>  [1759846508.0690] manager: (tap3e6d7010-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 NetworkManager[44949]: <info>  [1759846508.0839] manager: (tapbe1f37db-02): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : haproxy version is 2.8.14-c23fe91
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : path to executable is /usr/sbin/haproxy
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : Exiting Master process...
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : Exiting Master process...
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [ALERT]    (314592) : Current worker (314594) exited with code 143 (Terminated)
Oct 07 14:15:08 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : All workers exited. Exiting... (0)
Oct 07 14:15:08 compute-0 systemd[1]: libpod-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope: Deactivated successfully.
Oct 07 14:15:08 compute-0 podman[318253]: 2025-10-07 14:15:08.098471622 +0000 UTC m=+0.062844081 container died 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.112 2 INFO nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.119 2 INFO nova.virt.libvirt.driver [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance destroyed successfully.
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.119 2 DEBUG nova.objects.instance [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327-userdata-shm.mount: Deactivated successfully.
Oct 07 14:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-16461d24c3ac066e546c480d661cadac4a5387b7af213607c5bfba84e3d1e3d1-merged.mount: Deactivated successfully.
Oct 07 14:15:08 compute-0 podman[318253]: 2025-10-07 14:15:08.15214574 +0000 UTC m=+0.116518199 container cleanup 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:15:08 compute-0 systemd[1]: libpod-conmon-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope: Deactivated successfully.
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.176 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.177 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.178 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.178 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdcb59f4-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.192 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f')
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.193 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.193 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.194 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.195 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e6d7010-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.207 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7')
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.208 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.208 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.209 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.209 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe1f37db-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:08 compute-0 podman[318320]: 2025-10-07 14:15:08.217132937 +0000 UTC m=+0.042392891 container remove 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.217 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02')
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.223 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbf0b14-0edd-41f3-a3ea-56d4486406af]: (4, ('Tue Oct  7 02:15:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327)\n61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327\nTue Oct  7 02:15:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327)\n61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.225 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3fdd56-7def-4456-9c6e-595b90284874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.226 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:08 compute-0 kernel: tapb1d9f332-f0: left promiscuous mode
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.247 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01b85d8d-be29-4e18-b6f2-a99ae05501bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.274 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5117107-9489-431f-8eae-b29942fca9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfb307-8deb-42ae-8eed-8344e16af5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.296 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58d73921-61b2-4115-be4c-06b7c6c6c1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702775, 'reachable_time': 39736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318355, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 systemd[1]: run-netns-ovnmeta\x2db1d9f332\x2df920\x2d4d6e\x2d8e91\x2ddd13ec334d51.mount: Deactivated successfully.
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.302 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:15:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.302 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9eca11db-2981-4c22-9b7c-986c9e5b59c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.336 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.336 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.337 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.337 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.338 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.338 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.639 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-changed-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.640 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Refreshing instance network info cache due to event network-changed-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.642 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.672 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.686 2 INFO nova.virt.libvirt.driver [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deleting instance files /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453_del
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.687 2 INFO nova.virt.libvirt.driver [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deletion of /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453_del complete
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.694 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.694 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance network_info: |[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.695 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.695 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Refreshing network info cache for port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.698 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start _get_guest_xml network_info=[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.703 2 WARNING nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.709 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.717 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.719 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.721 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.721 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.726 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.771 2 INFO nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.772 2 DEBUG oslo.service.loopingcall [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.773 2 DEBUG nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:15:08 compute-0 nova_compute[259550]: 2025-10-07 14:15:08.774 2 DEBUG nova.network.neutron [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:15:08 compute-0 ceph-mon[74295]: pgmap v1516: 305 pgs: 305 active+clean; 271 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Oct 07 14:15:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211987338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.201 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.228 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.233 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.347 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port be1f37db-0265-418f-bc5a-36bd71615d14 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.349 2 DEBUG nova.network.neutron [-] Unable to show port be1f37db-0265-418f-bc5a-36bd71615d14 as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Oct 07 14:15:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158229080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.677 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.679 2 DEBUG nova.virt.libvirt.vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTest
OtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:02Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.679 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.680 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.682 2 DEBUG nova.objects.instance [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.695 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <uuid>a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</uuid>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <name>instance-00000035</name>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-2003953244</nova:name>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:08</nova:creationTime>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <nova:port uuid="2781ab1e-ba6c-4689-8da2-ddcf85b31ca8">
Oct 07 14:15:09 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="serial">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="uuid">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk">
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config">
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:d4:48:b2"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <target dev="tap2781ab1e-ba"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log" append="off"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:09 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:09 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:09 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:09 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:09 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.695 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Preparing to wait for external event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.virt.libvirt.vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerA
ctionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:02Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.698 2 DEBUG os_vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2781ab1e-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2781ab1e-ba, col_values=(('external_ids', {'iface-id': '2781ab1e-ba6c-4689-8da2-ddcf85b31ca8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:48:b2', 'vm-uuid': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:09 compute-0 NetworkManager[44949]: <info>  [1759846509.7048] manager: (tap2781ab1e-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.711 2 INFO os_vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')
Oct 07 14:15:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 245 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:d4:48:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.761 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Using config drive
Oct 07 14:15:09 compute-0 nova_compute[259550]: 2025-10-07 14:15:09.782 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1211987338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1158229080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.395 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Successfully updated port: 9999835e-253e-4f1f-82c7-59a30f3e1537 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquired lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.790 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.887 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating config drive at /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.893 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v_2snf2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:10 compute-0 ceph-mon[74295]: pgmap v1517: 305 pgs: 305 active+clean; 245 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.955 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.956 2 WARNING nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with vm_state active and task_state deleting.
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:10 compute-0 nova_compute[259550]: 2025-10-07 14:15:10.958 2 WARNING nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state deleting.
Oct 07 14:15:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.022 2 DEBUG nova.compute.manager [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-changed-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.023 2 DEBUG nova.compute.manager [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Refreshing instance network info cache due to event network-changed-9999835e-253e-4f1f-82c7-59a30f3e1537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.023 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.051 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v_2snf2" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.076 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.080 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.275 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.276 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting local config drive /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config because it was imported into RBD.
Oct 07 14:15:11 compute-0 kernel: tap2781ab1e-ba: entered promiscuous mode
Oct 07 14:15:11 compute-0 NetworkManager[44949]: <info>  [1759846511.3416] manager: (tap2781ab1e-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 ovn_controller[151684]: 2025-10-07T14:15:11Z|00450|binding|INFO|Claiming lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for this chassis.
Oct 07 14:15:11 compute-0 ovn_controller[151684]: 2025-10-07T14:15:11Z|00451|binding|INFO|2781ab1e-ba6c-4689-8da2-ddcf85b31ca8: Claiming fa:16:3e:d4:48:b2 10.100.0.5
Oct 07 14:15:11 compute-0 ovn_controller[151684]: 2025-10-07T14:15:11Z|00452|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 ovn-installed in OVS
Oct 07 14:15:11 compute-0 ovn_controller[151684]: 2025-10-07T14:15:11Z|00453|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 up in Southbound
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.362 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.366 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fda0ac18-6e4a-40af-b653-87e76b2b3f14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 systemd-machined[214580]: New machine qemu-61-instance-00000035.
Oct 07 14:15:11 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000035.
Oct 07 14:15:11 compute-0 systemd-udevd[318517]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[80f6c820-e464-4dee-acea-02a2fbbf1435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.451 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85d3987b-cc48-4024-b38a-8b640b4ab05c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 NetworkManager[44949]: <info>  [1759846511.4529] device (tap2781ab1e-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:11 compute-0 NetworkManager[44949]: <info>  [1759846511.4545] device (tap2781ab1e-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:11 compute-0 podman[318490]: 2025-10-07 14:15:11.494062143 +0000 UTC m=+0.108707613 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fd63be78-7a8d-4aa5-a68d-f4a8e53f6703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b817283-2ffe-4f09-904f-724f4100dae6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318545, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f103a4-bc27-4b9d-8283-55f4e18a3695]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318546, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318546, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.536 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:11 compute-0 podman[318491]: 2025-10-07 14:15:11.539117453 +0000 UTC m=+0.151580705 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.633 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updated VIF entry in instance network info cache for port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.634 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.649 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.649 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.649 2 INFO nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface be1f37db-0265-418f-bc5a-36bd71615d14; detaching it from the instance and deleting it from the info cache
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.650 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:11 compute-0 nova_compute[259550]: 2025-10-07 14:15:11.668 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detach interface failed, port_id=be1f37db-0265-418f-bc5a-36bd71615d14, reason: Instance eb8777f3-5daa-49c7-8994-687012f20453 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:15:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.446 2 DEBUG nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.472 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.487 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846512.486061, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.487 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Started (Lifecycle Event)
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.507 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 9.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846512.4861999, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.516 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Paused (Lifecycle Event)
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.538 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.546 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.560 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.566 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Releasing lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.567 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance network_info: |[{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.568 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.569 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Refreshing network info cache for port 9999835e-253e-4f1f-82c7-59a30f3e1537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.575 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start _get_guest_xml network_info=[{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.582 2 WARNING nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.591 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.592 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.596 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.596 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:12 compute-0 nova_compute[259550]: 2025-10-07 14:15:12.604 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:12 compute-0 ceph-mon[74295]: pgmap v1518: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct 07 14:15:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848191264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.092 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.117 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.122 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.320 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Processing event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.322 2 INFO nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface fdcb59f4-9f89-4147-941b-a28bfa0621bf; detaching it from the instance and deleting it from the info cache
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.network.neutron [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.328 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.349 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846513.348338, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.349 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Resumed (Lifecycle Event)
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.354 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.358 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detach interface failed, port_id=fdcb59f4-9f89-4147-941b-a28bfa0621bf, reason: Instance eb8777f3-5daa-49c7-8994-687012f20453 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.361 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance spawned successfully.
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.362 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.371 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.375 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.394 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.394 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.396 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.406 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.468 2 INFO nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 11.30 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.469 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.542 2 INFO nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 12.83 seconds to build instance.
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.565 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163290645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.610 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.612 2 DEBUG nova.virt.libvirt.vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:04Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.613 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.614 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.616 2 DEBUG nova.objects.instance [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.638 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <uuid>52aed8a1-32e4-4242-881e-1b40f79f09e1</uuid>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <name>instance-00000036</name>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1063149174</nova:name>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:12</nova:creationTime>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:user uuid="b202876689054b5ebeef4c4648b455bf">tempest-InstanceActionsNegativeTestJSON-243150513-project-member</nova:user>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:project uuid="0a497c44829943d787416adb835d66e5">tempest-InstanceActionsNegativeTestJSON-243150513</nova:project>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <nova:port uuid="9999835e-253e-4f1f-82c7-59a30f3e1537">
Oct 07 14:15:13 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="serial">52aed8a1-32e4-4242-881e-1b40f79f09e1</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="uuid">52aed8a1-32e4-4242-881e-1b40f79f09e1</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/52aed8a1-32e4-4242-881e-1b40f79f09e1_disk">
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config">
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:fb:32:36"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <target dev="tap9999835e-25"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/console.log" append="off"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:13 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:13 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:13 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:13 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:13 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Preparing to wait for external event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.640 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.640 2 DEBUG nova.virt.libvirt.vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:04Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.641 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.641 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.642 2 DEBUG os_vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9999835e-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9999835e-25, col_values=(('external_ids', {'iface-id': '9999835e-253e-4f1f-82c7-59a30f3e1537', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:32:36', 'vm-uuid': '52aed8a1-32e4-4242-881e-1b40f79f09e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:13 compute-0 NetworkManager[44949]: <info>  [1759846513.6486] manager: (tap9999835e-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.655 2 INFO os_vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25')
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.673 2 DEBUG nova.network.neutron [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.696 2 INFO nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 4.92 seconds to deallocate network for instance.
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No VIF found with MAC fa:16:3e:fb:32:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.751 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Using config drive
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.771 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.808 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.808 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:13 compute-0 nova_compute[259550]: 2025-10-07 14:15:13.934 2 DEBUG oslo_concurrency.processutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:13.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1848191264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1163290645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.405 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating config drive at /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.412 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ffkaf9_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048651713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.466 2 DEBUG oslo_concurrency.processutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.473 2 DEBUG nova.compute.provider_tree [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.567 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ffkaf9_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.612 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.617 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.662 2 DEBUG nova.scheduler.client.report [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.720 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.796 2 INFO nova.scheduler.client.report [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance eb8777f3-5daa-49c7-8994-687012f20453
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.797 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.798 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deleting local config drive /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config because it was imported into RBD.
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.858 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updated VIF entry in instance network info cache for port 9999835e-253e-4f1f-82c7-59a30f3e1537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.859 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:14 compute-0 NetworkManager[44949]: <info>  [1759846514.8685] manager: (tap9999835e-25): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 07 14:15:14 compute-0 kernel: tap9999835e-25: entered promiscuous mode
Oct 07 14:15:14 compute-0 ovn_controller[151684]: 2025-10-07T14:15:14Z|00454|binding|INFO|Claiming lport 9999835e-253e-4f1f-82c7-59a30f3e1537 for this chassis.
Oct 07 14:15:14 compute-0 ovn_controller[151684]: 2025-10-07T14:15:14Z|00455|binding|INFO|9999835e-253e-4f1f-82c7-59a30f3e1537: Claiming fa:16:3e:fb:32:36 10.100.0.12
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:14 compute-0 systemd-machined[214580]: New machine qemu-62-instance-00000036.
Oct 07 14:15:14 compute-0 ovn_controller[151684]: 2025-10-07T14:15:14Z|00456|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 ovn-installed in OVS
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:14 compute-0 nova_compute[259550]: 2025-10-07 14:15:14.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:14 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Oct 07 14:15:14 compute-0 systemd-udevd[318751]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:14 compute-0 NetworkManager[44949]: <info>  [1759846514.9525] device (tap9999835e-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:14 compute-0 NetworkManager[44949]: <info>  [1759846514.9536] device (tap9999835e-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:14 compute-0 ceph-mon[74295]: pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct 07 14:15:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3048651713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:15 compute-0 ovn_controller[151684]: 2025-10-07T14:15:15Z|00457|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 up in Southbound
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.075 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:32:36 10.100.0.12'], port_security=['fa:16:3e:fb:32:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '52aed8a1-32e4-4242-881e-1b40f79f09e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a497c44829943d787416adb835d66e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad3edd26-6a78-4069-a120-e0c484f5035e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d008834-8f4c-4a69-8755-e7ba45e692b8, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9999835e-253e-4f1f-82c7-59a30f3e1537) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.078 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9999835e-253e-4f1f-82c7-59a30f3e1537 in datapath eff78bf7-14f6-4b50-9495-e8815b9b3aa7 bound to our chassis
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.079 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.082 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eff78bf7-14f6-4b50-9495-e8815b9b3aa7
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a62490f-aa24-4cb1-89a1-82f43eab984e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.102 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeff78bf7-11 in ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.105 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeff78bf7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[180c98ec-f84b-40c9-9c41-b791edfd02a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.107 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb79b7b4-cb9c-4525-be10-58cc2273ada3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.125 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7ac55b-6efa-4bc6-8828-a435aad17c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15234c32-5885-4deb-b575-b7b57b340fbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.184 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1e92cb7a-ca98-453f-a336-84d870357c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 NetworkManager[44949]: <info>  [1759846515.1979] manager: (tapeff78bf7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ebc0ee-7ea6-4bb1-8848-1819e7135275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 systemd-udevd[318754]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.214 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.242 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83f4324d-df5b-47de-acf1-8069fd4102a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.245 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[52f7273e-a095-4f2b-8787-108cf274f5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 NetworkManager[44949]: <info>  [1759846515.2748] device (tapeff78bf7-10): carrier: link connected
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.282 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1960ce76-cfe7-4cdd-9cec-21512e0a5311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[295a8cc7-9d89-437d-bf55-26ee3bbbb754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeff78bf7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:73:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709883, 'reachable_time': 38864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318786, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5aacd3d9-7a9e-4072-8ddd-4c6b0a076b01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:738a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709883, 'tstamp': 709883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318787, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.376 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b29f4d-eaca-4a3a-871d-84f2c8652d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeff78bf7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:73:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709883, 'reachable_time': 38864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318788, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5124ac-4cf5-4675-9551-141a898986dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.526 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b793aa8-fa1f-44ab-a4e7-2af5cd1fcb9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff78bf7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeff78bf7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:15 compute-0 NetworkManager[44949]: <info>  [1759846515.5325] manager: (tapeff78bf7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct 07 14:15:15 compute-0 kernel: tapeff78bf7-10: entered promiscuous mode
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeff78bf7-10, col_values=(('external_ids', {'iface-id': 'cc7c289d-e80e-4365-a774-6c85c9835f56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:15 compute-0 ovn_controller[151684]: 2025-10-07T14:15:15Z|00458|binding|INFO|Releasing lport cc7c289d-e80e-4365-a774-6c85c9835f56 from this chassis (sb_readonly=0)
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.561 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80f717ea-bdb4-48fe-8db0-0f760a0e1a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.562 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-eff78bf7-14f6-4b50-9495-e8815b9b3aa7
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID eff78bf7-14f6-4b50-9495-e8815b9b3aa7
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:15:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.563 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'env', 'PROCESS_TAG=haproxy-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.570 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.572 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.572 2 WARNING nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state active and task_state None.
Oct 07 14:15:15 compute-0 nova_compute[259550]: 2025-10-07 14:15:15.572 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 07 14:15:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:15 compute-0 ceph-mon[74295]: pgmap v1520: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 07 14:15:15 compute-0 podman[318862]: 2025-10-07 14:15:15.986911199 +0000 UTC m=+0.065083280 container create 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 07 14:15:16 compute-0 podman[318862]: 2025-10-07 14:15:15.948071224 +0000 UTC m=+0.026243315 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:15:16 compute-0 systemd[1]: Started libpod-conmon-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope.
Oct 07 14:15:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f008917257b7e03664568af339d089295b0525495a36406dab66623e6accadcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.093 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846516.091174, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Started (Lifecycle Event)
Oct 07 14:15:16 compute-0 podman[318862]: 2025-10-07 14:15:16.104678211 +0000 UTC m=+0.182850282 container init 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:15:16 compute-0 podman[318862]: 2025-10-07 14:15:16.110831233 +0000 UTC m=+0.189003304 container start 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:15:16 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : New worker (318883) forked
Oct 07 14:15:16 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : Loading success.
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.150 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.155 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846516.0919826, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.155 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Paused (Lifecycle Event)
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.176 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.180 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.216 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:16 compute-0 sudo[318892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:16 compute-0 sudo[318892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:16 compute-0 sudo[318892]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:16 compute-0 sudo[318917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:15:16 compute-0 sudo[318917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:16 compute-0 sudo[318917]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:16 compute-0 sudo[318942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:16 compute-0 sudo[318942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:16 compute-0 sudo[318942]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.810 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.811 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.811 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.815 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.816 2 DEBUG nova.objects.instance [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'flavor' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:16 compute-0 sudo[318967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:15:16 compute-0 sudo[318967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:16 compute-0 nova_compute[259550]: 2025-10-07 14:15:16.848 2 DEBUG nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:17 compute-0 podman[319064]: 2025-10-07 14:15:17.420629614 +0000 UTC m=+0.077725594 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 07 14:15:17 compute-0 podman[319064]: 2025-10-07 14:15:17.534237936 +0000 UTC m=+0.191333946 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG nova.compute.manager [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.725 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.725 2 DEBUG nova.compute.manager [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Processing event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.726 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.730 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846517.7297144, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Resumed (Lifecycle Event)
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.732 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.736 2 INFO nova.virt.libvirt.driver [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance spawned successfully.
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.736 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.916 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.924 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.924 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.925 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.925 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.926 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.926 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:17 compute-0 nova_compute[259550]: 2025-10-07 14:15:17.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.182 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:18 compute-0 sudo[318967]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:15:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.343 2 INFO nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 14.03 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.343 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:15:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:18 compute-0 sudo[319218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:18 compute-0 sudo[319218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:18 compute-0 sudo[319218]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:18 compute-0 sudo[319243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:15:18 compute-0 sudo[319243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:18 compute-0 sudo[319243]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.505 2 INFO nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 15.10 seconds to build instance.
Oct 07 14:15:18 compute-0 sudo[319268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:18 compute-0 sudo[319268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:18 compute-0 sudo[319268]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:18 compute-0 sudo[319293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:15:18 compute-0 sudo[319293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.627 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:18 compute-0 nova_compute[259550]: 2025-10-07 14:15:18.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:18 compute-0 ceph-mon[74295]: pgmap v1521: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Oct 07 14:15:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:19 compute-0 sudo[319293]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f76aaa61-c543-4163-9875-6a2685f3023a does not exist
Oct 07 14:15:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3c12dd02-6d54-4e96-9583-532d2faf7a76 does not exist
Oct 07 14:15:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e436ff3-ec4c-4f08-a05a-718131bbe410 does not exist
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:15:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:15:19 compute-0 sudo[319350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:19 compute-0 sudo[319350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:19 compute-0 sudo[319350]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:19 compute-0 sudo[319375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:15:19 compute-0 sudo[319375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:19 compute-0 sudo[319375]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:19 compute-0 sudo[319400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:19 compute-0 sudo[319400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:19 compute-0 sudo[319400]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:19 compute-0 sudo[319425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:15:19 compute-0 sudo[319425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 908 KiB/s wr, 148 op/s
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:15:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.872144905 +0000 UTC m=+0.042531444 container create 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:15:19 compute-0 systemd[1]: Started libpod-conmon-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope.
Oct 07 14:15:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.852683681 +0000 UTC m=+0.023070240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.962385639 +0000 UTC m=+0.132772208 container init 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.974744586 +0000 UTC m=+0.145131125 container start 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.979181153 +0000 UTC m=+0.149567742 container attach 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:15:19 compute-0 gallant_cannon[319506]: 167 167
Oct 07 14:15:19 compute-0 systemd[1]: libpod-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope: Deactivated successfully.
Oct 07 14:15:19 compute-0 conmon[319506]: conmon 941401266a500f45164a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope/container/memory.events
Oct 07 14:15:19 compute-0 podman[319490]: 2025-10-07 14:15:19.985079659 +0000 UTC m=+0.155466198 container died 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cab6bb67954298f054bf1df89fbe4713ba2a1385c24c37e9c1bc6a3f7048888-merged.mount: Deactivated successfully.
Oct 07 14:15:20 compute-0 podman[319490]: 2025-10-07 14:15:20.028865435 +0000 UTC m=+0.199251974 container remove 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:15:20 compute-0 systemd[1]: libpod-conmon-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope: Deactivated successfully.
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.096 2 DEBUG nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] No waiting events found dispatching network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.098 2 WARNING nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received unexpected event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 for instance with vm_state active and task_state None.
Oct 07 14:15:20 compute-0 podman[319530]: 2025-10-07 14:15:20.218200957 +0000 UTC m=+0.051578903 container create 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:15:20 compute-0 systemd[1]: Started libpod-conmon-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope.
Oct 07 14:15:20 compute-0 podman[319530]: 2025-10-07 14:15:20.19633432 +0000 UTC m=+0.029712256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:20 compute-0 podman[319530]: 2025-10-07 14:15:20.346712752 +0000 UTC m=+0.180090688 container init 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:15:20 compute-0 podman[319530]: 2025-10-07 14:15:20.359175521 +0000 UTC m=+0.192553437 container start 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:15:20 compute-0 podman[319530]: 2025-10-07 14:15:20.362847598 +0000 UTC m=+0.196225534 container attach 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.442 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.444 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.445 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.445 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.446 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.447 2 INFO nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Terminating instance
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.448 2 DEBUG nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:15:20 compute-0 kernel: tap9999835e-25 (unregistering): left promiscuous mode
Oct 07 14:15:20 compute-0 NetworkManager[44949]: <info>  [1759846520.4970] device (tap9999835e-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:20 compute-0 ovn_controller[151684]: 2025-10-07T14:15:20Z|00459|binding|INFO|Releasing lport 9999835e-253e-4f1f-82c7-59a30f3e1537 from this chassis (sb_readonly=0)
Oct 07 14:15:20 compute-0 ovn_controller[151684]: 2025-10-07T14:15:20Z|00460|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 down in Southbound
Oct 07 14:15:20 compute-0 ovn_controller[151684]: 2025-10-07T14:15:20Z|00461|binding|INFO|Removing iface tap9999835e-25 ovn-installed in OVS
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:20 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 07 14:15:20 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 3.779s CPU time.
Oct 07 14:15:20 compute-0 systemd-machined[214580]: Machine qemu-62-instance-00000036 terminated.
Oct 07 14:15:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:32:36 10.100.0.12'], port_security=['fa:16:3e:fb:32:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '52aed8a1-32e4-4242-881e-1b40f79f09e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a497c44829943d787416adb835d66e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad3edd26-6a78-4069-a120-e0c484f5035e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d008834-8f4c-4a69-8755-e7ba45e692b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9999835e-253e-4f1f-82c7-59a30f3e1537) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9999835e-253e-4f1f-82c7-59a30f3e1537 in datapath eff78bf7-14f6-4b50-9495-e8815b9b3aa7 unbound from our chassis
Oct 07 14:15:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.672 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eff78bf7-14f6-4b50-9495-e8815b9b3aa7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:15:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ceedcbdf-ecb2-4214-a5d9-ebc998dfcf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.674 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 namespace which is not needed anymore
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.703 2 INFO nova.virt.libvirt.driver [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance destroyed successfully.
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.705 2 DEBUG nova.objects.instance [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'resources' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:20 compute-0 ceph-mon[74295]: pgmap v1522: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 908 KiB/s wr, 148 op/s
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.830 2 DEBUG nova.virt.libvirt.vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:18Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.832 2 DEBUG nova.network.os_vif_util [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.833 2 DEBUG nova.network.os_vif_util [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.833 2 DEBUG os_vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : haproxy version is 2.8.14-c23fe91
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : path to executable is /usr/sbin/haproxy
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : Exiting Master process...
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : Exiting Master process...
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9999835e-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [ALERT]    (318881) : Current worker (318883) exited with code 143 (Terminated)
Oct 07 14:15:20 compute-0 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : All workers exited. Exiting... (0)
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:20 compute-0 systemd[1]: libpod-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope: Deactivated successfully.
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:20 compute-0 nova_compute[259550]: 2025-10-07 14:15:20.843 2 INFO os_vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25')
Oct 07 14:15:20 compute-0 podman[319584]: 2025-10-07 14:15:20.849122114 +0000 UTC m=+0.053131275 container died 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3-userdata-shm.mount: Deactivated successfully.
Oct 07 14:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f008917257b7e03664568af339d089295b0525495a36406dab66623e6accadcf-merged.mount: Deactivated successfully.
Oct 07 14:15:20 compute-0 podman[319584]: 2025-10-07 14:15:20.928805799 +0000 UTC m=+0.132814970 container cleanup 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 14:15:20 compute-0 systemd[1]: libpod-conmon-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope: Deactivated successfully.
Oct 07 14:15:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:21 compute-0 podman[319628]: 2025-10-07 14:15:21.022379211 +0000 UTC m=+0.062059820 container remove 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b655a60a-dde8-487a-9c21-6853d91511a6]: (4, ('Tue Oct  7 02:15:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 (1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3)\n1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3\nTue Oct  7 02:15:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 (1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3)\n1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.032 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2781824-262e-42b3-bd45-debb7e0d477e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff78bf7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:21 compute-0 kernel: tapeff78bf7-10: left promiscuous mode
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a44b814f-6c54-43ae-9a66-a7d07f21ad15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.091 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f28a5e2-a67d-4f08-81c7-3e2ce270ab3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5254de-aa45-48cd-b17f-cf805e8493df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5751d15a-a85b-4090-ae44-776d06fad0be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709873, 'reachable_time': 19112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319643, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 systemd[1]: run-netns-ovnmeta\x2deff78bf7\x2d14f6\x2d4b50\x2d9495\x2de8815b9b3aa7.mount: Deactivated successfully.
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.124 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:15:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.124 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[bb24bc13-6568-4845-b313-48ecfad16940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.326 2 INFO nova.virt.libvirt.driver [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deleting instance files /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1_del
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.328 2 INFO nova.virt.libvirt.driver [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deletion of /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1_del complete
Oct 07 14:15:21 compute-0 focused_mcnulty[319546]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:15:21 compute-0 focused_mcnulty[319546]: --> relative data size: 1.0
Oct 07 14:15:21 compute-0 focused_mcnulty[319546]: --> All data devices are unavailable
Oct 07 14:15:21 compute-0 systemd[1]: libpod-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Deactivated successfully.
Oct 07 14:15:21 compute-0 systemd[1]: libpod-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Consumed 1.087s CPU time.
Oct 07 14:15:21 compute-0 podman[319530]: 2025-10-07 14:15:21.56440927 +0000 UTC m=+1.397787186 container died 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac-merged.mount: Deactivated successfully.
Oct 07 14:15:21 compute-0 podman[319530]: 2025-10-07 14:15:21.639863893 +0000 UTC m=+1.473241829 container remove 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:15:21 compute-0 systemd[1]: libpod-conmon-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Deactivated successfully.
Oct 07 14:15:21 compute-0 sudo[319425]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.682 2 INFO nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 1.23 seconds to destroy the instance on the hypervisor.
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.684 2 DEBUG oslo.service.loopingcall [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.685 2 DEBUG nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:15:21 compute-0 nova_compute[259550]: 2025-10-07 14:15:21.686 2 DEBUG nova.network.neutron [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:15:21 compute-0 sudo[319680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct 07 14:15:21 compute-0 sudo[319680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:21 compute-0 sudo[319680]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:21 compute-0 sudo[319705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:15:21 compute-0 sudo[319705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:21 compute-0 sudo[319705]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:21 compute-0 sudo[319730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:21 compute-0 sudo[319730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:21 compute-0 sudo[319730]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:21 compute-0 sudo[319755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:15:21 compute-0 sudo[319755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.335558531 +0000 UTC m=+0.050758911 container create a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:15:22 compute-0 systemd[1]: Started libpod-conmon-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope.
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.314031642 +0000 UTC m=+0.029232032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.427704376 +0000 UTC m=+0.142904776 container init a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.443372949 +0000 UTC m=+0.158573329 container start a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.447332124 +0000 UTC m=+0.162532524 container attach a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:15:22 compute-0 pedantic_mccarthy[319836]: 167 167
Oct 07 14:15:22 compute-0 systemd[1]: libpod-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope: Deactivated successfully.
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.452266784 +0000 UTC m=+0.167467164 container died a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-663394db2c2f07de4bf0e2f63ea5d949b92c7e78e8040ed449a88ad69e71013e-merged.mount: Deactivated successfully.
Oct 07 14:15:22 compute-0 podman[319820]: 2025-10-07 14:15:22.495590888 +0000 UTC m=+0.210791268 container remove a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:15:22 compute-0 systemd[1]: libpod-conmon-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope: Deactivated successfully.
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:15:22
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:22 compute-0 podman[319860]: 2025-10-07 14:15:22.691311289 +0000 UTC m=+0.053889555 container create b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:15:22 compute-0 systemd[1]: Started libpod-conmon-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope.
Oct 07 14:15:22 compute-0 podman[319860]: 2025-10-07 14:15:22.666048192 +0000 UTC m=+0.028626468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:22 compute-0 podman[319860]: 2025-10-07 14:15:22.784398898 +0000 UTC m=+0.146977174 container init b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:15:22 compute-0 podman[319860]: 2025-10-07 14:15:22.792587174 +0000 UTC m=+0.155165430 container start b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:15:22 compute-0 podman[319860]: 2025-10-07 14:15:22.796145489 +0000 UTC m=+0.158723765 container attach b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:15:22 compute-0 ceph-mon[74295]: pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:15:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:15:23 compute-0 nova_compute[259550]: 2025-10-07 14:15:23.099 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846508.098485, eb8777f3-5daa-49c7-8994-687012f20453 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:23 compute-0 nova_compute[259550]: 2025-10-07 14:15:23.102 2 INFO nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Stopped (Lifecycle Event)
Oct 07 14:15:23 compute-0 nova_compute[259550]: 2025-10-07 14:15:23.225 2 DEBUG nova.compute.manager [None req-1243766d-f832-439d-b8f3-ef56befec3e2 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]: {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     "0": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "devices": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "/dev/loop3"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             ],
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_name": "ceph_lv0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_size": "21470642176",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "name": "ceph_lv0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "tags": {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_name": "ceph",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.crush_device_class": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.encrypted": "0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_id": "0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.vdo": "0"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             },
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "vg_name": "ceph_vg0"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         }
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     ],
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     "1": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "devices": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "/dev/loop4"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             ],
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_name": "ceph_lv1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_size": "21470642176",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "name": "ceph_lv1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "tags": {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_name": "ceph",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.crush_device_class": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.encrypted": "0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_id": "1",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.vdo": "0"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             },
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "vg_name": "ceph_vg1"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         }
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     ],
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     "2": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "devices": [
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "/dev/loop5"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             ],
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_name": "ceph_lv2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_size": "21470642176",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "name": "ceph_lv2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "tags": {
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.cluster_name": "ceph",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.crush_device_class": "",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.encrypted": "0",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osd_id": "2",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:                 "ceph.vdo": "0"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             },
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "type": "block",
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:             "vg_name": "ceph_vg2"
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:         }
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]:     ]
Oct 07 14:15:23 compute-0 nifty_satoshi[319877]: }
Oct 07 14:15:23 compute-0 systemd[1]: libpod-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope: Deactivated successfully.
Oct 07 14:15:23 compute-0 podman[319860]: 2025-10-07 14:15:23.696540774 +0000 UTC m=+1.059119030 container died b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2-merged.mount: Deactivated successfully.
Oct 07 14:15:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 128 op/s
Oct 07 14:15:23 compute-0 podman[319860]: 2025-10-07 14:15:23.762098086 +0000 UTC m=+1.124676342 container remove b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:15:23 compute-0 systemd[1]: libpod-conmon-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope: Deactivated successfully.
Oct 07 14:15:23 compute-0 sudo[319755]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:23 compute-0 sudo[319898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:23 compute-0 sudo[319898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:23 compute-0 sudo[319898]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:23 compute-0 sudo[319923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:15:23 compute-0 sudo[319923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:23 compute-0 nova_compute[259550]: 2025-10-07 14:15:23.946 2 DEBUG nova.network.neutron [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:23 compute-0 sudo[319923]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:23 compute-0 nova_compute[259550]: 2025-10-07 14:15:23.979 2 INFO nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 2.29 seconds to deallocate network for instance.
Oct 07 14:15:23 compute-0 sudo[319948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:23 compute-0 sudo[319948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:23 compute-0 sudo[319948]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.029 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.030 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:24 compute-0 sudo[319973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:15:24 compute-0 sudo[319973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.070 2 DEBUG nova.compute.manager [req-75bd97a6-a110-40ee-88e7-1711f54d4afa req-01e097a1-92c1-432b-abb5-c22d4d7b1e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-deleted-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.147 2 DEBUG oslo_concurrency.processutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.424159425 +0000 UTC m=+0.051618064 container create b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:15:24 compute-0 systemd[1]: Started libpod-conmon-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope.
Oct 07 14:15:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.397668876 +0000 UTC m=+0.025127535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.508433712 +0000 UTC m=+0.135892391 container init b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.517826689 +0000 UTC m=+0.145285338 container start b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.521784294 +0000 UTC m=+0.149242953 container attach b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:15:24 compute-0 sharp_dijkstra[320071]: 167 167
Oct 07 14:15:24 compute-0 systemd[1]: libpod-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope: Deactivated successfully.
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.526989162 +0000 UTC m=+0.154447851 container died b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f2e1f79e348795de54c1945eca0eea2e3e1a0c39320ad6f2bcf3c19f888b49-merged.mount: Deactivated successfully.
Oct 07 14:15:24 compute-0 podman[320055]: 2025-10-07 14:15:24.572300038 +0000 UTC m=+0.199758677 container remove b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:15:24 compute-0 systemd[1]: libpod-conmon-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope: Deactivated successfully.
Oct 07 14:15:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874586540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.644 2 DEBUG oslo_concurrency.processutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.661 2 DEBUG nova.compute.provider_tree [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.679 2 DEBUG nova.scheduler.client.report [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.706 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.746 2 INFO nova.scheduler.client.report [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Deleted allocations for instance 52aed8a1-32e4-4242-881e-1b40f79f09e1
Oct 07 14:15:24 compute-0 podman[320098]: 2025-10-07 14:15:24.782300206 +0000 UTC m=+0.052225460 container create 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:15:24 compute-0 ceph-mon[74295]: pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 128 op/s
Oct 07 14:15:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1874586540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:24 compute-0 systemd[1]: Started libpod-conmon-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope.
Oct 07 14:15:24 compute-0 nova_compute[259550]: 2025-10-07 14:15:24.829 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:24 compute-0 podman[320098]: 2025-10-07 14:15:24.758689892 +0000 UTC m=+0.028615176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:15:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:24 compute-0 podman[320098]: 2025-10-07 14:15:24.890628418 +0000 UTC m=+0.160553692 container init 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:15:24 compute-0 podman[320098]: 2025-10-07 14:15:24.89941131 +0000 UTC m=+0.169336564 container start 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:15:24 compute-0 podman[320098]: 2025-10-07 14:15:24.910503493 +0000 UTC m=+0.180428957 container attach 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:15:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 191 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 165 op/s
Oct 07 14:15:25 compute-0 nova_compute[259550]: 2025-10-07 14:15:25.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:25 compute-0 eager_hellman[320114]: {
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_id": 2,
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "type": "bluestore"
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     },
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_id": 1,
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "type": "bluestore"
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     },
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_id": 0,
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:15:25 compute-0 eager_hellman[320114]:         "type": "bluestore"
Oct 07 14:15:25 compute-0 eager_hellman[320114]:     }
Oct 07 14:15:25 compute-0 eager_hellman[320114]: }
Oct 07 14:15:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:25 compute-0 systemd[1]: libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Deactivated successfully.
Oct 07 14:15:25 compute-0 systemd[1]: libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Consumed 1.063s CPU time.
Oct 07 14:15:25 compute-0 conmon[320114]: conmon 89d49500d5170b20fae3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope/container/memory.events
Oct 07 14:15:25 compute-0 podman[320098]: 2025-10-07 14:15:25.984947396 +0000 UTC m=+1.254872650 container died 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d-merged.mount: Deactivated successfully.
Oct 07 14:15:26 compute-0 podman[320098]: 2025-10-07 14:15:26.045682551 +0000 UTC m=+1.315607805 container remove 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:15:26 compute-0 systemd[1]: libpod-conmon-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Deactivated successfully.
Oct 07 14:15:26 compute-0 sudo[319973]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:15:26 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:15:26 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:26 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0d85fd2a-0794-42e0-a1a9-f4988b8a16c1 does not exist
Oct 07 14:15:26 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 6487c7c9-3441-4128-b178-4aab68a343f4 does not exist
Oct 07 14:15:26 compute-0 sudo[320159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:15:26 compute-0 sudo[320159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:26 compute-0 sudo[320159]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:26 compute-0 sudo[320184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:15:26 compute-0 sudo[320184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:15:26 compute-0 sudo[320184]: pam_unix(sudo:session): session closed for user root
Oct 07 14:15:26 compute-0 nova_compute[259550]: 2025-10-07 14:15:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:26 compute-0 ovn_controller[151684]: 2025-10-07T14:15:26Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:48:b2 10.100.0.5
Oct 07 14:15:26 compute-0 ovn_controller[151684]: 2025-10-07T14:15:26Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:48:b2 10.100.0.5
Oct 07 14:15:26 compute-0 ceph-mon[74295]: pgmap v1525: 305 pgs: 305 active+clean; 191 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 165 op/s
Oct 07 14:15:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:15:26 compute-0 nova_compute[259550]: 2025-10-07 14:15:26.912 2 DEBUG nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:15:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 178 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 838 KiB/s wr, 164 op/s
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.220 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.221 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.237 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.305 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.305 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.313 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.314 2 INFO nova.compute.claims [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.458 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:28 compute-0 ceph-mon[74295]: pgmap v1526: 305 pgs: 305 active+clean; 178 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 838 KiB/s wr, 164 op/s
Oct 07 14:15:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081888355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.938 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.946 2 DEBUG nova.compute.provider_tree [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.966 2 DEBUG nova.scheduler.client.report [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:28 compute-0 nova_compute[259550]: 2025-10-07 14:15:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.002 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.002 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.008 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.048 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.049 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.068 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.082 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:29 compute-0 kernel: tap2781ab1e-ba (unregistering): left promiscuous mode
Oct 07 14:15:29 compute-0 NetworkManager[44949]: <info>  [1759846529.1930] device (tap2781ab1e-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:29 compute-0 ovn_controller[151684]: 2025-10-07T14:15:29Z|00462|binding|INFO|Releasing lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 from this chassis (sb_readonly=0)
Oct 07 14:15:29 compute-0 ovn_controller[151684]: 2025-10-07T14:15:29Z|00463|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 down in Southbound
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:29 compute-0 ovn_controller[151684]: 2025-10-07T14:15:29Z|00464|binding|INFO|Removing iface tap2781ab1e-ba ovn-installed in OVS
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.235 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.236 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.237 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating image(s)
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.240 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.241 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65f4c121-31fa-4316-884f-2b7611b8c0a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 07 14:15:29 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000035.scope: Consumed 13.953s CPU time.
Oct 07 14:15:29 compute-0 systemd-machined[214580]: Machine qemu-61-instance-00000035 terminated.
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.303 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.302 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eb90d0e0-bcbd-4826-ba7a-b84eab67efaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.306 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed21cf3-1b2d-439b-8541-a3d9ae77de55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 podman[320251]: 2025-10-07 14:15:29.327443473 +0000 UTC m=+0.089748281 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 07 14:15:29 compute-0 podman[320254]: 2025-10-07 14:15:29.328675586 +0000 UTC m=+0.093556082 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.341 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02eebe2e-9f57-4624-b693-b8d2bc761a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.350 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[627a26cb-c87f-40bb-804d-0990818a0fa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320332, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.376 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4931be7a-34b6-4be1-af4a-478c469be848]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320350, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320350, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.382 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.391 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.461 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.462 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.463 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.463 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552535257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.488 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.491 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.521 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.628 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.628 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:15:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 195 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.791 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.848 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3081888355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3552535257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.885 2 DEBUG nova.policy [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.956 2 DEBUG nova.objects.instance [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.958 2 INFO nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance shutdown successfully after 13 seconds.
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.965 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.965 2 DEBUG nova.objects.instance [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'numa_topology' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Ensure instance console log exists: /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.979 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.979 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:29 compute-0 nova_compute[259550]: 2025-10-07 14:15:29.980 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.003 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.004 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3992MB free_disk=59.91245651245117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.033 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fc163bed-856c-4ea5-9bf3-6989fb1027eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.071 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.133 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.525 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully created port: 62690261-dde3-43ca-929a-e6b75a76bafb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279895366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.632 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.639 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.656 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.694 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.695 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:30 compute-0 ceph-mon[74295]: pgmap v1527: 305 pgs: 305 active+clean; 195 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Oct 07 14:15:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4279895366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.863 2 DEBUG nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.863 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:30 compute-0 nova_compute[259550]: 2025-10-07 14:15:30.864 2 WARNING nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.
Oct 07 14:15:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:31 compute-0 ovn_controller[151684]: 2025-10-07T14:15:31Z|00465|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.185 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully updated port: 62690261-dde3-43ca-929a-e6b75a76bafb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.296 2 DEBUG nova.compute.manager [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.297 2 DEBUG nova.compute.manager [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.297 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.393 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.695 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.696 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:31 compute-0 nova_compute[259550]: 2025-10-07 14:15:31.696 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:15:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 145 op/s
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.116 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.134 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.135 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance network_info: |[{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.135 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.136 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.142 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start _get_guest_xml network_info=[{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.150 2 WARNING nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.163 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.164 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.170 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.171 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.172 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.172 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.174 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.174 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.175 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.176 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.176 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.177 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.177 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.178 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.179 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.179 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.185 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.256 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.324 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.325 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.334 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.334 2 INFO nova.compute.claims [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017198106930064465 of space, bias 1.0, pg target 0.5159432079019339 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:15:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.498 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163787589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.661 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:15:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:15:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.711 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.717 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:32 compute-0 ceph-mon[74295]: pgmap v1528: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 145 op/s
Oct 07 14:15:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3163787589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:15:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:32 compute-0 nova_compute[259550]: 2025-10-07 14:15:32.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355181987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.009 2 INFO nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Rebuilding instance
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.026 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.032 2 DEBUG nova.compute.provider_tree [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.036 2 DEBUG nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.036 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.037 2 WARNING nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state rebuilding.
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.045 2 DEBUG nova.scheduler.client.report [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.082 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.083 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.131 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.132 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419391028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.155 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.161 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.162 2 DEBUG nova.virt.libvirt.vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.163 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.164 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.165 2 DEBUG nova.objects.instance [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.171 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.176 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <name>instance-00000037</name>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:32</nova:creationTime>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct 07 14:15:33 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="serial">188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="uuid">188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk">
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config">
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a5:aa:77"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <target dev="tap62690261-dd"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log" append="off"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:33 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:33 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:33 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:33 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:33 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.178 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Preparing to wait for external event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.180 2 DEBUG nova.virt.libvirt.vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.180 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.181 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.182 2 DEBUG os_vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62690261-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62690261-dd, col_values=(('external_ids', {'iface-id': '62690261-dde3-43ca-929a-e6b75a76bafb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:aa:77', 'vm-uuid': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:33 compute-0 NetworkManager[44949]: <info>  [1759846533.1900] manager: (tap62690261-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.198 2 INFO os_vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd')
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.263 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.264 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.265 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating image(s)
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.290 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.317 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.343 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.348 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.400 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.401 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.402 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:a5:aa:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.403 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Using config drive
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.429 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.439 2 DEBUG nova.policy [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff37c390826e43079eff2a1423ccc2b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.443 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.444 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.445 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.445 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.474 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.479 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 83645517-a08a-46d7-b715-15b5d7f078ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.519 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.561 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.603 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_requests' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.615 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.628 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.640 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.651 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.657 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance already shutdown.
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.664 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.673 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.675 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:32Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.676 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.677 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.678 2 DEBUG os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2781ab1e-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.694 2 INFO os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')
Oct 07 14:15:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 993 KiB/s rd, 3.2 MiB/s wr, 113 op/s
Oct 07 14:15:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/355181987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/419391028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.897 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 83645517-a08a-46d7-b715-15b5d7f078ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:33 compute-0 nova_compute[259550]: 2025-10-07 14:15:33.981 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] resizing rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.036 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.037 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.055 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.152 2 DEBUG nova.objects.instance [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'migration_context' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.169 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.169 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Ensure instance console log exists: /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.371 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating config drive at /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.379 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4duz0s0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.464 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting instance files /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.466 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deletion of /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del complete
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.529 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4duz0s0" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.558 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.562 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.663 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.664 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating image(s)
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.694 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.722 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.750 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.755 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.846 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.847 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.848 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.849 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.875 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:34 compute-0 nova_compute[259550]: 2025-10-07 14:15:34.879 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:34 compute-0 ceph-mon[74295]: pgmap v1529: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 993 KiB/s rd, 3.2 MiB/s wr, 113 op/s
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.013 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.014 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Deleting local config drive /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config because it was imported into RBD.
Oct 07 14:15:35 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 07 14:15:35 compute-0 kernel: tap62690261-dd: entered promiscuous mode
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.0813] manager: (tap62690261-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Oct 07 14:15:35 compute-0 ovn_controller[151684]: 2025-10-07T14:15:35Z|00466|binding|INFO|Claiming lport 62690261-dde3-43ca-929a-e6b75a76bafb for this chassis.
Oct 07 14:15:35 compute-0 ovn_controller[151684]: 2025-10-07T14:15:35Z|00467|binding|INFO|62690261-dde3-43ca-929a-e6b75a76bafb: Claiming fa:16:3e:a5:aa:77 10.100.0.3
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:35 compute-0 ovn_controller[151684]: 2025-10-07T14:15:35Z|00468|binding|INFO|Setting lport 62690261-dde3-43ca-929a-e6b75a76bafb ovn-installed in OVS
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:35 compute-0 systemd-udevd[320940]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:35 compute-0 systemd-machined[214580]: New machine qemu-63-instance-00000037.
Oct 07 14:15:35 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.1838] device (tap62690261-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.1845] device (tap62690261-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.242 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:aa:77 10.100.0.3'], port_security=['fa:16:3e:a5:aa:77 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=62690261-dde3-43ca-929a-e6b75a76bafb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:35 compute-0 ovn_controller[151684]: 2025-10-07T14:15:35Z|00469|binding|INFO|Setting lport 62690261-dde3-43ca-929a-e6b75a76bafb up in Southbound
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 62690261-dde3-43ca-929a-e6b75a76bafb in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.244 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.260 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ceb91f5-3fd7-43a3-a565-e5bb4a5d10ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.262 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ab4060-4d34-4517-8873-2b8a93459e5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10a71023-b845-4e47-8af5-abcc0e49622f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.285 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[77745322-a51c-42d8-9a5e-44d6b247735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.286 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.314 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.318 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[faba8609-3fb9-4b8a-8649-f77388739e24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.345 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3507b3e-a9c1-43da-9b20-ad0519908760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.3565] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.355 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45c520e6-1985-41f3-9b5a-8b83c927181a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 systemd-udevd[320942]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.392 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cfafe03a-2788-453c-8475-08784ec2b645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.397 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a23d3234-ba27-42a0-adcd-c7c544b2a00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.422 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.4238] device (tapb1d9f332-f0): carrier: link connected
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.428 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f0d6f-413a-4f24-99dd-269dc85bfd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df2ff0e4-84cf-4b7c-aaef-a5e9341cea92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321012, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4313708-4bfb-4ed4-a097-a1944c6a98ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711898, 'tstamp': 711898}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321030, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.495 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e986c676-5a4e-42c6-b78a-da84f7a78bf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321031, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.513 2 DEBUG nova.compute.manager [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.513 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG nova.compute.manager [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Processing event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[738c5bc2-ee47-446c-82d6-11f7bc1d888e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.558 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.559 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ensure instance console log exists: /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.560 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.560 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.561 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.564 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start _get_guest_xml network_info=[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.577 2 WARNING nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.583 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.584 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.588 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.593 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.617 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.620 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70c290dd-1518-4102-ab55-3a9a0584559a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:35 compute-0 NetworkManager[44949]: <info>  [1759846535.6254] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct 07 14:15:35 compute-0 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:35 compute-0 ovn_controller[151684]: 2025-10-07T14:15:35Z|00470|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.634 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[223e22bc-ada2-4a1c-b3a1-c4c68cd236cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.637 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:15:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.638 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846520.695742, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.699 2 INFO nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Stopped (Lifecycle Event)
Oct 07 14:15:35 compute-0 nova_compute[259550]: 2025-10-07 14:15:35.718 2 DEBUG nova.compute.manager [None req-88aa6437-bfbd-4482-8257-ba401c4bec9d - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 253 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.0 MiB/s wr, 172 op/s
Oct 07 14:15:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:36 compute-0 podman[321143]: 2025-10-07 14:15:36.069308923 +0000 UTC m=+0.076932563 container create 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:15:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949957800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.104 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:36 compute-0 podman[321143]: 2025-10-07 14:15:36.021412027 +0000 UTC m=+0.029035757 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:15:36 compute-0 systemd[1]: Started libpod-conmon-1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4.scope.
Oct 07 14:15:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.141 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be1fcb889aa3b4e4dd790a95fe0086d7925bd0a3e445f3c7ef44dfbecbd0e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.146 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:36 compute-0 podman[321143]: 2025-10-07 14:15:36.165826933 +0000 UTC m=+0.173450593 container init 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:15:36 compute-0 podman[321143]: 2025-10-07 14:15:36.173031433 +0000 UTC m=+0.180655073 container start 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.187 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.1392632, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Started (Lifecycle Event)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.193 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: 83e99e50-2115-4dee-9274-a2a6528a8a8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.197 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:36 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : New worker (321185) forked
Oct 07 14:15:36 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : Loading success.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.209 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.216 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.219 2 INFO nova.virt.libvirt.driver [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance spawned successfully.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.220 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.222 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.243 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.243 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.139543, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.243 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Paused (Lifecycle Event)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.249 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.249 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.251 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.260 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.263 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.2090194, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.263 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Resumed (Lifecycle Event)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.280 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.283 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.309 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.316 2 INFO nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 7.08 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.316 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.427 2 INFO nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 8.14 seconds to build instance.
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.446 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516785795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.690 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.692 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:34Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.692 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.693 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.695 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <uuid>a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</uuid>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <name>instance-00000035</name>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-2003953244</nova:name>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:35</nova:creationTime>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <nova:port uuid="2781ab1e-ba6c-4689-8da2-ddcf85b31ca8">
Oct 07 14:15:36 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="serial">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="uuid">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk">
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config">
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:d4:48:b2"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <target dev="tap2781ab1e-ba"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log" append="off"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:36 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:36 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:36 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:36 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:36 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Preparing to wait for external event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.697 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:34Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.697 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.698 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.698 2 DEBUG os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2781ab1e-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2781ab1e-ba, col_values=(('external_ids', {'iface-id': '2781ab1e-ba6c-4689-8da2-ddcf85b31ca8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:48:b2', 'vm-uuid': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:36 compute-0 NetworkManager[44949]: <info>  [1759846536.7072] manager: (tap2781ab1e-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.717 2 INFO os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.776 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:d4:48:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.777 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Using config drive
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.796 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.803 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.824 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:36 compute-0 nova_compute[259550]: 2025-10-07 14:15:36.853 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'keypairs' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:36 compute-0 ceph-mon[74295]: pgmap v1530: 305 pgs: 305 active+clean; 253 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.0 MiB/s wr, 172 op/s
Oct 07 14:15:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/949957800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/516785795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.198 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating config drive at /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.205 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuphkpol5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.360 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuphkpol5" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.387 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.391 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.579 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.581 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting local config drive /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config because it was imported into RBD.
Oct 07 14:15:37 compute-0 kernel: tap2781ab1e-ba: entered promiscuous mode
Oct 07 14:15:37 compute-0 NetworkManager[44949]: <info>  [1759846537.6340] manager: (tap2781ab1e-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Oct 07 14:15:37 compute-0 ovn_controller[151684]: 2025-10-07T14:15:37Z|00471|binding|INFO|Claiming lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for this chassis.
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:37 compute-0 ovn_controller[151684]: 2025-10-07T14:15:37Z|00472|binding|INFO|2781ab1e-ba6c-4689-8da2-ddcf85b31ca8: Claiming fa:16:3e:d4:48:b2 10.100.0.5
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.646 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.647 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:15:37 compute-0 NetworkManager[44949]: <info>  [1759846537.6591] device (tap2781ab1e-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:37 compute-0 NetworkManager[44949]: <info>  [1759846537.6603] device (tap2781ab1e-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:37 compute-0 ovn_controller[151684]: 2025-10-07T14:15:37Z|00473|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 ovn-installed in OVS
Oct 07 14:15:37 compute-0 ovn_controller[151684]: 2025-10-07T14:15:37Z|00474|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 up in Southbound
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.669 2 DEBUG nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.671 2 DEBUG nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.671 2 WARNING nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb for instance with vm_state active and task_state None.
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cc9f89-3107-441d-a1c0-ba6f35d55351]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 systemd-machined[214580]: New machine qemu-64-instance-00000035.
Oct 07 14:15:37 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000035.
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.716 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[114667b4-e893-4710-9943-73dcc12fdbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.720 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2796b6-1b21-4978-99af-15c227954500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 250 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.756 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b91c9d1-0fe0-41dc-80d6-cc2ba64452d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32afd1c3-a129-4dc8-8f5f-fd1b9b2934d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.798 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d448861-4ed4-426a-aaee-dc3194022d6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321301, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321301, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.800 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:37 compute-0 nova_compute[259550]: 2025-10-07 14:15:37.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.330 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.619 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.620 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846538.618657, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.620 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Started (Lifecycle Event)
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.636 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.640 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846538.619887, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.641 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Paused (Lifecycle Event)
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.663 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.682 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:15:38 compute-0 ceph-mon[74295]: pgmap v1531: 305 pgs: 305 active+clean; 250 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:15:38 compute-0 nova_compute[259550]: 2025-10-07 14:15:38.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.000 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.323 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.7 MiB/s wr, 211 op/s
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Processing event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.763 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.764 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.773 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846539.773405, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.774 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Resumed (Lifecycle Event)
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.788 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance spawned successfully.
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.788 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.805 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.812 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.817 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.819 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.819 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.909 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:39 compute-0 nova_compute[259550]: 2025-10-07 14:15:39.971 2 INFO nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] bringing vm to original state: 'stopped'
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.042 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.043 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.043 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.047 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.056 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:40 compute-0 kernel: tap2781ab1e-ba (unregistering): left promiscuous mode
Oct 07 14:15:40 compute-0 NetworkManager[44949]: <info>  [1759846540.0888] device (tap2781ab1e-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:40 compute-0 ovn_controller[151684]: 2025-10-07T14:15:40Z|00475|binding|INFO|Releasing lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 from this chassis (sb_readonly=0)
Oct 07 14:15:40 compute-0 ovn_controller[151684]: 2025-10-07T14:15:40Z|00476|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 down in Southbound
Oct 07 14:15:40 compute-0 ovn_controller[151684]: 2025-10-07T14:15:40Z|00477|binding|INFO|Removing iface tap2781ab1e-ba ovn-installed in OVS
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.114 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.115 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.116 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.135 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4a6ae7-b5c4-49d0-a6d7-82d9d0cef61e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 07 14:15:40 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000035.scope: Consumed 1.044s CPU time.
Oct 07 14:15:40 compute-0 systemd-machined[214580]: Machine qemu-64-instance-00000035 terminated.
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.175 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[698d6264-9ff1-43f1-b023-ddf98a3d3f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.182 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9ba94b-88fd-4a27-bf92-833ae1b7a2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.214 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f1c4bf-36a9-44a9-8ad7-a5a371fe7302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.234 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3524e2-938c-4238-9546-812e22a12920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321355, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.254 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[549dd7b7-c0e7-41ae-af3c-a09137b75dd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321356, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321356, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.258 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.267 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.268 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.268 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.289 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.290 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.347 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.376 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.376 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.377 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.453 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.538 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.539 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.541 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: 83e99e50-2115-4dee-9274-a2a6528a8a8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.543 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.568 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.570 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.572 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.572 2 WARNING nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state rebuild_spawning.
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.668 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.668 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.682 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.683 2 INFO nova.compute.claims [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:40 compute-0 nova_compute[259550]: 2025-10-07 14:15:40.850 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:40 compute-0 ceph-mon[74295]: pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.7 MiB/s wr, 211 op/s
Oct 07 14:15:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2412246010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.308 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.317 2 DEBUG nova.compute.provider_tree [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.336 2 DEBUG nova.scheduler.client.report [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.373 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.374 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.441 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.470 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.473 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.494 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.514 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.598 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.600 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.600 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating image(s)
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.627 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.650 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.677 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.681 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.727 2 DEBUG nova.policy [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.731 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.736 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 199 op/s
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.770 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.772 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.773 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.773 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.798 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.802 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cfd30417-ee01-41d3-8a93-e49cd960d338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.842 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:15:41 compute-0 nova_compute[259550]: 2025-10-07 14:15:41.843 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:15:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2412246010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:42 compute-0 podman[321485]: 2025-10-07 14:15:42.094444676 +0000 UTC m=+0.070415211 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.137 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cfd30417-ee01-41d3-8a93-e49cd960d338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:42 compute-0 podman[321486]: 2025-10-07 14:15:42.146937672 +0000 UTC m=+0.117536456 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.219 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.331 2 DEBUG nova.objects.instance [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.344 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.344 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Ensure instance console log exists: /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.345 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.345 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.346 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.802 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.803 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.824 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.886 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.888 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.895 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:42 compute-0 nova_compute[259550]: 2025-10-07 14:15:42.895 2 INFO nova.compute.claims [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:42 compute-0 ceph-mon[74295]: pgmap v1533: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 199 op/s
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.025 2 DEBUG nova.compute.manager [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.025 2 DEBUG nova.compute.manager [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.026 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.104 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.106 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.124 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.182 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Successfully created port: 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776255077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.599 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.606 2 DEBUG nova.compute.provider_tree [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.622 2 DEBUG nova.scheduler.client.report [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.844 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.846 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.900 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.901 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.917 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.935 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:43 compute-0 nova_compute[259550]: 2025-10-07 14:15:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/776255077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:43 compute-0 ceph-mon[74295]: pgmap v1534: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.007 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.067 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.068 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.068 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating image(s)
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.091 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.116 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.143 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.147 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.199 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Successfully updated port: 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.212 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.212 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.213 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.221 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.222 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.224 2 DEBUG nova.policy [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.226 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.228 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.229 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.229 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.255 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.261 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.311 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.312 2 INFO nova.compute.claims [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.532 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.574 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.622 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.704 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.809 2 DEBUG nova.objects.instance [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.838 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.839 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Ensure instance console log exists: /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.840 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.840 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.841 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.858 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:44 compute-0 nova_compute[259550]: 2025-10-07 14:15:44.859 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955181159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.053 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-83e99e50-2115-4dee-9274-a2a6528a8a8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.060 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.063 2 DEBUG nova.compute.provider_tree [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1955181159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.132 2 DEBUG nova.scheduler.client.report [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.170 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.172 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.217 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.218 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.239 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.257 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.335 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.337 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.337 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating image(s)
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.366 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.393 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.419 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.424 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.495 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.496 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.497 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.498 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.521 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.525 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 302 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.1 MiB/s wr, 205 op/s
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.865 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.946 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:45 compute-0 nova_compute[259550]: 2025-10-07 14:15:45.979 2 DEBUG nova.policy [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.038 2 DEBUG nova.objects.instance [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid b3d2cd05-012d-4189-bc6c-c40fc1f72c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.076 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.077 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Ensure instance console log exists: /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:46 compute-0 ceph-mon[74295]: pgmap v1535: 305 pgs: 305 active+clean; 302 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.1 MiB/s wr, 205 op/s
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.268 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.381 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.382 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance network_info: |[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.387 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start _get_guest_xml network_info=[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.395 2 WARNING nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.407 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.408 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.413 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.414 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.414 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.415 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.415 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.421 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.778 2 DEBUG nova.compute.manager [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-changed-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.779 2 DEBUG nova.compute.manager [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Refreshing instance network info cache due to event network-changed-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.780 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.780 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.781 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Refreshing network info cache for port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893354059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.872 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.901 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:46 compute-0 nova_compute[259550]: 2025-10-07 14:15:46.905 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.067 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully created port: 8718eef8-8e7a-42ab-8df9-b469e81779d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3893354059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.201 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.202 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.229 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.311 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.313 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.319 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.319 2 INFO nova.compute.claims [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.323 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.323 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.326 2 INFO nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Terminating instance
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.327 2 DEBUG nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.329 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.337 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.337 2 DEBUG nova.objects.instance [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.353 2 DEBUG nova.virt.libvirt.vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:40Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.354 2 DEBUG nova.network.os_vif_util [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.355 2 DEBUG nova.network.os_vif_util [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.356 2 DEBUG os_vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2781ab1e-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.361 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.361 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance network_info: |[{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.363 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.363 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.367 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start _get_guest_xml network_info=[{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.370 2 INFO os_vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.410 2 WARNING nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362211224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.424 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.425 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.429 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.430 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.431 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.431 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.437 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.488 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.499 2 DEBUG nova.virt.libvirt.vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:41Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.500 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.501 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.502 2 DEBUG nova.objects.instance [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:47 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.525 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <uuid>cfd30417-ee01-41d3-8a93-e49cd960d338</uuid>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <name>instance-00000039</name>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-3008329</nova:name>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:46</nova:creationTime>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <nova:port uuid="0b66f2d4-e098-4b4c-902f-2a9a2a9764cc">
Oct 07 14:15:47 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="serial">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="uuid">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk">
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config">
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:47 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:1e:6d:07"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <target dev="tap0b66f2d4-e0"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log" append="off"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:47 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:47 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:47 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:47 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:47 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.527 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Preparing to wait for external event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.529 2 DEBUG nova.virt.libvirt.vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:41Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.530 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.531 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.532 2 DEBUG os_vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b66f2d4-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b66f2d4-e0, col_values=(('external_ids', {'iface-id': '0b66f2d4-e098-4b4c-902f-2a9a2a9764cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:6d:07', 'vm-uuid': 'cfd30417-ee01-41d3-8a93-e49cd960d338'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 NetworkManager[44949]: <info>  [1759846547.5843] manager: (tap0b66f2d4-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.603 2 INFO os_vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.612 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Successfully created port: 41eef051-1c52-4c3c-9854-2ee923b4ab0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.686 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.686 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.687 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:1e:6d:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.688 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Using config drive
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.720 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:47 compute-0 nova_compute[259550]: 2025-10-07 14:15:47.726 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 350 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 172 op/s
Oct 07 14:15:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239535329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.038 2 INFO nova.virt.libvirt.driver [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting instance files /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.039 2 INFO nova.virt.libvirt.driver [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deletion of /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del complete
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.041 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.064 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.070 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1362211224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:48 compute-0 ceph-mon[74295]: pgmap v1536: 305 pgs: 305 active+clean; 350 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 172 op/s
Oct 07 14:15:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/239535329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.149 2 INFO nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.150 2 DEBUG oslo.service.loopingcall [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.151 2 DEBUG nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.151 2 DEBUG nova.network.neutron [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:15:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962877139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.279 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.287 2 DEBUG nova.compute.provider_tree [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.309 2 DEBUG nova.scheduler.client.report [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.352 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.352 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.414 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.414 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.440 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.461 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.496 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating config drive at /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.501 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjar80ncy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627039618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.562 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.565 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.565 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.566 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.568 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.568 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.569 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.569 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.570 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.570 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.571 2 DEBUG nova.objects.instance [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.593 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <uuid>83645517-a08a-46d7-b715-15b5d7f078ff</uuid>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <name>instance-00000038</name>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestMultiNic-server-1819774511</nova:name>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:47</nova:creationTime>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:user uuid="ff37c390826e43079eff2a1423ccc2b8">tempest-ServersTestMultiNic-1400500697-project-member</nova:user>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:project uuid="1a99ac1945604cf5a5a5bd917ea52280">tempest-ServersTestMultiNic-1400500697</nova:project>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:port uuid="2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b">
Oct 07 14:15:48 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.159" ipVersion="4"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:port uuid="83e99e50-2115-4dee-9274-a2a6528a8a8f">
Oct 07 14:15:48 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.1.194" ipVersion="4"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <nova:port uuid="c1ccd58c-dbf6-4d2c-9a75-1effb73b5105">
Oct 07 14:15:48 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.60" ipVersion="4"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="serial">83645517-a08a-46d7-b715-15b5d7f078ff</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="uuid">83645517-a08a-46d7-b715-15b5d7f078ff</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/83645517-a08a-46d7-b715-15b5d7f078ff_disk">
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/83645517-a08a-46d7-b715-15b5d7f078ff_disk.config">
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:48 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:7f:3a:0b"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <target dev="tap2c09dd65-3e"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e7:89:37"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <target dev="tap83e99e50-21"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:34:8f:80"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <target dev="tapc1ccd58c-db"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/console.log" append="off"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:48 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:48 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:48 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:48 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:48 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.595 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.598 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.598 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.599 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.599 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c09dd65-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c09dd65-3e, col_values=(('external_ids', {'iface-id': '2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:3a:0b', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 NetworkManager[44949]: <info>  [1759846548.6078] manager: (tap2c09dd65-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.611 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.612 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.613 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating image(s)
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.635 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.661 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.686 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.691 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.740 2 DEBUG nova.policy [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.744 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updated VIF entry in instance network info cache for port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.745 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.748 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updated VIF entry in instance network info cache for port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.749 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.750 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjar80ncy" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.752 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e')
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.753 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.753 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.754 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.754 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.784 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.788 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.832 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.834 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.834 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.835 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.835 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port 83e99e50-2115-4dee-9274-a2a6528a8a8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.837 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.838 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.839 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.865 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.870 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a23d6956-f85a-40b1-9e54-1b32d2af191e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83e99e50-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83e99e50-21, col_values=(('external_ids', {'iface-id': '83e99e50-2115-4dee-9274-a2a6528a8a8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:89:37', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 NetworkManager[44949]: <info>  [1759846548.9214] manager: (tap83e99e50-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.936 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21')
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.938 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-14005
00697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.938 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.939 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.939 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ccd58c-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ccd58c-db, col_values=(('external_ids', {'iface-id': 'c1ccd58c-dbf6-4d2c-9a75-1effb73b5105', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:8f:80', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 NetworkManager[44949]: <info>  [1759846548.9464] manager: (tapc1ccd58c-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:48 compute-0 nova_compute[259550]: 2025-10-07 14:15:48.961 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db')
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.028 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.028 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:7f:3a:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:e7:89:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:34:8f:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.030 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Using config drive
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.055 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2962877139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2627039618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.184 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.185 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Deleting local config drive /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config because it was imported into RBD.
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.2598] manager: (tap0b66f2d4-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Oct 07 14:15:49 compute-0 kernel: tap0b66f2d4-e0: entered promiscuous mode
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00478|binding|INFO|Claiming lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for this chassis.
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00479|binding|INFO|0b66f2d4-e098-4b4c-902f-2a9a2a9764cc: Claiming fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.292 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully updated port: 8718eef8-8e7a-42ab-8df9-b469e81779d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.295 2 DEBUG nova.network.neutron [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.292 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.293 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.295 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.308 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a23d6956-f85a-40b1-9e54-1b32d2af191e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:49 compute-0 systemd-udevd[322342]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d0f4c8-3a15-4de1-83ff-bbe5072627fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.310 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fd643de-a1 in ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.312 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fd643de-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c5a967-0611-438e-a4d6-98edc77fad1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e61ff0-c507-4201-a4be-b49a78119d6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 systemd-machined[214580]: New machine qemu-65-instance-00000039.
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.3288] device (tap0b66f2d4-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.327 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[955906e3-4916-4fa5-b11b-0f2ca244cb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.3346] device (tap0b66f2d4-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00480|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc ovn-installed in OVS
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00481|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc up in Southbound
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.349 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09e4b181-a6a0-4a2f-a688-1e122ac533d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.366 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.367 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.367 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.370 2 INFO nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 1.22 seconds to deallocate network for instance.
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.388 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e895022d-189e-4d89-a2c8-2966ce3385a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 systemd-udevd[322353]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.3951] manager: (tap4fd643de-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[940ec0d6-fd83-41de-ba98-9cc719983adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.437 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11fa80b3-f829-4549-b1cf-db03efdf1ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.441 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b12d9399-43d5-4b01-8107-98dc9e842703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.450 2 DEBUG nova.compute.manager [req-82a263af-dd33-4870-bde6-2db29037b276 req-a9d77ad3-c0b5-4396-9fa0-84045a3361dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-deleted-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.453 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.453 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.463 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.4682] device (tap4fd643de-a0): carrier: link connected
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.475 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f4cb31-0f43-42f5-afdd-e68f4ce9a51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.500 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b872e74-3aed-451b-9cfc-77c3104122e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322438, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.520 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20749739-7057-4dfe-8742-7b0a38890779]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:808e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713303, 'tstamp': 713303}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322442, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17f2ce83-2257-4d53-bb6b-24c19962bd9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322443, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.580 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c223f3-0d27-49bb-8e18-a07c316fbb11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.601 2 DEBUG nova.objects.instance [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:aa:77 10.100.0.3
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:aa:77 10.100.0.3
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.621 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.622 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Ensure instance console log exists: /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.622 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.623 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.623 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.634 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.644 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating config drive at /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.650 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64izqdxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d256280f-56a8-4f1f-b549-5c52c6deac7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:49 compute-0 NetworkManager[44949]: <info>  [1759846549.6602] manager: (tap4fd643de-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 07 14:15:49 compute-0 kernel: tap4fd643de-a0: entered promiscuous mode
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:49 compute-0 ovn_controller[151684]: 2025-10-07T14:15:49Z|00482|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.687 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Successfully updated port: 41eef051-1c52-4c3c-9854-2ee923b4ab0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.710 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.711 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[819930d1-871e-49cd-9c9a-6037b625cfc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.711 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:15:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.712 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'env', 'PROCESS_TAG=haproxy-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fd643de-a9bb-4c41-8437-fb901dfd8879.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:15:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 375 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.9 MiB/s wr, 245 op/s
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.757 2 DEBUG oslo_concurrency.processutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.794 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64izqdxs" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.822 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.829 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.876 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.877 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.878 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:49 compute-0 nova_compute[259550]: 2025-10-07 14:15:49.888 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Successfully created port: ae1b9c2d-384d-4134-8799-babeadd70605 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.034 2 DEBUG nova.compute.manager [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG nova.compute.manager [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Processing event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.036 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.037 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deleting local config drive /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config because it was imported into RBD.
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.0954] manager: (tap2c09dd65-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct 07 14:15:50 compute-0 systemd-udevd[322401]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:50 compute-0 kernel: tap2c09dd65-3e: entered promiscuous mode
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00483|binding|INFO|Claiming lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for this chassis.
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00484|binding|INFO|2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b: Claiming fa:16:3e:7f:3a:0b 10.100.0.159
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1138] manager: (tap83e99e50-21): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1166] device (tap2c09dd65-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1175] device (tap2c09dd65-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1370] manager: (tapc1ccd58c-db): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct 07 14:15:50 compute-0 ceph-mon[74295]: pgmap v1537: 305 pgs: 305 active+clean; 375 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.9 MiB/s wr, 245 op/s
Oct 07 14:15:50 compute-0 kernel: tapc1ccd58c-db: entered promiscuous mode
Oct 07 14:15:50 compute-0 kernel: tap83e99e50-21: entered promiscuous mode
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1658] device (tapc1ccd58c-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:50 compute-0 podman[322614]: 2025-10-07 14:15:50.166278368 +0000 UTC m=+0.066410585 container create 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1667] device (tap83e99e50-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1678] device (tapc1ccd58c-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.1682] device (tap83e99e50-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00485|if_status|INFO|Not updating pb chassis for c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 now as sb is readonly
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 systemd-machined[214580]: New machine qemu-66-instance-00000038.
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00486|binding|INFO|Claiming lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for this chassis.
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00487|binding|INFO|c1ccd58c-dbf6-4d2c-9a75-1effb73b5105: Claiming fa:16:3e:34:8f:80 10.100.0.60
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00488|binding|INFO|Claiming lport 83e99e50-2115-4dee-9274-a2a6528a8a8f for this chassis.
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00489|binding|INFO|83e99e50-2115-4dee-9274-a2a6528a8a8f: Claiming fa:16:3e:e7:89:37 10.100.1.194
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00490|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00491|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00492|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.197 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:3a:0b 10.100.0.159'], port_security=['fa:16:3e:7f:3a:0b 10.100.0.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.159/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:50 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000038.
Oct 07 14:15:50 compute-0 systemd[1]: Started libpod-conmon-5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f.scope.
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00493|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b ovn-installed in OVS
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00494|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b up in Southbound
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:50 compute-0 podman[322614]: 2025-10-07 14:15:50.136284085 +0000 UTC m=+0.036416322 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716b321e2ea49d08584ea3087661278ff343536565c536d17e2c3ecf1e952674/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:50 compute-0 podman[322614]: 2025-10-07 14:15:50.249746963 +0000 UTC m=+0.149879190 container init 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 14:15:50 compute-0 podman[322614]: 2025-10-07 14:15:50.257815036 +0000 UTC m=+0.157947253 container start 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00495|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f ovn-installed in OVS
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00496|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 ovn-installed in OVS
Oct 07 14:15:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:15:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125380320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.270 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:89:37 10.100.1.194'], port_security=['fa:16:3e:e7:89:37 10.100.1.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.194/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d44241-e806-45e5-b77b-78848bbeea79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=303ccb5e-5aaa-463a-b70f-452ecb37838d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=83e99e50-2115-4dee-9274-a2a6528a8a8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00497|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f up in Southbound
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00498|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 up in Southbound
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:8f:80 10.100.0.60'], port_security=['fa:16:3e:34:8f:80 10.100.0.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.60/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:50 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : New worker (322659) forked
Oct 07 14:15:50 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : Loading success.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.296 2 DEBUG oslo_concurrency.processutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.302 2 DEBUG nova.compute.provider_tree [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.348 2 DEBUG nova.scheduler.client.report [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb6f93-11fa-4791-b01b-b6c9f44b5c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.383 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7dfb1828-21 in ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.386 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7dfb1828-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.386 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb7714c-2c09-47cd-b863-dda6b15982de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a10bcde7-8879-45ce-bb94-70d7d884fe4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.408 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1e7dea-3816-4bb5-ae8d-8ad6d3d0eb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.414 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[477f8e7f-a465-43ca-a6de-dae3fb6aa3b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.469 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[664a44df-47ac-4176-ae29-7aa013d74cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.4798] manager: (tap7dfb1828-20): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.480 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7cceae-7042-4b47-8099-e0ecd8d05b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.519 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbfd050-f425-4807-a5f5-a9e648ddfe97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.524 2 INFO nova.scheduler.client.report [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance a8585c64-eb21-491a-9a4c-b9ac6e8e4a30
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.523 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c15e9f-22bd-4706-9c8f-418f651a8da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.5519] device (tap7dfb1828-20): carrier: link connected
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.557 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4352cd3-c485-406f-9e64-bcd174f63bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.568 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5682902, cfd30417-ee01-41d3-8a93-e49cd960d338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.569 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Started (Lifecycle Event)
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.571 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.574 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.581 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance spawned successfully.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.581 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[682732be-1de9-4427-809f-61379b203f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322719, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee503ee-1995-40af-8ce8-4fa528b87d4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:e5c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713411, 'tstamp': 713411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322722, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.611 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.611 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.612 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.612 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.613 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.613 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb890d4-dcb7-4b56-914e-de2537b1bf21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322723, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc9767a-96b2-48e1-ab17-877a5511a9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.669 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.673 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.730 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5691764, cfd30417-ee01-41d3-8a93-e49cd960d338 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.731 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Paused (Lifecycle Event)
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1286281f-569c-40b3-850e-d996d36e46f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 NetworkManager[44949]: <info>  [1759846550.7394] manager: (tap7dfb1828-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 07 14:15:50 compute-0 kernel: tap7dfb1828-20: entered promiscuous mode
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:50 compute-0 ovn_controller[151684]: 2025-10-07T14:15:50Z|00499|binding|INFO|Releasing lport 8933a3d5-743b-489b-a9ca-89380da9bbe0 from this chassis (sb_readonly=0)
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.765 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1fddc3-e2be-4e60-a0b2-bcd860f6e14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.769 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:15:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.770 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'env', 'PROCESS_TAG=haproxy-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.782 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.814 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.842 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.847 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5781906, cfd30417-ee01-41d3-8a93-e49cd960d338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.847 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Resumed (Lifecycle Event)
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.872 2 INFO nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 9.27 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.873 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.873 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.879 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.909 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.930 2 INFO nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 10.29 seconds to build instance.
Oct 07 14:15:50 compute-0 nova_compute[259550]: 2025-10-07 14:15:50.948 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3125380320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.165 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846551.165249, 83645517-a08a-46d7-b715-15b5d7f078ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.166 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Started (Lifecycle Event)
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.184 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.188 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846551.1664104, 83645517-a08a-46d7-b715-15b5d7f078ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Paused (Lifecycle Event)
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.213 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:51 compute-0 podman[322754]: 2025-10-07 14:15:51.209805015 +0000 UTC m=+0.053252418 container create 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.237 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:51 compute-0 systemd[1]: Started libpod-conmon-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope.
Oct 07 14:15:51 compute-0 podman[322754]: 2025-10-07 14:15:51.185566255 +0000 UTC m=+0.029013688 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:15:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785017db9e4b610117d840e5812177c07ef3c9b381f8f952e3b666c54f1281e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:51 compute-0 podman[322754]: 2025-10-07 14:15:51.3212742 +0000 UTC m=+0.164721643 container init 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:15:51 compute-0 podman[322754]: 2025-10-07 14:15:51.327987537 +0000 UTC m=+0.171434950 container start 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:15:51 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : New worker (322775) forked
Oct 07 14:15:51 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : Loading success.
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.398 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 83e99e50-2115-4dee-9274-a2a6528a8a8f in datapath 80d44241-e806-45e5-b77b-78848bbeea79 unbound from our chassis
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.400 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80d44241-e806-45e5-b77b-78848bbeea79
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[921dc001-de02-446c-ab5e-1d0eb565a3b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.417 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80d44241-e1 in ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.419 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80d44241-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1814eb37-2b59-4df5-a151-5935c40b3f61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c92640f-53c9-4aa3-806b-b37c7283bfbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.433 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[61c3a1be-0d13-4b6b-8946-9db15872fcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f79561c-4153-41e8-991e-9b9ad7c237b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bd044880-4959-41f9-adba-28bb3ca2d9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 NetworkManager[44949]: <info>  [1759846551.5024] manager: (tap80d44241-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.503 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfb6579-4666-470a-8d8b-9358ce3337aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.548 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bff7e7-daaf-4628-bbda-2f0fd11d6ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b730746c-9c8d-4176-8de8-d60df6e00840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 NetworkManager[44949]: <info>  [1759846551.5835] device (tap80d44241-e0): carrier: link connected
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.588 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b303d409-410f-453f-96d9-78690705dffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.611 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b88d986-3318-44d3-be7d-91617c4ab367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d44241-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:97:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713514, 'reachable_time': 41142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322794, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92abc7be-1e94-42fb-ade0-0909e83de93d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:97a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713514, 'tstamp': 713514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322795, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d282c8-9411-48f0-8947-323b0466cd47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d44241-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:97:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713514, 'reachable_time': 41142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322796, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fa19c4a5-6092-4202-9236-e127d01495f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.1 MiB/s wr, 219 op/s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c96d17-efdf-43a4-9991-eeaaf83b63dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.774 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d44241-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.775 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.775 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80d44241-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:51 compute-0 NetworkManager[44949]: <info>  [1759846551.7784] manager: (tap80d44241-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Oct 07 14:15:51 compute-0 kernel: tap80d44241-e0: entered promiscuous mode
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.780 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80d44241-e0, col_values=(('external_ids', {'iface-id': '5d22e327-7d41-463a-8ed7-53ae88715a72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:51 compute-0 ovn_controller[151684]: 2025-10-07T14:15:51Z|00500|binding|INFO|Releasing lport 5d22e327-7d41-463a-8ed7-53ae88715a72 from this chassis (sb_readonly=0)
Oct 07 14:15:51 compute-0 nova_compute[259550]: 2025-10-07 14:15:51.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.807 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.808 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd782d2-0c1c-4eef-b118-69398ae8652b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.809 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-80d44241-e806-45e5-b77b-78848bbeea79
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 80d44241-e806-45e5-b77b-78848bbeea79
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:15:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.811 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'env', 'PROCESS_TAG=haproxy-80d44241-e806-45e5-b77b-78848bbeea79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80d44241-e806-45e5-b77b-78848bbeea79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.061 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updated VIF entry in instance network info cache for port 83e99e50-2115-4dee-9274-a2a6528a8a8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.062 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.077 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.078 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.078 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.079 2 WARNING nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.081 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.081 2 WARNING nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.
Oct 07 14:15:52 compute-0 ceph-mon[74295]: pgmap v1538: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.1 MiB/s wr, 219 op/s
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.168 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.169 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.169 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:52 compute-0 podman[322829]: 2025-10-07 14:15:52.221405399 +0000 UTC m=+0.098043731 container create 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:15:52 compute-0 systemd[1]: Started libpod-conmon-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope.
Oct 07 14:15:52 compute-0 podman[322829]: 2025-10-07 14:15:52.180526928 +0000 UTC m=+0.057165290 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:15:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0beb1c026ee7bdf433c8b1469d1aea543db15d951ca29d849193a9e68ff92284/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:15:52 compute-0 podman[322829]: 2025-10-07 14:15:52.297136109 +0000 UTC m=+0.173774471 container init 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:15:52 compute-0 podman[322829]: 2025-10-07 14:15:52.308835468 +0000 UTC m=+0.185473800 container start 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:15:52 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : New worker (322851) forked
Oct 07 14:15:52 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : Loading success.
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.410 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.418 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.440 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47e25c95-35e6-48c9-9645-cc89719e8e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.470 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.476 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94b07a04-619d-4e1e-b3e2-857629cc1e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.482 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[827d5a3a-f7ee-471d-9c8a-45a24c4ff827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.517 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9c99a6a6-f24f-44dc-9268-1f41e330d3bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.540 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[094ebb99-e0c1-40f2-a03b-0a907def4ab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322865, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.559 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Successfully updated port: ae1b9c2d-384d-4134-8799-babeadd70605 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6af2dcb-67ca-4e78-bcfb-7e94fbfdfaf8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713425, 'tstamp': 713425}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322866, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713428, 'tstamp': 713428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322866, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.566 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.572 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.574 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.676 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.677 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.678 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.678 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.679 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.679 2 WARNING nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.679 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.680 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.681 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.683 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.683 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.686 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.686 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance network_info: |[{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.688 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.689 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.690 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.695 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start _get_guest_xml network_info=[{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.704 2 WARNING nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.716 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.717 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.722 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.724 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.725 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.725 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.726 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.727 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.727 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.728 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.728 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.729 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.729 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.730 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.730 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.731 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.736 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.780 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updating instance_info_cache with network_info: [{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.881 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.917 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.918 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance network_info: |[{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.921 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start _get_guest_xml network_info=[{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'd37bdf89-ce37-478a-af4d-2b9cd0435b79'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.927 2 WARNING nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.931 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.932 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.936 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.936 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.937 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.937 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.938 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.938 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.941 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.941 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:52 compute-0 nova_compute[259550]: 2025-10-07 14:15:52.945 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144808172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.224 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.247 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.252 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1144808172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069102547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.439 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.462 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.466 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 8.1 MiB/s wr, 181 op/s
Oct 07 14:15:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305087272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.794 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.797 2 DEBUG nova.virt.libvirt.vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.797 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.799 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.800 2 DEBUG nova.objects.instance [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.836 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <name>instance-0000003a</name>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:52</nova:creationTime>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct 07 14:15:53 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="serial">8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="uuid">8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk">
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config">
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:04:8c:cc"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <target dev="tap8718eef8-8e"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log" append="off"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:53 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:53 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:53 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:53 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:53 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.837 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Preparing to wait for external event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.842 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.843 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.843 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.844 2 DEBUG nova.virt.libvirt.vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.845 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.846 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.847 2 DEBUG os_vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.849 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8718eef8-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8718eef8-8e, col_values=(('external_ids', {'iface-id': '8718eef8-8e7a-42ab-8df9-b469e81779d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8c:cc', 'vm-uuid': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:53 compute-0 NetworkManager[44949]: <info>  [1759846553.8569] manager: (tap8718eef8-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.869 2 INFO os_vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e')
Oct 07 14:15:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623727302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.985 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.986 2 DEBUG nova.virt.libvirt.vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-346182642',display_name='tempest-ListServerFiltersTestJSON-instance-346182642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-346182642',id=59,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-bm2choqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:45Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=b3d2cd05-012d-4189-bc6c-c40fc1f72c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.987 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.988 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:53 compute-0 nova_compute[259550]: 2025-10-07 14:15:53.989 2 DEBUG nova.objects.instance [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3d2cd05-012d-4189-bc6c-c40fc1f72c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.074 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updating instance_info_cache with network_info: [{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.131 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <uuid>b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</uuid>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <name>instance-0000003b</name>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-346182642</nova:name>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:52</nova:creationTime>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <nova:port uuid="41eef051-1c52-4c3c-9854-2ee923b4ab0e">
Oct 07 14:15:54 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="serial">b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="uuid">b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk">
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config">
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:54 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:08:11:48"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <target dev="tap41eef051-1c"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/console.log" append="off"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:54 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:54 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:54 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:54 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:54 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.139 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Preparing to wait for external event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.140 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.141 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.141 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.143 2 DEBUG nova.virt.libvirt.vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-346182642',display_name='tempest-ListServerFiltersTestJSON-instance-346182642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-346182642',id=59,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-bm2choqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:45Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=b3d2cd05-012d-4189-bc6c-c40fc1f72c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.144 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.145 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.146 2 DEBUG os_vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.150 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.151 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance network_info: |[{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.155 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start _get_guest_xml network_info=[{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41eef051-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41eef051-1c, col_values=(('external_ids', {'iface-id': '41eef051-1c52-4c3c-9854-2ee923b4ab0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:11:48', 'vm-uuid': 'b3d2cd05-012d-4189-bc6c-c40fc1f72c0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:54 compute-0 NetworkManager[44949]: <info>  [1759846554.1597] manager: (tap41eef051-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.166 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.166 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.167 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:8c:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.167 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Using config drive
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.189 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.198 2 INFO os_vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c')
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.209 2 WARNING nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.222 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.223 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ccc66c07-b66e-4392-aa22-965659a0d015',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.231 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.234 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3069102547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:54 compute-0 ceph-mon[74295]: pgmap v1539: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 8.1 MiB/s wr, 181 op/s
Oct 07 14:15:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2305087272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3623727302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.303 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:08:11:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.305 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Using config drive
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.329 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198154260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.719 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.750 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.755 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.990 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:54 compute-0 nova_compute[259550]: 2025-10-07 14:15:54.991 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.013 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.014 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-changed-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.014 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Refreshing instance network info cache due to event network-changed-41eef051-1c52-4c3c-9854-2ee923b4ab0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Refreshing network info cache for port 41eef051-1c52-4c3c-9854-2ee923b4ab0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.080 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.083 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.083 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No event matching network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in dict_keys([('network-vif-plugged', '83e99e50-2115-4dee-9274-a2a6528a8a8f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.083 2 WARNING nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with vm_state building and task_state spawning.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.130 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating config drive at /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.137 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcef8k4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.191 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating config drive at /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.198 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbua35t5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778173598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No event matching network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in dict_keys([('network-vif-plugged', '83e99e50-2115-4dee-9274-a2a6528a8a8f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.252 2 WARNING nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with vm_state building and task_state spawning.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-changed-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Refreshing instance network info cache due to event network-changed-ae1b9c2d-384d-4134-8799-babeadd70605. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Refreshing network info cache for port ae1b9c2d-384d-4134-8799-babeadd70605 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.259 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.260 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.262 2 DEBUG nova.virt.libvirt.vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:48Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.263 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.263 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.265 2 DEBUG nova.objects.instance [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.266 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846555.2655506, 83645517-a08a-46d7-b715-15b5d7f078ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.267 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Resumed (Lifecycle Event)
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.269 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.274 2 INFO nova.virt.libvirt.driver [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance spawned successfully.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.275 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2198154260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1778173598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.289 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846540.2850916, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.289 2 INFO nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Stopped (Lifecycle Event)
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.292 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <uuid>a23d6956-f85a-40b1-9e54-1b32d2af191e</uuid>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <name>instance-0000003c</name>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <memory>196608</memory>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-289403182</nova:name>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:15:54</nova:creationTime>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.micro">
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:memory>192</nova:memory>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <nova:port uuid="ae1b9c2d-384d-4134-8799-babeadd70605">
Oct 07 14:15:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="serial">a23d6956-f85a-40b1-9e54-1b32d2af191e</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="uuid">a23d6956-f85a-40b1-9e54-1b32d2af191e</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a23d6956-f85a-40b1-9e54-1b32d2af191e_disk">
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config">
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:15:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:09:1b:e2"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <target dev="tapae1b9c2d-38"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/console.log" append="off"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:15:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:15:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:15:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:15:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:15:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.292 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Preparing to wait for external event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.virt.libvirt.vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:48Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.295 2 DEBUG os_vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.296 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.298 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcef8k4i" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.330 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.337 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.380 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbua35t5" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.415 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.421 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.473 2 DEBUG nova.compute.manager [None req-d5792eef-1d55-49bd-8c08-5b8ee2c23537 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.475 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1b9c2d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1b9c2d-38, col_values=(('external_ids', {'iface-id': 'ae1b9c2d-384d-4134-8799-babeadd70605', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:1b:e2', 'vm-uuid': 'a23d6956-f85a-40b1-9e54-1b32d2af191e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.4791] manager: (tapae1b9c2d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.485 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.493 2 INFO os_vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38')
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.498 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.498 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.499 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.527 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.528 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Deleting local config drive /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config because it was imported into RBD.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.539 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.577 2 INFO nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 22.31 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.578 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.597 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.598 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.598 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:09:1b:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.598 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Using config drive
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.5994] manager: (tap41eef051-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct 07 14:15:55 compute-0 kernel: tap41eef051-1c: entered promiscuous mode
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00501|binding|INFO|Claiming lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e for this chassis.
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00502|binding|INFO|41eef051-1c52-4c3c-9854-2ee923b4ab0e: Claiming fa:16:3e:08:11:48 10.100.0.8
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.627 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:11:48 10.100.0.8'], port_security=['fa:16:3e:08:11:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b3d2cd05-012d-4189-bc6c-c40fc1f72c0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=41eef051-1c52-4c3c-9854-2ee923b4ab0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.629 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 41eef051-1c52-4c3c-9854-2ee923b4ab0e in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.630 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:15:55 compute-0 systemd-udevd[323204]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00503|binding|INFO|Setting lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e ovn-installed in OVS
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00504|binding|INFO|Setting lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e up in Southbound
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.6551] device (tap41eef051-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.6557] device (tap41eef051-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d1058e-8b2d-476e-ae80-d70ae546a5d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.652 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:55 compute-0 systemd-machined[214580]: New machine qemu-67-instance-0000003b.
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[101dc38b-f377-4f4f-b35c-2a32021b9d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.699 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61b9b183-6deb-4cd7-b2b6-0e924dd80ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.719 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.720 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Deleting local config drive /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config because it was imported into RBD.
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.735 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1233b24e-e418-4817-9968-f8b482b530cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.2 MiB/s wr, 280 op/s
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.764 2 INFO nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 23.46 seconds to build instance.
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.763 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4df6ef-70d9-477b-bdb1-9644cdfef7f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323227, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.785 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0157e45c-1782-44dd-84b0-c4b9c1e8f9e4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323231, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323231, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.8103] manager: (tap8718eef8-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Oct 07 14:15:55 compute-0 kernel: tap8718eef8-8e: entered promiscuous mode
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00505|binding|INFO|Claiming lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 for this chassis.
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00506|binding|INFO|8718eef8-8e7a-42ab-8df9-b469e81779d9: Claiming fa:16:3e:04:8c:cc 10.100.0.13
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.8285] device (tap8718eef8-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:55 compute-0 NetworkManager[44949]: <info>  [1759846555.8294] device (tap8718eef8-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.833 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8c:cc 10.100.0.13'], port_security=['fa:16:3e:04:8c:cc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8718eef8-8e7a-42ab-8df9-b469e81779d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.834 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8718eef8-8e7a-42ab-8df9-b469e81779d9 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.836 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00507|binding|INFO|Setting lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 ovn-installed in OVS
Oct 07 14:15:55 compute-0 ovn_controller[151684]: 2025-10-07T14:15:55Z|00508|binding|INFO|Setting lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 up in Southbound
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 nova_compute[259550]: 2025-10-07 14:15:55.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:55 compute-0 systemd-machined[214580]: New machine qemu-68-instance-0000003a.
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67312a5e-6981-4e71-a1d1-1b8527b4bcad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003a.
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.902 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eec4375e-1a82-4e86-a398-ca3e7675f258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.905 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56dced3b-94cb-4d06-b803-cc116133102b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.944 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61ad6104-4994-498a-9aee-def5864402d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:15:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b414c6f6-0d33-42c5-bc5e-9af2573134a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323258, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4891d30-a519-4354-b387-24b241c54a9a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.003 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.011 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.012 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.272 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating config drive at /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.278 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4x6znez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:56 compute-0 ceph-mon[74295]: pgmap v1540: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.2 MiB/s wr, 280 op/s
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.425 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4x6znez" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.460 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.469 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.690 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.691 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Deleting local config drive /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config because it was imported into RBD.
Oct 07 14:15:56 compute-0 kernel: tapae1b9c2d-38: entered promiscuous mode
Oct 07 14:15:56 compute-0 NetworkManager[44949]: <info>  [1759846556.7641] manager: (tapae1b9c2d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct 07 14:15:56 compute-0 ovn_controller[151684]: 2025-10-07T14:15:56Z|00509|binding|INFO|Claiming lport ae1b9c2d-384d-4134-8799-babeadd70605 for this chassis.
Oct 07 14:15:56 compute-0 ovn_controller[151684]: 2025-10-07T14:15:56Z|00510|binding|INFO|ae1b9c2d-384d-4134-8799-babeadd70605: Claiming fa:16:3e:09:1b:e2 10.100.0.12
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 NetworkManager[44949]: <info>  [1759846556.7820] device (tapae1b9c2d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:15:56 compute-0 NetworkManager[44949]: <info>  [1759846556.7828] device (tapae1b9c2d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 ovn_controller[151684]: 2025-10-07T14:15:56Z|00511|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 ovn-installed in OVS
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:56 compute-0 ovn_controller[151684]: 2025-10-07T14:15:56Z|00512|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 up in Southbound
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.869 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:1b:e2 10.100.0.12'], port_security=['fa:16:3e:09:1b:e2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a23d6956-f85a-40b1-9e54-1b32d2af191e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ae1b9c2d-384d-4134-8799-babeadd70605) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.871 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ae1b9c2d-384d-4134-8799-babeadd70605 in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.873 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:15:56 compute-0 systemd-machined[214580]: New machine qemu-69-instance-0000003c.
Oct 07 14:15:56 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.900 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846556.8991299, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.900 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Started (Lifecycle Event)
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40eb2261-006f-4ffa-809f-1fd44553adf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.951 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.956 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ec715a3b-b0ad-488a-88c8-56b383f2288b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.963 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7748904-beb8-41b9-9155-6f15c1ce16ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.963 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846556.8996463, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.964 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Paused (Lifecycle Event)
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:56 compute-0 nova_compute[259550]: 2025-10-07 14:15:56.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.004 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4be01af-f53a-44f4-ba40-eccb77f576bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f926b2c-e042-4ff0-84ff-9e114030d3cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323414, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcf99c4-270a-42bb-bdc8-fac3b6108f0f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323415, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323415, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.070 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.073 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.074 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.074 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:15:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.130 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.187 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.1872563, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Started (Lifecycle Event)
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.1875792, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Paused (Lifecycle Event)
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.244 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.270 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.740 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.7395525, a23d6956-f85a-40b1-9e54-1b32d2af191e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.741 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Started (Lifecycle Event)
Oct 07 14:15:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.762 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.767 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.7397404, a23d6956-f85a-40b1-9e54-1b32d2af191e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.767 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Paused (Lifecycle Event)
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.786 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.790 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:57 compute-0 nova_compute[259550]: 2025-10-07 14:15:57.821 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.103 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.103 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Processing event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] No waiting events found dispatching network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.105 2 WARNING nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received unexpected event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e for instance with vm_state building and task_state spawning.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.106 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.109 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846558.109567, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.110 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Resumed (Lifecycle Event)
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.113 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.117 2 INFO nova.virt.libvirt.driver [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance spawned successfully.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.117 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.141 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.156 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.161 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.161 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.163 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.189 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.197 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Processing event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.199 2 WARNING nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 for instance with vm_state building and task_state spawning.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.200 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.218 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846558.2131026, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.218 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Resumed (Lifecycle Event)
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.221 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.232 2 INFO nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 12.90 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.233 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.237 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance spawned successfully.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.237 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.287 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.292 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.294 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.294 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.333 2 INFO nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 14.14 seconds to build instance.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.355 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.361 2 INFO nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 14.29 seconds to spawn the instance on the hypervisor.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.361 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.433 2 INFO nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 15.56 seconds to build instance.
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.449 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:58 compute-0 ceph-mon[74295]: pgmap v1541: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.948 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updated VIF entry in instance network info cache for port 41eef051-1c52-4c3c-9854-2ee923b4ab0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.948 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updating instance_info_cache with network_info: [{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:58 compute-0 nova_compute[259550]: 2025-10-07 14:15:58.966 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.046 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updated VIF entry in instance network info cache for port ae1b9c2d-384d-4134-8799-babeadd70605. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.047 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updating instance_info_cache with network_info: [{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.063 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.063 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:15:59 compute-0 nova_compute[259550]: 2025-10-07 14:15:59.065 2 WARNING nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with vm_state building and task_state spawning.
Oct 07 14:15:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Oct 07 14:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.047 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.048 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.050 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:00 compute-0 podman[323459]: 2025-10-07 14:16:00.195512029 +0000 UTC m=+0.130665213 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:16:00 compute-0 podman[323458]: 2025-10-07 14:16:00.20124779 +0000 UTC m=+0.129276616 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:00 compute-0 ceph-mon[74295]: pgmap v1542: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.859 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.860 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.861 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.866 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.869 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Processing event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.873 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.875 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.877 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.880 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.882 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] No waiting events found dispatching network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.883 2 WARNING nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received unexpected event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 for instance with vm_state building and task_state spawning.
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.891 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.908 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846560.8999472, a23d6956-f85a-40b1-9e54-1b32d2af191e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Resumed (Lifecycle Event)
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.911 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.917 2 INFO nova.virt.libvirt.driver [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance spawned successfully.
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.918 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.944 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.954 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.956 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.957 2 INFO nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Terminating instance
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.958 2 DEBUG nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.962 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.967 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.967 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.968 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.968 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.969 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 nova_compute[259550]: 2025-10-07 14:16:00.969 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.002 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:16:01 compute-0 kernel: tap2c09dd65-3e (unregistering): left promiscuous mode
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.032 2 INFO nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 12.42 seconds to spawn the instance on the hypervisor.
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.033 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:01 compute-0 NetworkManager[44949]: <info>  [1759846561.0372] device (tap2c09dd65-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00513|binding|INFO|Releasing lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b from this chassis (sb_readonly=0)
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00514|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b down in Southbound
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00515|binding|INFO|Removing iface tap2c09dd65-3e ovn-installed in OVS
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.067 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:3a:0b 10.100.0.159'], port_security=['fa:16:3e:7f:3a:0b 10.100.0.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.159/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.068 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.073 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct 07 14:16:01 compute-0 kernel: tap83e99e50-21 (unregistering): left promiscuous mode
Oct 07 14:16:01 compute-0 NetworkManager[44949]: <info>  [1759846561.0814] device (tap83e99e50-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00516|binding|INFO|Releasing lport 83e99e50-2115-4dee-9274-a2a6528a8a8f from this chassis (sb_readonly=0)
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00517|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f down in Southbound
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00518|binding|INFO|Removing iface tap83e99e50-21 ovn-installed in OVS
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.103 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99f9f42e-3a8b-4876-b8e7-23be12f86a6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.109 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:89:37 10.100.1.194'], port_security=['fa:16:3e:e7:89:37 10.100.1.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.194/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d44241-e806-45e5-b77b-78848bbeea79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=303ccb5e-5aaa-463a-b70f-452ecb37838d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=83e99e50-2115-4dee-9274-a2a6528a8a8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:01 compute-0 kernel: tapc1ccd58c-db (unregistering): left promiscuous mode
Oct 07 14:16:01 compute-0 NetworkManager[44949]: <info>  [1759846561.1251] device (tapc1ccd58c-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.126 2 INFO nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 13.84 seconds to build instance.
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00519|binding|INFO|Releasing lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 from this chassis (sb_readonly=0)
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00520|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 down in Southbound
Oct 07 14:16:01 compute-0 ovn_controller[151684]: 2025-10-07T14:16:01Z|00521|binding|INFO|Removing iface tapc1ccd58c-db ovn-installed in OVS
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.156 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.161 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:8f:80 10.100.0.60'], port_security=['fa:16:3e:34:8f:80 10.100.0.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.60/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.176 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[726fa838-4fda-4e15-aee7-113108e03664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.183 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[da9189d8-d447-4c29-9ddb-aa700e952981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 07 14:16:01 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000038.scope: Consumed 6.462s CPU time.
Oct 07 14:16:01 compute-0 systemd-machined[214580]: Machine qemu-66-instance-00000038 terminated.
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.239 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dbeb0cd1-f276-411f-87d9-f74ed2f6a84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1af95b-9d3b-4213-a0ea-5203fe3a292d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323515, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5047ff30-1aea-4bf3-aafd-2cf8a6abe863]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713425, 'tstamp': 713425}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323516, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713428, 'tstamp': 713428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323516, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.292 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.308 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.310 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.311 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 83e99e50-2115-4dee-9274-a2a6528a8a8f in datapath 80d44241-e806-45e5-b77b-78848bbeea79 unbound from our chassis
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.313 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80d44241-e806-45e5-b77b-78848bbeea79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb0fda1-3c1d-4927-b792-a9a3f054ed7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.315 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 namespace which is not needed anymore
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 NetworkManager[44949]: <info>  [1759846561.4078] manager: (tap83e99e50-21): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Oct 07 14:16:01 compute-0 NetworkManager[44949]: <info>  [1759846561.4192] manager: (tapc1ccd58c-db): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.443 2 INFO nova.virt.libvirt.driver [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance destroyed successfully.
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.444 2 DEBUG nova.objects.instance [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'resources' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.458 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.460 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.461 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.461 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c09dd65-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.483 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e')
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.485 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.485 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.486 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.487 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83e99e50-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.500 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21')
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.501 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.501 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.502 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.502 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ccd58c-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.507 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db')
Oct 07 14:16:01 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:01 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:01 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [WARNING]  (322849) : Exiting Master process...
Oct 07 14:16:01 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [ALERT]    (322849) : Current worker (322851) exited with code 143 (Terminated)
Oct 07 14:16:01 compute-0 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [WARNING]  (322849) : All workers exited. Exiting... (0)
Oct 07 14:16:01 compute-0 systemd[1]: libpod-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope: Deactivated successfully.
Oct 07 14:16:01 compute-0 podman[323566]: 2025-10-07 14:16:01.56248613 +0000 UTC m=+0.063285013 container died 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0beb1c026ee7bdf433c8b1469d1aea543db15d951ca29d849193a9e68ff92284-merged.mount: Deactivated successfully.
Oct 07 14:16:01 compute-0 podman[323566]: 2025-10-07 14:16:01.644485336 +0000 UTC m=+0.145284179 container cleanup 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:16:01 compute-0 systemd[1]: libpod-conmon-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope: Deactivated successfully.
Oct 07 14:16:01 compute-0 podman[323611]: 2025-10-07 14:16:01.745902075 +0000 UTC m=+0.067470513 container remove 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.9 MiB/s wr, 327 op/s
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[602b881e-c047-4d83-9fab-737b2b31951c]: (4, ('Tue Oct  7 02:16:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 (61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14)\n61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14\nTue Oct  7 02:16:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 (61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14)\n61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8b6b2b-8a65-428e-a968-368e8c0503d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d44241-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:01 compute-0 kernel: tap80d44241-e0: left promiscuous mode
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.790 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66179e6e-710f-4d1a-9b4a-698df8bd302a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 nova_compute[259550]: 2025-10-07 14:16:01.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f92cf045-a484-4413-9b57-5338be3245c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0059ec0-6e2f-449d-93e2-206627fdd4ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.837 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91e8c480-4718-4410-b50b-37f7f0af9aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713505, 'reachable_time': 26007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323627, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.844 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d80d44241\x2de806\x2d45e5\x2db77b\x2d78848bbeea79.mount: Deactivated successfully.
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.844 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2054a9d3-004e-4a1f-998f-318ae9b27028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.845 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.847 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5616f83e-f6f1-4002-b2c6-918304c70ceb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.850 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 namespace which is not needed anymore
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.027 2 INFO nova.virt.libvirt.driver [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deleting instance files /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff_del
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.028 2 INFO nova.virt.libvirt.driver [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deletion of /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff_del complete
Oct 07 14:16:02 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:02 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:02 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [WARNING]  (322773) : Exiting Master process...
Oct 07 14:16:02 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [ALERT]    (322773) : Current worker (322775) exited with code 143 (Terminated)
Oct 07 14:16:02 compute-0 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [WARNING]  (322773) : All workers exited. Exiting... (0)
Oct 07 14:16:02 compute-0 systemd[1]: libpod-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope: Deactivated successfully.
Oct 07 14:16:02 compute-0 podman[323644]: 2025-10-07 14:16:02.058534373 +0000 UTC m=+0.068760707 container died 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.078 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.079 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.079 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.094 2 INFO nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG oslo.service.loopingcall [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-785017db9e4b610117d840e5812177c07ef3c9b381f8f952e3b666c54f1281e4-merged.mount: Deactivated successfully.
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG nova.network.neutron [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:02 compute-0 podman[323644]: 2025-10-07 14:16:02.107453866 +0000 UTC m=+0.117680200 container cleanup 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:16:02 compute-0 systemd[1]: libpod-conmon-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope: Deactivated successfully.
Oct 07 14:16:02 compute-0 podman[323671]: 2025-10-07 14:16:02.202015754 +0000 UTC m=+0.060177900 container remove 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8f079f-d756-44de-80c8-dcf8e47c78ac]: (4, ('Tue Oct  7 02:16:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 (7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263)\n7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263\nTue Oct  7 02:16:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 (7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263)\n7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4737c261-2d86-4e24-9543-a4f13e4382df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.212 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:02 compute-0 kernel: tap7dfb1828-20: left promiscuous mode
Oct 07 14:16:02 compute-0 nova_compute[259550]: 2025-10-07 14:16:02.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.246 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ba97e7-f04e-4565-9f9e-235f98ed35f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b320e4fb-f99a-4954-be09-7a797b4369ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7eaf56e-340d-485c-ad81-357a68563734]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[209a31bb-14a6-45bb-84ed-29c6a949e684]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713402, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323684, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.293 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.294 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[af94bda6-8dea-4370-a23b-07b32ef37223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d7dfb1828\x2d2cb7\x2d4626\x2d9426\x2decd9cd6a2b51.mount: Deactivated successfully.
Oct 07 14:16:02 compute-0 ceph-mon[74295]: pgmap v1543: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.9 MiB/s wr, 327 op/s
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.106 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.106 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:03 compute-0 nova_compute[259550]: 2025-10-07 14:16:03.108 2 WARNING nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with vm_state active and task_state deleting.
Oct 07 14:16:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.2 MiB/s wr, 282 op/s
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.373 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.373 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.374 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.374 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.375 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.375 2 WARNING nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with vm_state active and task_state deleting.
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.376 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.377 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.377 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.378 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.379 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.379 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.380 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.381 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.381 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.382 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.382 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:04 compute-0 nova_compute[259550]: 2025-10-07 14:16:04.383 2 WARNING nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with vm_state active and task_state deleting.
Oct 07 14:16:04 compute-0 ceph-mon[74295]: pgmap v1544: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.2 MiB/s wr, 282 op/s
Oct 07 14:16:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:05.584 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:05.586 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.621 2 DEBUG nova.network.neutron [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.643 2 INFO nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 3.55 seconds to deallocate network for instance.
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.700 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.700 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 407 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 1.2 MiB/s wr, 370 op/s
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.851 2 DEBUG nova.compute.manager [req-4ba8ac50-150c-4637-a6d2-c621395491c4 req-966faf70-c734-4376-bd1e-233774ec7775 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.851 2 DEBUG nova.compute.manager [req-4ba8ac50-150c-4637-a6d2-c621395491c4 req-966faf70-c734-4376-bd1e-233774ec7775 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:05 compute-0 nova_compute[259550]: 2025-10-07 14:16:05.916 2 DEBUG oslo_concurrency.processutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332087720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.459 2 DEBUG oslo_concurrency.processutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.467 2 DEBUG nova.compute.provider_tree [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.495 2 DEBUG nova.scheduler.client.report [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.520 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.556 2 INFO nova.scheduler.client.report [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Deleted allocations for instance 83645517-a08a-46d7-b715-15b5d7f078ff
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.633 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.758 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.759 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.780 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:16:06 compute-0 ceph-mon[74295]: pgmap v1545: 305 pgs: 305 active+clean; 407 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 1.2 MiB/s wr, 370 op/s
Oct 07 14:16:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1332087720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.991 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:06 compute-0 nova_compute[259550]: 2025-10-07 14:16:06.992 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.000 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.001 2 INFO nova.compute.claims [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.224 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815651720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.751 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.757 2 DEBUG nova.compute.provider_tree [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 386 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 59 KiB/s wr, 316 op/s
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.774 2 DEBUG nova.scheduler.client.report [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.796 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.798 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.840 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.840 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.864 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:16:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3815651720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.880 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.960 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.990 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.991 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:16:07 compute-0 nova_compute[259550]: 2025-10-07 14:16:07.992 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating image(s)
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.017 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.042 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.089 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.093 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.133 2 DEBUG nova.policy [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.178 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.179 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.179 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.180 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:08 compute-0 ovn_controller[151684]: 2025-10-07T14:16:08Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:16:08 compute-0 ovn_controller[151684]: 2025-10-07T14:16:08Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.205 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.210 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.509 2 DEBUG nova.compute.manager [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG nova.compute.manager [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.511 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.530 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.614 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:16:08 compute-0 ovn_controller[151684]: 2025-10-07T14:16:08Z|00522|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct 07 14:16:08 compute-0 ovn_controller[151684]: 2025-10-07T14:16:08Z|00523|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:16:08 compute-0 ovn_controller[151684]: 2025-10-07T14:16:08Z|00524|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.762 2 DEBUG nova.objects.instance [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.790 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.791 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Ensure instance console log exists: /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:08 compute-0 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:08 compute-0 ceph-mon[74295]: pgmap v1546: 305 pgs: 305 active+clean; 386 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 59 KiB/s wr, 316 op/s
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.103 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Successfully created port: ac811e84-843c-4265-b536-c653f6135295 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.154 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.155 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.155 2 DEBUG nova.objects.instance [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:09.590 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.608 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.610 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.629 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 428 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.5 MiB/s wr, 372 op/s
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.804 2 DEBUG nova.objects.instance [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:09 compute-0 nova_compute[259550]: 2025-10-07 14:16:09.818 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.144 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Successfully updated port: ac811e84-843c-4265-b536-c653f6135295 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.154 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.154 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.160 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.160 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.161 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.181 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.187 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.187 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.223 2 DEBUG nova.compute.manager [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-changed-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.223 2 DEBUG nova.compute.manager [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing instance network info cache due to event network-changed-ac811e84-843c-4265-b536-c653f6135295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.224 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.307 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.342 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.342 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.343 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.351 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.352 2 DEBUG nova.objects.instance [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'flavor' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.382 2 DEBUG nova.virt.libvirt.driver [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.449 2 DEBUG nova.policy [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:16:10 compute-0 ceph-mon[74295]: pgmap v1547: 305 pgs: 305 active+clean; 428 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.5 MiB/s wr, 372 op/s
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.compute.manager [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.compute.manager [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:10 compute-0 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.127 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance network_info: |[{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.157 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing network info cache for port ac811e84-843c-4265-b536-c653f6135295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.159 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start _get_guest_xml network_info=[{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.167 2 WARNING nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.172 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.172 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.182 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.186 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.186 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.188 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.531 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.532 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.548 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.549 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701021763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.701 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.724 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.728 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 331 op/s
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.817 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully updated port: 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:16:11 compute-0 nova_compute[259550]: 2025-10-07 14:16:11.832 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3701021763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969290928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.218 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.220 2 DEBUG nova.virt.libvirt.vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:07Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.220 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.226 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.228 2 DEBUG nova.objects.instance [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.246 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <uuid>c14b06ec-ce54-4081-8d72-b22529c3b0b7</uuid>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <name>instance-0000003d</name>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherA-server-203431907</nova:name>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:16:11</nova:creationTime>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <nova:port uuid="ac811e84-843c-4265-b536-c653f6135295">
Oct 07 14:16:12 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="serial">c14b06ec-ce54-4081-8d72-b22529c3b0b7</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="uuid">c14b06ec-ce54-4081-8d72-b22529c3b0b7</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk">
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config">
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:66:97:c9"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <target dev="tapac811e84-84"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/console.log" append="off"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:16:12 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:16:12 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:12 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:12 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:12 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.253 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Preparing to wait for external event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.254 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.255 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.255 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.256 2 DEBUG nova.virt.libvirt.vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:07Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.257 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.258 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.259 2 DEBUG os_vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac811e84-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac811e84-84, col_values=(('external_ids', {'iface-id': 'ac811e84-843c-4265-b536-c653f6135295', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:97:c9', 'vm-uuid': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 NetworkManager[44949]: <info>  [1759846572.2704] manager: (tapac811e84-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.278 2 INFO os_vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84')
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.313 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.314 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.328 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.329 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.330 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:66:97:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.356 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Using config drive
Oct 07 14:16:12 compute-0 podman[323960]: 2025-10-07 14:16:12.402768166 +0000 UTC m=+0.085053848 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.407 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:12 compute-0 podman[323961]: 2025-10-07 14:16:12.470144916 +0000 UTC m=+0.150995690 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.475 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updated VIF entry in instance network info cache for port ac811e84-843c-4265-b536-c653f6135295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.476 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.492 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.501 2 WARNING nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:16:12 compute-0 kernel: tap0b66f2d4-e0 (unregistering): left promiscuous mode
Oct 07 14:16:12 compute-0 NetworkManager[44949]: <info>  [1759846572.7148] device (tap0b66f2d4-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:12 compute-0 ovn_controller[151684]: 2025-10-07T14:16:12Z|00525|binding|INFO|Releasing lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc from this chassis (sb_readonly=0)
Oct 07 14:16:12 compute-0 ovn_controller[151684]: 2025-10-07T14:16:12Z|00526|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc down in Southbound
Oct 07 14:16:12 compute-0 ovn_controller[151684]: 2025-10-07T14:16:12Z|00527|binding|INFO|Removing iface tap0b66f2d4-e0 ovn-installed in OVS
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.743 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.744 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.746 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.771 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4153ad-f381-4f17-b4b7-0cdf831cc7eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 07 14:16:12 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 17.283s CPU time.
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.809 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4a454797-f62f-4aee-9e94-652aba1cea5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 systemd-machined[214580]: Machine qemu-65-instance-00000039 terminated.
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.815 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6ebb83-c427-4631-b293-80c232062570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.842 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9224cce6-a9d0-4f77-bb56-e50f7d72c801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2073594-0ae4-48b3-bda7-fcca755e08b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324034, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.880 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78498cbc-833f-4ad7-b38d-422b4feea01c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324035, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324035, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.882 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.893 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.894 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.894 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.895 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.907 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating config drive at /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config
Oct 07 14:16:12 compute-0 ceph-mon[74295]: pgmap v1548: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 331 op/s
Oct 07 14:16:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3969290928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.920 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_gwamto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:12 compute-0 nova_compute[259550]: 2025-10-07 14:16:12.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.069 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_gwamto" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.109 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.113 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.164 2 DEBUG nova.compute.manager [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.165 2 DEBUG nova.compute.manager [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.165 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.310 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.311 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deleting local config drive /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config because it was imported into RBD.
Oct 07 14:16:13 compute-0 NetworkManager[44949]: <info>  [1759846573.3809] manager: (tapac811e84-84): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 07 14:16:13 compute-0 systemd-udevd[324025]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:13 compute-0 kernel: tapac811e84-84: entered promiscuous mode
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 ovn_controller[151684]: 2025-10-07T14:16:13Z|00528|binding|INFO|Claiming lport ac811e84-843c-4265-b536-c653f6135295 for this chassis.
Oct 07 14:16:13 compute-0 ovn_controller[151684]: 2025-10-07T14:16:13Z|00529|binding|INFO|ac811e84-843c-4265-b536-c653f6135295: Claiming fa:16:3e:66:97:c9 10.100.0.6
Oct 07 14:16:13 compute-0 NetworkManager[44949]: <info>  [1759846573.4027] device (tapac811e84-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:13 compute-0 NetworkManager[44949]: <info>  [1759846573.4079] device (tapac811e84-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.410 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:97:c9 10.100.0.6'], port_security=['fa:16:3e:66:97:c9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ac811e84-843c-4265-b536-c653f6135295) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.411 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ac811e84-843c-4265-b536-c653f6135295 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.414 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 ovn_controller[151684]: 2025-10-07T14:16:13Z|00530|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 ovn-installed in OVS
Oct 07 14:16:13 compute-0 ovn_controller[151684]: 2025-10-07T14:16:13Z|00531|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 up in Southbound
Oct 07 14:16:13 compute-0 systemd-machined[214580]: New machine qemu-70-instance-0000003d.
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.436 2 INFO nova.virt.libvirt.driver [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance shutdown successfully after 3 seconds.
Oct 07 14:16:13 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.443 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance destroyed successfully.
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.443 2 DEBUG nova.objects.instance [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.443 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8df42042-2036-4ab0-b452-207d66bae9ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.460 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.503 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.508 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c53b23-9e5e-47b7-9913-c61e29f91cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.513 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5758e32-d28e-4dd9-b996-1bb1273d67d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.549 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[248e213f-dd23-481f-9e2a-526dda93297b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.572 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53704c9d-dbe5-4764-99f0-060f999294fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324117, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[066740e7-624e-4965-b4e9-f8f3c165b121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324118, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324118, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 nova_compute[259550]: 2025-10-07 14:16:13.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.605 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.607 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.462 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.524 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.525 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.526 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.529 2 DEBUG nova.virt.libvirt.vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.530 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.530 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.531 2 DEBUG os_vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b8e1852-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b8e1852-2a, col_values=(('external_ids', {'iface-id': '1b8e1852-2a5e-4c50-9ab0-110dfb492a49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ab:6d', 'vm-uuid': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 NetworkManager[44949]: <info>  [1759846574.5388] manager: (tap1b8e1852-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.552 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846574.5500708, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.552 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Started (Lifecycle Event)
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.556 2 INFO os_vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.557 2 DEBUG nova.virt.libvirt.vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.557 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.558 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.561 2 DEBUG nova.virt.libvirt.guest [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:14 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:16:14 compute-0 NetworkManager[44949]: <info>  [1759846574.5715] manager: (tap1b8e1852-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.572 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:14 compute-0 kernel: tap1b8e1852-2a: entered promiscuous mode
Oct 07 14:16:14 compute-0 ovn_controller[151684]: 2025-10-07T14:16:14Z|00532|binding|INFO|Claiming lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for this chassis.
Oct 07 14:16:14 compute-0 ovn_controller[151684]: 2025-10-07T14:16:14Z|00533|binding|INFO|1b8e1852-2a5e-4c50-9ab0-110dfb492a49: Claiming fa:16:3e:1e:ab:6d 10.100.0.9
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 NetworkManager[44949]: <info>  [1759846574.5897] device (tap1b8e1852-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:14 compute-0 NetworkManager[44949]: <info>  [1759846574.5904] device (tap1b8e1852-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.588 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.591 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.593 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846574.5524998, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.594 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Paused (Lifecycle Event)
Oct 07 14:16:14 compute-0 ovn_controller[151684]: 2025-10-07T14:16:14Z|00534|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 ovn-installed in OVS
Oct 07 14:16:14 compute-0 ovn_controller[151684]: 2025-10-07T14:16:14Z|00535|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 up in Southbound
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0af29e5c-e2a9-4164-9661-4ff426e2ddb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.622 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.643 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.652 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f3e11b-8fe8-4544-a864-5ffe68ca6996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bf622f28-f4de-4be8-980d-12cfcbb9b3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.661 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.661 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.662 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:a5:aa:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.662 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:1e:ab:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.681 2 DEBUG nova.virt.libvirt.guest [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct 07 14:16:14 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:14 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:14 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:14 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:14 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:14 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.704 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f53bba93-51ad-490a-bbe5-92807fc126b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.708 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1bf657-d9d6-4336-80c6-2c9c0ea9caa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324185, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.745 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44080cb5-ad2e-484c-8a10-00f6d0175c41]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324186, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324186, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 nova_compute[259550]: 2025-10-07 14:16:14.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:14 compute-0 ceph-mon[74295]: pgmap v1549: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Oct 07 14:16:15 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 07 14:16:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 465 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct 07 14:16:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.047 2 DEBUG nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.047 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.048 2 WARNING nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.126 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.126 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.127 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state stopped and task_state None.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.129 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state stopped and task_state None.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.130 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Processing event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.130 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.132 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state building and task_state spawning.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.133 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.146 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846576.1448803, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.147 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Resumed (Lifecycle Event)
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.152 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.159 2 INFO nova.virt.libvirt.driver [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance spawned successfully.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.160 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.171 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.176 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.187 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.188 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.188 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.209 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.298 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.299 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.327 2 INFO nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 8.34 seconds to spawn the instance on the hypervisor.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.328 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.337 2 DEBUG nova.objects.instance [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.371 2 DEBUG nova.virt.libvirt.vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.372 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.373 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.379 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.382 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.385 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.386 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:16 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.394 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.400 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='63'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <name>instance-00000037</name>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='serial'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='uuid'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk' index='2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config' index='1'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:a5:aa:77'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='tap62690261-dd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:1e:ab:6d'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='tap1b8e1852-2a'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </target>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </console>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c542,c990</label>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c542,c990</imagelabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:16 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.401 2 INFO nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the persistent domain config.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.401 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap1b8e1852-2a with device alias net1 from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.402 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:16 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.416 2 INFO nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 9.57 seconds to build instance.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.435 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.436 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846561.4340413, 83645517-a08a-46d7-b715-15b5d7f078ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.438 2 INFO nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Stopped (Lifecycle Event)
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.447 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.459 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.473 2 DEBUG nova.compute.manager [None req-ddf38e79-546a-4003-83fb-7a0f529aec3e - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:16 compute-0 kernel: tap1b8e1852-2a (unregistering): left promiscuous mode
Oct 07 14:16:16 compute-0 NetworkManager[44949]: <info>  [1759846576.4857] device (tap1b8e1852-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:16 compute-0 ovn_controller[151684]: 2025-10-07T14:16:16Z|00536|binding|INFO|Releasing lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from this chassis (sb_readonly=0)
Oct 07 14:16:16 compute-0 ovn_controller[151684]: 2025-10-07T14:16:16Z|00537|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 down in Southbound
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 ovn_controller[151684]: 2025-10-07T14:16:16Z|00538|binding|INFO|Removing iface tap1b8e1852-2a ovn-installed in OVS
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.508 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap1b8e1852-2a with device alias net1 for instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.513 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846576.5135515, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.514 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.523 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.523 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='63'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <name>instance-00000037</name>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='serial'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='uuid'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.524 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.526 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk' index='2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config' index='1'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:a5:aa:77'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target dev='tap62690261-dd'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       </target>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </console>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c542,c990</label>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c542,c990</imagelabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:16 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.523 2 INFO nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the live domain config.
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.524 2 DEBUG nova.virt.libvirt.vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.525 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.526 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.526 2 DEBUG os_vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b8e1852-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.550 2 INFO os_vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.552 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:16</nova:creationTime>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct 07 14:16:16 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:16:16 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:16 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:16 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:16 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b361a39-8000-4704-a60c-42b432466fe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c3061a57-eb48-463a-b3cb-2a3e200326ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[161b1f34-f532-4234-b74d-48f8ec9611c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.657 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8effd6f0-a80c-4be1-a963-2848f54d7064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a67296e-968b-40ec-ab5d-d1462a930206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324204, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4486dc30-9d89-44c3-95b5-024f2d4a45ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324205, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324205, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 nova_compute[259550]: 2025-10-07 14:16:16.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.715 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:16 compute-0 ceph-mon[74295]: pgmap v1550: 305 pgs: 305 active+clean; 465 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct 07 14:16:16 compute-0 ovn_controller[151684]: 2025-10-07T14:16:16Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:11:48 10.100.0.8
Oct 07 14:16:16 compute-0 ovn_controller[151684]: 2025-10-07T14:16:16Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:11:48 10.100.0.8
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.170 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'flavor' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.195 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.196 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.197 2 DEBUG nova.network.neutron [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.197 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'info_cache' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.467 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.468 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.488 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.559 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.559 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 481 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.803 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:16:17 compute-0 nova_compute[259550]: 2025-10-07 14:16:17.804 2 INFO nova.compute.claims [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:16:17 compute-0 ovn_controller[151684]: 2025-10-07T14:16:17Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:8c:cc 10.100.0.13
Oct 07 14:16:17 compute-0 ovn_controller[151684]: 2025-10-07T14:16:17Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8c:cc 10.100.0.13
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.096 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.227 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.228 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.228 2 DEBUG nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.285 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.285 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.286 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.287 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.289 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.289 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147183102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.685 2 DEBUG nova.network.neutron [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.688 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.695 2 DEBUG nova.compute.provider_tree [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.715 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.721 2 DEBUG nova.scheduler.client.report [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.745 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.747 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.765 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance destroyed successfully.
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.765 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.780 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.799 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.800 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.801 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.801 2 DEBUG os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b66f2d4-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.810 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.811 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.817 2 INFO os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.825 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start _get_guest_xml network_info=[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.831 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.846 2 WARNING nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.854 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.855 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.857 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.863 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.864 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.864 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:18 compute-0 nova_compute[259550]: 2025-10-07 14:16:18.901 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:18 compute-0 ceph-mon[74295]: pgmap v1551: 305 pgs: 305 active+clean; 481 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct 07 14:16:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1147183102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.021 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.023 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.023 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating image(s)
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.055 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.085 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.135 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.143 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.205 2 DEBUG nova.policy [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff37c390826e43079eff2a1423ccc2b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.256 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.257 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.258 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.258 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.287 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.293 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326782982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.513 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.564 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.649 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.749 2 INFO nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.749 2 DEBUG nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 532 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 250 op/s
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.776 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] resizing rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.824 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.855 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.872 2 DEBUG nova.compute.manager [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-changed-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG nova.compute.manager [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing instance network info cache due to event network-changed-ac811e84-843c-4265-b536-c653f6135295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing network info cache for port ac811e84-843c-4265-b536-c653f6135295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.933 2 DEBUG nova.objects.instance [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'migration_context' on Instance uuid 4aa20e30-d71a-4765-9b3e-a72a156d2c88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.948 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.949 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Ensure instance console log exists: /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.949 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.950 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:19 compute-0 nova_compute[259550]: 2025-10-07 14:16:19.950 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/326782982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:19 compute-0 ovn_controller[151684]: 2025-10-07T14:16:19Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:1b:e2 10.100.0.12
Oct 07 14:16:19 compute-0 ovn_controller[151684]: 2025-10-07T14:16:19Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:1b:e2 10.100.0.12
Oct 07 14:16:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810400117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.189 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.191 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.191 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.193 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.194 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.226 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully created port: 68a7ca31-4ee2-4e32-9574-3113f63090cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.231 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <uuid>cfd30417-ee01-41d3-8a93-e49cd960d338</uuid>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <name>instance-00000039</name>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-3008329</nova:name>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:16:18</nova:creationTime>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <nova:port uuid="0b66f2d4-e098-4b4c-902f-2a9a2a9764cc">
Oct 07 14:16:20 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="serial">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="uuid">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk">
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config">
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:1e:6d:07"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <target dev="tap0b66f2d4-e0"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log" append="off"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:16:20 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:16:20 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:20 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:20 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:20 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.237 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.237 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.238 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.238 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.239 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.239 2 DEBUG os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b66f2d4-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b66f2d4-e0, col_values=(('external_ids', {'iface-id': '0b66f2d4-e098-4b4c-902f-2a9a2a9764cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:6d:07', 'vm-uuid': 'cfd30417-ee01-41d3-8a93-e49cd960d338'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 NetworkManager[44949]: <info>  [1759846580.2486] manager: (tap0b66f2d4-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.255 2 INFO os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')
Oct 07 14:16:20 compute-0 kernel: tap0b66f2d4-e0: entered promiscuous mode
Oct 07 14:16:20 compute-0 NetworkManager[44949]: <info>  [1759846580.3314] manager: (tap0b66f2d4-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct 07 14:16:20 compute-0 ovn_controller[151684]: 2025-10-07T14:16:20Z|00539|binding|INFO|Claiming lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for this chassis.
Oct 07 14:16:20 compute-0 ovn_controller[151684]: 2025-10-07T14:16:20Z|00540|binding|INFO|0b66f2d4-e098-4b4c-902f-2a9a2a9764cc: Claiming fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 ovn_controller[151684]: 2025-10-07T14:16:20Z|00541|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc ovn-installed in OVS
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 systemd-udevd[324471]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:20 compute-0 systemd-machined[214580]: New machine qemu-71-instance-00000039.
Oct 07 14:16:20 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000039.
Oct 07 14:16:20 compute-0 NetworkManager[44949]: <info>  [1759846580.3961] device (tap0b66f2d4-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:20 compute-0 NetworkManager[44949]: <info>  [1759846580.3974] device (tap0b66f2d4-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.399 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.402 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis
Oct 07 14:16:20 compute-0 ovn_controller[151684]: 2025-10-07T14:16:20Z|00542|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc up in Southbound
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.405 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b999519-8c00-4ea0-a0d9-2aec6c519be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.463 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2975baee-5ecf-4303-a167-23f813494e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.467 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0a20044b-144b-4829-85e5-032bcadb28fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.511 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62557919-fff3-48bb-91fc-b73a053bcf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b627c2ee-0481-4f0e-8d2b-1a4227b46980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324486, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.554 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8c7a37-d3f8-446c-a5c8-f02032463c8a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324487, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324487, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.557 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.560 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.937 2 DEBUG nova.compute.manager [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG nova.compute.manager [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:20 compute-0 nova_compute[259550]: 2025-10-07 14:16:20.939 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:20 compute-0 ceph-mon[74295]: pgmap v1552: 305 pgs: 305 active+clean; 532 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 250 op/s
Oct 07 14:16:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2810400117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.213 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for cfd30417-ee01-41d3-8a93-e49cd960d338 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846581.212521, cfd30417-ee01-41d3-8a93-e49cd960d338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Resumed (Lifecycle Event)
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.224 2 DEBUG nova.compute.manager [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.230 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance rebooted successfully.
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.230 2 DEBUG nova.compute.manager [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.251 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.280 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.280 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.281 2 INFO nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Terminating instance
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.282 2 DEBUG nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.284 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846581.2126734, cfd30417-ee01-41d3-8a93-e49cd960d338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Started (Lifecycle Event)
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.308 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.315 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:21 compute-0 kernel: tapac811e84-84 (unregistering): left promiscuous mode
Oct 07 14:16:21 compute-0 NetworkManager[44949]: <info>  [1759846581.3301] device (tapac811e84-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 ovn_controller[151684]: 2025-10-07T14:16:21Z|00543|binding|INFO|Releasing lport ac811e84-843c-4265-b536-c653f6135295 from this chassis (sb_readonly=0)
Oct 07 14:16:21 compute-0 ovn_controller[151684]: 2025-10-07T14:16:21Z|00544|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 down in Southbound
Oct 07 14:16:21 compute-0 ovn_controller[151684]: 2025-10-07T14:16:21Z|00545|binding|INFO|Removing iface tapac811e84-84 ovn-installed in OVS
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.351 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:97:c9 10.100.0.6'], port_security=['fa:16:3e:66:97:c9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ac811e84-843c-4265-b536-c653f6135295) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.352 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ac811e84-843c-4265-b536-c653f6135295 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.354 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b5369e-e95f-4375-9d3d-5e054dd3cf09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct 07 14:16:21 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 5.985s CPU time.
Oct 07 14:16:21 compute-0 systemd-machined[214580]: Machine qemu-70-instance-0000003d terminated.
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.410 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d80ca6-088d-42ab-b760-cea73efc3b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.414 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31107181-faa5-4709-88be-c020e5c183f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a125fdf-571d-4fe7-91fc-84bc78f378e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.468 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95cdeac9-0e85-42a6-b7a9-70d34a3cf2fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324539, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd31408a-2dc5-4cd0-b9eb-a718851b8810]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324540, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324540, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.500 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.510 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.510 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.511 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.512 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.524 2 INFO nova.virt.libvirt.driver [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance destroyed successfully.
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.525 2 DEBUG nova.objects.instance [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.542 2 DEBUG nova.virt.libvirt.vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:16Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.543 2 DEBUG nova.network.os_vif_util [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.543 2 DEBUG nova.network.os_vif_util [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.544 2 DEBUG os_vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac811e84-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.555 2 INFO os_vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84')
Oct 07 14:16:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.2 MiB/s wr, 272 op/s
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.859 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully created port: ec2bde76-e053-498c-9d73-ef340b6cfe82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.864 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updated VIF entry in instance network info cache for port ac811e84-843c-4265-b536-c653f6135295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.864 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.904 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.971 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.972 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.972 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.973 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.973 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.973 2 WARNING nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.975 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.975 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.975 2 WARNING nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.982 2 INFO nova.virt.libvirt.driver [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deleting instance files /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7_del
Oct 07 14:16:21 compute-0 nova_compute[259550]: 2025-10-07 14:16:21.983 2 INFO nova.virt.libvirt.driver [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deletion of /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7_del complete
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.052 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.052 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.060 2 INFO nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.061 2 DEBUG oslo.service.loopingcall [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.061 2 DEBUG nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.062 2 DEBUG nova.network.neutron [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.152 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:16:22
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', 'volumes', '.rgw.root', 'backups', 'images', 'default.rgw.log']
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.715 2 DEBUG nova.network.neutron [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.735 2 INFO nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 0.67 seconds to deallocate network for instance.
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.772 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully updated port: 68a7ca31-4ee2-4e32-9574-3113f63090cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.775 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.775 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:16:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:16:22 compute-0 nova_compute[259550]: 2025-10-07 14:16:22.933 2 DEBUG oslo_concurrency.processutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:22 compute-0 ceph-mon[74295]: pgmap v1553: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.2 MiB/s wr, 272 op/s
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.075 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.075 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765835727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.450 2 DEBUG oslo_concurrency.processutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.466 2 DEBUG nova.compute.provider_tree [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.496 2 DEBUG nova.scheduler.client.report [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:23 compute-0 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.540 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.569 2 INFO nova.scheduler.client.report [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance c14b06ec-ce54-4081-8d72-b22529c3b0b7
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.632 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.647 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully updated port: ec2bde76-e053-498c-9d73-ef340b6cfe82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Oct 07 14:16:23 compute-0 nova_compute[259550]: 2025-10-07 14:16:23.807 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:16:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2765835727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.309 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.309 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.310 2 INFO nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Terminating instance
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.311 2 DEBUG nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.324 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.325 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.358 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.359 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-deleted-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:24 compute-0 kernel: tap432c69dd-fb (unregistering): left promiscuous mode
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.371 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.371 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.372 2 WARNING nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state deleted and task_state None.
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:24 compute-0 NetworkManager[44949]: <info>  [1759846584.3760] device (tap432c69dd-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.373 2 WARNING nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state deleted and task_state None.
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.381 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-changed-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.381 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing instance network info cache due to event network-changed-68a7ca31-4ee2-4e32-9574-3113f63090cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.382 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:24 compute-0 ovn_controller[151684]: 2025-10-07T14:16:24Z|00546|binding|INFO|Releasing lport 432c69dd-fb1b-432b-b867-9fe29716430d from this chassis (sb_readonly=0)
Oct 07 14:16:24 compute-0 ovn_controller[151684]: 2025-10-07T14:16:24Z|00547|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d down in Southbound
Oct 07 14:16:24 compute-0 ovn_controller[151684]: 2025-10-07T14:16:24Z|00548|binding|INFO|Removing iface tap432c69dd-fb ovn-installed in OVS
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.415 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:54:7c 10.100.0.7'], port_security=['fa:16:3e:a6:54:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a36afbb-3eb8-42d9-b597-25ad85279a69', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=432c69dd-fb1b-432b-b867-9fe29716430d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.416 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 432c69dd-fb1b-432b-b867-9fe29716430d in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.418 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66bbf6fb-d745-4a2c-bac2-ec0c15ae093d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.421 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb namespace which is not needed anymore
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.434 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.434 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.435 2 DEBUG nova.objects.instance [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:24 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 07 14:16:24 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Consumed 18.499s CPU time.
Oct 07 14:16:24 compute-0 systemd-machined[214580]: Machine qemu-57-instance-00000032 terminated.
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.554 2 INFO nova.virt.libvirt.driver [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance destroyed successfully.
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.555 2 DEBUG nova.objects.instance [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.570 2 DEBUG nova.virt.libvirt.vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.570 2 DEBUG nova.network.os_vif_util [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.571 2 DEBUG nova.network.os_vif_util [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.571 2 DEBUG os_vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.574 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap432c69dd-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.580 2 INFO os_vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb')
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : Exiting Master process...
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : Exiting Master process...
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [ALERT]    (316364) : Current worker (316368) exited with code 143 (Terminated)
Oct 07 14:16:24 compute-0 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : All workers exited. Exiting... (0)
Oct 07 14:16:24 compute-0 systemd[1]: libpod-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope: Deactivated successfully.
Oct 07 14:16:24 compute-0 podman[324623]: 2025-10-07 14:16:24.631874078 +0000 UTC m=+0.065567123 container died f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a901171075059a0193ddbd3fb0f9b59df344514fed4a53bc1eb915147968b7ce-merged.mount: Deactivated successfully.
Oct 07 14:16:24 compute-0 podman[324623]: 2025-10-07 14:16:24.705246686 +0000 UTC m=+0.138939731 container cleanup f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:16:24 compute-0 systemd[1]: libpod-conmon-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope: Deactivated successfully.
Oct 07 14:16:24 compute-0 podman[324671]: 2025-10-07 14:16:24.781321215 +0000 UTC m=+0.047733342 container remove f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70851d26-4540-456a-860a-2aaf639716a3]: (4, ('Tue Oct  7 02:16:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb (f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153)\nf4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153\nTue Oct  7 02:16:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb (f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153)\nf4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.795 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f904101-7ba7-47f5-b37f-1b240e894f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.797 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 kernel: tap7ba9d553-b0: left promiscuous mode
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[521f66b1-8af8-4db2-8cd9-b53ef80705b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.827 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d08b07-9286-44b1-a5e2-a3a05c017c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b866e3-3065-4f57-8a0e-8686ea0316aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e7ece3-9bef-4cf4-ab5f-35bca2a148b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703710, 'reachable_time': 28689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324688, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ba9d553\x2dbbaa\x2d47f8\x2d8281\x2d6a74e53c37fb.mount: Deactivated successfully.
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.860 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.861 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[694f659f-50e8-4c78-8615-2039555e1cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.894 2 DEBUG nova.objects.instance [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:24 compute-0 nova_compute[259550]: 2025-10-07 14:16:24.950 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:16:24 compute-0 ceph-mon[74295]: pgmap v1554: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.069 2 INFO nova.virt.libvirt.driver [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deleting instance files /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb_del
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.070 2 INFO nova.virt.libvirt.driver [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deletion of /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb_del complete
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.230 2 INFO nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.230 2 DEBUG oslo.service.loopingcall [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.231 2 DEBUG nova.compute.manager [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:25 compute-0 nova_compute[259550]: 2025-10-07 14:16:25.231 2 DEBUG nova.network.neutron [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 587 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.2 MiB/s wr, 367 op/s
Oct 07 14:16:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:26 compute-0 ceph-mon[74295]: pgmap v1555: 305 pgs: 305 active+clean; 587 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.2 MiB/s wr, 367 op/s
Oct 07 14:16:26 compute-0 sudo[324690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:26 compute-0 sudo[324690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:26 compute-0 sudo[324690]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:26 compute-0 nova_compute[259550]: 2025-10-07 14:16:26.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:26 compute-0 sudo[324715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:16:26 compute-0 sudo[324715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:26 compute-0 sudo[324715]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:26 compute-0 sudo[324740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:26 compute-0 sudo[324740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:26 compute-0 sudo[324740]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:26 compute-0 sudo[324765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:16:26 compute-0 sudo[324765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:26 compute-0 nova_compute[259550]: 2025-10-07 14:16:26.980 2 DEBUG nova.policy [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:16:26 compute-0 nova_compute[259550]: 2025-10-07 14:16:26.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:26 compute-0 nova_compute[259550]: 2025-10-07 14:16:26.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:16:27 compute-0 nova_compute[259550]: 2025-10-07 14:16:27.024 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:16:27 compute-0 sudo[324765]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d4ab9af-9cb6-4697-81bc-0550a38a30c9 does not exist
Oct 07 14:16:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 335cad32-8f5e-495c-abbc-f7b476e10ab1 does not exist
Oct 07 14:16:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 85879097-8a40-400e-87d2-a0a5c12a5aee does not exist
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:16:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:16:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:16:27 compute-0 sudo[324820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:27 compute-0 sudo[324820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:27 compute-0 sudo[324820]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:27 compute-0 sudo[324845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:16:27 compute-0 sudo[324845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:27 compute-0 sudo[324845]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:27 compute-0 sudo[324870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:27 compute-0 sudo[324870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:27 compute-0 sudo[324870]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:27 compute-0 sudo[324895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:16:27 compute-0 sudo[324895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 534 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 8.2 MiB/s wr, 382 op/s
Oct 07 14:16:27 compute-0 podman[324960]: 2025-10-07 14:16:27.964429673 +0000 UTC m=+0.074187611 container create 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:27.916202889 +0000 UTC m=+0.025960817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:28 compute-0 systemd[1]: Started libpod-conmon-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope.
Oct 07 14:16:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:28.091715106 +0000 UTC m=+0.201473054 container init 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:28.099913552 +0000 UTC m=+0.209671470 container start 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:16:28 compute-0 stupefied_villani[324977]: 167 167
Oct 07 14:16:28 compute-0 systemd[1]: libpod-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope: Deactivated successfully.
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.107 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:28.119181431 +0000 UTC m=+0.228939389 container attach 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:28.120746922 +0000 UTC m=+0.230504820 container died 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.140 2 DEBUG nova.network.neutron [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d74f88123681738f50f4da98dc23b128f2ef609295ad4fb42ae87860887d220-merged.mount: Deactivated successfully.
Oct 07 14:16:28 compute-0 podman[324960]: 2025-10-07 14:16:28.266485982 +0000 UTC m=+0.376243880 container remove 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:16:28 compute-0 systemd[1]: libpod-conmon-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope: Deactivated successfully.
Oct 07 14:16:28 compute-0 ceph-mon[74295]: pgmap v1556: 305 pgs: 305 active+clean; 534 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 8.2 MiB/s wr, 382 op/s
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.441 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.441 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance network_info: |[{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.442 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.442 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing network info cache for port 68a7ca31-4ee2-4e32-9574-3113f63090cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.445 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start _get_guest_xml network_info=[{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.451 2 WARNING nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.457 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.458 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.462 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.462 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.465 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.480 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.481 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.481 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.486 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.529 2 INFO nova.compute.manager [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 3.30 seconds to deallocate network for instance.
Oct 07 14:16:28 compute-0 podman[325002]: 2025-10-07 14:16:28.541184309 +0000 UTC m=+0.094372304 container create 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:28 compute-0 podman[325002]: 2025-10-07 14:16:28.491744883 +0000 UTC m=+0.044932898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:28 compute-0 systemd[1]: Started libpod-conmon-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope.
Oct 07 14:16:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:28 compute-0 podman[325002]: 2025-10-07 14:16:28.700030785 +0000 UTC m=+0.253218790 container init 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:16:28 compute-0 podman[325002]: 2025-10-07 14:16:28.711533309 +0000 UTC m=+0.264721304 container start 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:16:28 compute-0 podman[325002]: 2025-10-07 14:16:28.747708265 +0000 UTC m=+0.300896310 container attach 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.833 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.835 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214005704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:28 compute-0 nova_compute[259550]: 2025-10-07 14:16:28.983 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.005 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.010 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.043 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.044 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.205 2 DEBUG oslo_concurrency.processutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3214005704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:16:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1799583381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.500 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.502 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.502 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.503 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.505 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.506 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.506 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.516 2 DEBUG nova.objects.instance [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4aa20e30-d71a-4765-9b3e-a72a156d2c88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.611 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <uuid>4aa20e30-d71a-4765-9b3e-a72a156d2c88</uuid>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <name>instance-0000003e</name>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestMultiNic-server-1632304559</nova:name>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:16:28</nova:creationTime>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:user uuid="ff37c390826e43079eff2a1423ccc2b8">tempest-ServersTestMultiNic-1400500697-project-member</nova:user>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:project uuid="1a99ac1945604cf5a5a5bd917ea52280">tempest-ServersTestMultiNic-1400500697</nova:project>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:port uuid="68a7ca31-4ee2-4e32-9574-3113f63090cf">
Oct 07 14:16:29 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <nova:port uuid="ec2bde76-e053-498c-9d73-ef340b6cfe82">
Oct 07 14:16:29 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.1.60" ipVersion="4"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="serial">4aa20e30-d71a-4765-9b3e-a72a156d2c88</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="uuid">4aa20e30-d71a-4765-9b3e-a72a156d2c88</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk">
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config">
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:16:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6f:be:e9"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <target dev="tap68a7ca31-4e"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:09:34:a2"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <target dev="tapec2bde76-e0"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/console.log" append="off"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:16:29 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:16:29 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:29 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:29 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:29 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Preparing to wait for external event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Preparing to wait for external event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.614 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.614 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.615 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.615 2 DEBUG os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68a7ca31-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68a7ca31-4e, col_values=(('external_ids', {'iface-id': '68a7ca31-4ee2-4e32-9574-3113f63090cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:be:e9', 'vm-uuid': '4aa20e30-d71a-4765-9b3e-a72a156d2c88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 NetworkManager[44949]: <info>  [1759846589.6241] manager: (tap68a7ca31-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.632 2 INFO os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e')
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.633 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-14005
00697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.633 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.635 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.635 2 DEBUG os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec2bde76-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec2bde76-e0, col_values=(('external_ids', {'iface-id': 'ec2bde76-e053-498c-9d73-ef340b6cfe82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:34:a2', 'vm-uuid': '4aa20e30-d71a-4765-9b3e-a72a156d2c88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:29 compute-0 NetworkManager[44949]: <info>  [1759846589.6412] manager: (tapec2bde76-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.650 2 INFO os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0')
Oct 07 14:16:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139568954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.720 2 DEBUG oslo_concurrency.processutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.728 2 DEBUG nova.compute.provider_tree [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 484 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 380 op/s
Oct 07 14:16:29 compute-0 dreamy_cohen[325021]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:16:29 compute-0 dreamy_cohen[325021]: --> relative data size: 1.0
Oct 07 14:16:29 compute-0 dreamy_cohen[325021]: --> All data devices are unavailable
Oct 07 14:16:29 compute-0 systemd[1]: libpod-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Deactivated successfully.
Oct 07 14:16:29 compute-0 systemd[1]: libpod-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Consumed 1.025s CPU time.
Oct 07 14:16:29 compute-0 podman[325002]: 2025-10-07 14:16:29.817741401 +0000 UTC m=+1.370929396 container died 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e-merged.mount: Deactivated successfully.
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.896 2 DEBUG nova.scheduler.client.report [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:29 compute-0 podman[325002]: 2025-10-07 14:16:29.897304793 +0000 UTC m=+1.450492798 container remove 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.905 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:6f:be:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:09:34:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.908 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Using config drive
Oct 07 14:16:29 compute-0 systemd[1]: libpod-conmon-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Deactivated successfully.
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.930 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:29 compute-0 sudo[324895]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.959 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.995 2 INFO nova.scheduler.client.report [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance fc163bed-856c-4ea5-9bf3-6989fb1027eb
Oct 07 14:16:29 compute-0 nova_compute[259550]: 2025-10-07 14:16:29.997 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully updated port: 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:16:30 compute-0 sudo[325168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:30 compute-0 sudo[325168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:30 compute-0 sudo[325168]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:30 compute-0 sudo[325193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:16:30 compute-0 sudo[325193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:30 compute-0 sudo[325193]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.106 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:30 compute-0 sudo[325218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:30 compute-0 sudo[325218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:30 compute-0 sudo[325218]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.150 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.159 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.159 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 WARNING nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state deleted and task_state None.
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.166 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.166 2 WARNING nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state deleted and task_state None.
Oct 07 14:16:30 compute-0 sudo[325244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:16:30 compute-0 sudo[325244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.242 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:30 compute-0 podman[325268]: 2025-10-07 14:16:30.308422434 +0000 UTC m=+0.068945982 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:16:30 compute-0 podman[325269]: 2025-10-07 14:16:30.308673491 +0000 UTC m=+0.063313004 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:16:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1799583381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:16:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4139568954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:30 compute-0 ceph-mon[74295]: pgmap v1557: 305 pgs: 305 active+clean; 484 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 380 op/s
Oct 07 14:16:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014860140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.595 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.618818524 +0000 UTC m=+0.060604202 container create 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:16:30 compute-0 systemd[1]: Started libpod-conmon-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope.
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.585907014 +0000 UTC m=+0.027692702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.696689171 +0000 UTC m=+0.138474879 container init 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.704838486 +0000 UTC m=+0.146624204 container start 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.709468128 +0000 UTC m=+0.151253826 container attach 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:16:30 compute-0 laughing_williamson[325385]: 167 167
Oct 07 14:16:30 compute-0 systemd[1]: libpod-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope: Deactivated successfully.
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.713596267 +0000 UTC m=+0.155381955 container died 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e491b4bdaaa625084a57e1c0bac85f63b38b550f3e64af51647f290012f22852-merged.mount: Deactivated successfully.
Oct 07 14:16:30 compute-0 podman[325368]: 2025-10-07 14:16:30.764241615 +0000 UTC m=+0.206027313 container remove 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:16:30 compute-0 systemd[1]: libpod-conmon-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope: Deactivated successfully.
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.810 2 WARNING nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.822 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.822 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.828 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.828 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.833 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.834 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.848 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.848 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.978 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating config drive at /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config
Oct 07 14:16:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:30 compute-0 podman[325408]: 2025-10-07 14:16:30.982399718 +0000 UTC m=+0.055801735 container create b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:16:30 compute-0 nova_compute[259550]: 2025-10-07 14:16:30.988 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc1u6kdl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:31 compute-0 systemd[1]: Started libpod-conmon-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope.
Oct 07 14:16:31 compute-0 podman[325408]: 2025-10-07 14:16:30.960741116 +0000 UTC m=+0.034143153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:31 compute-0 podman[325408]: 2025-10-07 14:16:31.089257012 +0000 UTC m=+0.162659059 container init b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:16:31 compute-0 podman[325408]: 2025-10-07 14:16:31.099592904 +0000 UTC m=+0.172994921 container start b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:16:31 compute-0 podman[325408]: 2025-10-07 14:16:31.103776925 +0000 UTC m=+0.177178942 container attach b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.136 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc1u6kdl" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.163 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.168 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.215 2 DEBUG nova.compute.manager [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.215 2 DEBUG nova.compute.manager [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.216 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.266 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3184MB free_disk=59.73991012573242GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1014860140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.337 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.338 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Deleting local config drive /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config because it was imported into RBD.
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.4201] manager: (tap68a7ca31-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct 07 14:16:31 compute-0 kernel: tap68a7ca31-4e: entered promiscuous mode
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00549|binding|INFO|Claiming lport 68a7ca31-4ee2-4e32-9574-3113f63090cf for this chassis.
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00550|binding|INFO|68a7ca31-4ee2-4e32-9574-3113f63090cf: Claiming fa:16:3e:6f:be:e9 10.100.0.6
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.4455] manager: (tapec2bde76-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.457 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:be:e9 10.100.0.6'], port_security=['fa:16:3e:6f:be:e9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c6dcc3-8dd4-4e2a-af6b-6893629fd93b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68a7ca31-4ee2-4e32-9574-3113f63090cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.459 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68a7ca31-4ee2-4e32-9574-3113f63090cf in datapath b044fd19-c1bd-4478-b5ed-6feb8831fea0 bound to our chassis
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.460 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b044fd19-c1bd-4478-b5ed-6feb8831fea0
Oct 07 14:16:31 compute-0 systemd-udevd[325487]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:31 compute-0 systemd-udevd[325485]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.481 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a296e3c8-8fe3-45bf-ae8d-4287ffb3506f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.483 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb044fd19-c1 in ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:16:31 compute-0 kernel: tapec2bde76-e0: entered promiscuous mode
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.487 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb044fd19-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:16:31 compute-0 systemd-machined[214580]: New machine qemu-72-instance-0000003e.
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00551|binding|INFO|Claiming lport ec2bde76-e053-498c-9d73-ef340b6cfe82 for this chassis.
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00552|binding|INFO|ec2bde76-e053-498c-9d73-ef340b6cfe82: Claiming fa:16:3e:09:34:a2 10.100.1.60
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.4958] device (tapec2bde76-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.4968] device (tap68a7ca31-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.4975] device (tapec2bde76-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b076e94e-f212-4f8a-b960-e8d5f6547adc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11998fd6-0f4a-436e-b35c-1af79bc7782c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003e.
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.5023] device (tap68a7ca31-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00553|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf ovn-installed in OVS
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cfd30417-ee01-41d3-8a93-e49cd960d338 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b3d2cd05-012d-4189-bc6c-c40fc1f72c0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a23d6956-f85a-40b1-9e54-1b32d2af191e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4aa20e30-d71a-4765-9b3e-a72a156d2c88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1344MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.524 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c9d5c4-55f7-44a9-b375-bdf48bad544e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00554|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 ovn-installed in OVS
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[defb5ce8-1b72-4770-8424-1d5e15afd0fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.595 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d84de245-1632-48e4-a23a-ee522bd1d107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.6024] manager: (tapb044fd19-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d7a0b9-3eda-4bb8-b989-c9e2b1567756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.610 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:34:a2 10.100.1.60'], port_security=['fa:16:3e:09:34:a2 10.100.1.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.60/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca58653-b705-43e9-827e-3bbf7240a807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a397202a-2ca0-49d4-bf1d-28612c8843c1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec2bde76-e053-498c-9d73-ef340b6cfe82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00555|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 up in Southbound
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00556|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf up in Southbound
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.638 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.660 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a42d1476-8d3c-4a34-8c12-f8c1914047da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.670 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02a64aad-714d-4e8e-8bea-fb8b7a91bb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.7027] device (tapb044fd19-c0): carrier: link connected
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.711 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a2d041-f7b1-4ee0-9de6-0685cb092b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7957135-1c85-4e95-b5bb-879b889afd60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb044fd19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a4:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717526, 'reachable_time': 32623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325522, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c07d00e8-5a02-4345-bc73-9fffaf96f9b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:a44a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717526, 'tstamp': 717526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325523, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 253 op/s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.793 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1499f181-23a6-46f6-8e74-2168f572300d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb044fd19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a4:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717526, 'reachable_time': 32623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325524, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24335326-5171-4574-8f7c-42e0e7a5950e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c867552a-30d1-468f-bbf1-512e5d5b7ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.924 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb044fd19-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.924 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.925 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb044fd19-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:31 compute-0 kernel: tapb044fd19-c0: entered promiscuous mode
Oct 07 14:16:31 compute-0 NetworkManager[44949]: <info>  [1759846591.9278] manager: (tapb044fd19-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.930 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb044fd19-c0, col_values=(('external_ids', {'iface-id': '4fe1f261-bc24-44ec-9b31-81ad525446fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 ovn_controller[151684]: 2025-10-07T14:16:31Z|00557|binding|INFO|Releasing lport 4fe1f261-bc24-44ec-9b31-81ad525446fb from this chassis (sb_readonly=0)
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]: {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     "0": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "devices": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "/dev/loop3"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             ],
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_name": "ceph_lv0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_size": "21470642176",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "name": "ceph_lv0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "tags": {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_name": "ceph",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.crush_device_class": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.encrypted": "0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_id": "0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.vdo": "0"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             },
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "vg_name": "ceph_vg0"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         }
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     ],
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     "1": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "devices": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "/dev/loop4"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             ],
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_name": "ceph_lv1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_size": "21470642176",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "name": "ceph_lv1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "tags": {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_name": "ceph",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.crush_device_class": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.encrypted": "0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_id": "1",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.vdo": "0"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             },
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "vg_name": "ceph_vg1"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         }
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     ],
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     "2": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "devices": [
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "/dev/loop5"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             ],
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_name": "ceph_lv2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_size": "21470642176",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "name": "ceph_lv2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "tags": {
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.cluster_name": "ceph",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.crush_device_class": "",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.encrypted": "0",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osd_id": "2",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:                 "ceph.vdo": "0"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             },
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "type": "block",
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:             "vg_name": "ceph_vg2"
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:         }
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]:     ]
Oct 07 14:16:31 compute-0 elegant_stonebraker[325427]: }
Oct 07 14:16:31 compute-0 nova_compute[259550]: 2025-10-07 14:16:31.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.962 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.963 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[760af0a5-0923-4162-9be7-57928b9b5871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.964 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b044fd19-c1bd-4478-b5ed-6feb8831fea0
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b044fd19-c1bd-4478-b5ed-6feb8831fea0
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:16:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.965 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'env', 'PROCESS_TAG=haproxy-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b044fd19-c1bd-4478-b5ed-6feb8831fea0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:16:31 compute-0 systemd[1]: libpod-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope: Deactivated successfully.
Oct 07 14:16:31 compute-0 podman[325408]: 2025-10-07 14:16:31.987443918 +0000 UTC m=+1.060845955 container died b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137-merged.mount: Deactivated successfully.
Oct 07 14:16:32 compute-0 podman[325408]: 2025-10-07 14:16:32.077858717 +0000 UTC m=+1.151260734 container remove b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:16:32 compute-0 systemd[1]: libpod-conmon-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope: Deactivated successfully.
Oct 07 14:16:32 compute-0 sudo[325244]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:32 compute-0 sudo[325609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:32 compute-0 sudo[325609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:32 compute-0 sudo[325609]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127327563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.234 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.245 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:32 compute-0 sudo[325637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:16:32 compute-0 sudo[325637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:32 compute-0 sudo[325637]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.307 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:32 compute-0 sudo[325664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:32 compute-0 sudo[325664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:32 compute-0 sudo[325664]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:32 compute-0 ceph-mon[74295]: pgmap v1558: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 253 op/s
Oct 07 14:16:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3127327563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0041403273954874 of space, bias 1.0, pg target 1.24209821864622 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:16:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 07 14:16:32 compute-0 sudo[325705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:16:32 compute-0 sudo[325705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.424 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.424 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.425 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.425 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:16:32 compute-0 podman[325710]: 2025-10-07 14:16:32.431826377 +0000 UTC m=+0.071197241 container create 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.431 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updated VIF entry in instance network info cache for port 68a7ca31-4ee2-4e32-9574-3113f63090cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.432 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:32 compute-0 systemd[1]: Started libpod-conmon-179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c.scope.
Oct 07 14:16:32 compute-0 podman[325710]: 2025-10-07 14:16:32.396278149 +0000 UTC m=+0.035649043 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:16:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7783d4fd75c24f0a72d4cfed4e7ddb762e4c7fbdb995022c9958d288ab1b9453/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.527 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-changed-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing instance network info cache due to event network-changed-ec2bde76-e053-498c-9d73-ef340b6cfe82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.529 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing network info cache for port ec2bde76-e053-498c-9d73-ef340b6cfe82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:32 compute-0 podman[325710]: 2025-10-07 14:16:32.530798932 +0000 UTC m=+0.170169836 container init 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:16:32 compute-0 podman[325710]: 2025-10-07 14:16:32.539018069 +0000 UTC m=+0.178388953 container start 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:32 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : New worker (325765) forked
Oct 07 14:16:32 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : Loading success.
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.607 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bde76-e053-498c-9d73-ef340b6cfe82 in datapath 8ca58653-b705-43e9-827e-3bbf7240a807 unbound from our chassis
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.609 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ca58653-b705-43e9-827e-3bbf7240a807
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4a500c-cbff-47f4-9ee8-6f22533598ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.625 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8ca58653-b1 in ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.627 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8ca58653-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[312ff651-7ffd-48a6-88d1-ab4d08351b17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a74d376b-46c9-4dc4-ae29-26600ed781fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.642 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ff90daba-64fe-40b3-9d59-b4547eec8ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:16:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:16:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceb4c1d-4fe5-4009-b4d9-e8f6b4775470]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.703 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846592.7027683, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.703 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Started (Lifecycle Event)
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.718 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b078c6d5-614d-4214-af5d-c4f09ad70482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 NetworkManager[44949]: <info>  [1759846592.7257] manager: (tap8ca58653-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[936a1b3e-2d25-43cd-a6d5-1741e8fb383f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 systemd-udevd[325516]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.748 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.755 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846592.703018, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.755 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Paused (Lifecycle Event)
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.765 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d169dffa-f8fd-4e42-966e-22350b24c789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.768 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbe4b6a-a236-40ab-b57f-cfbe51425309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 podman[325807]: 2025-10-07 14:16:32.782134012 +0000 UTC m=+0.051489341 container create ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:16:32 compute-0 NetworkManager[44949]: <info>  [1759846592.7973] device (tap8ca58653-b0): carrier: link connected
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.804 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71417853-1274-4e4f-9838-e1f3890af24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.827 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24f2ef52-e1e8-4220-8b92-0830266f838e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca58653-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:fa:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717636, 'reachable_time': 18852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325826, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 systemd[1]: Started libpod-conmon-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope.
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cda5c29e-642f-4b0c-8344-c9597b7eaba1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:fa9a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717636, 'tstamp': 717636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325829, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 podman[325807]: 2025-10-07 14:16:32.765254866 +0000 UTC m=+0.034610225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.875 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bccad0-51a1-4f24-bd77-40589a25a34e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca58653-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:fa:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717636, 'reachable_time': 18852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325833, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 podman[325807]: 2025-10-07 14:16:32.887529356 +0000 UTC m=+0.156884705 container init ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:32 compute-0 podman[325807]: 2025-10-07 14:16:32.896683737 +0000 UTC m=+0.166039066 container start ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:16:32 compute-0 podman[325807]: 2025-10-07 14:16:32.9005427 +0000 UTC m=+0.169898059 container attach ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:16:32 compute-0 systemd[1]: libpod-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope: Deactivated successfully.
Oct 07 14:16:32 compute-0 compassionate_mendeleev[325830]: 167 167
Oct 07 14:16:32 compute-0 conmon[325830]: conmon ba3244946807c252c44d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope/container/memory.events
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23462fda-bf81-4dd1-baad-cbfaff14bac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 podman[325838]: 2025-10-07 14:16:32.955691397 +0000 UTC m=+0.032809758 container died ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.984 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1920694f-ae8c-435a-bf38-d491eb52d969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca58653-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ca58653-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:32 compute-0 NetworkManager[44949]: <info>  [1759846592.9898] manager: (tap8ca58653-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Oct 07 14:16:32 compute-0 kernel: tap8ca58653-b0: entered promiscuous mode
Oct 07 14:16:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.993 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ca58653-b0, col_values=(('external_ids', {'iface-id': 'f1bc0e82-dad6-41a5-b620-24a2adc845a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:32 compute-0 nova_compute[259550]: 2025-10-07 14:16:32.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:32 compute-0 ovn_controller[151684]: 2025-10-07T14:16:32Z|00558|binding|INFO|Releasing lport f1bc0e82-dad6-41a5-b620-24a2adc845a9 from this chassis (sb_readonly=0)
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.001 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:33 compute-0 podman[325838]: 2025-10-07 14:16:33.007344861 +0000 UTC m=+0.084463202 container remove ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.011 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[60634390-b973-4e24-99ad-0e65870a3604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.013 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8ca58653-b705-43e9-827e-3bbf7240a807
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8ca58653-b705-43e9-827e-3bbf7240a807
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:16:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.014 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'env', 'PROCESS_TAG=haproxy-8ca58653-b705-43e9-827e-3bbf7240a807', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8ca58653-b705-43e9-827e-3bbf7240a807.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:16:33 compute-0 systemd[1]: libpod-conmon-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope: Deactivated successfully.
Oct 07 14:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-939088120ee116cceb0c3ade37b0269a883a25d9c9a1a5c68d4bc1eee4c58bb4-merged.mount: Deactivated successfully.
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.128 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:16:33 compute-0 podman[325865]: 2025-10-07 14:16:33.232424807 +0000 UTC m=+0.045226236 container create e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:16:33 compute-0 systemd[1]: Started libpod-conmon-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope.
Oct 07 14:16:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:33 compute-0 podman[325865]: 2025-10-07 14:16:33.210877038 +0000 UTC m=+0.023678477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:16:33 compute-0 podman[325865]: 2025-10-07 14:16:33.323353349 +0000 UTC m=+0.136154778 container init e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:16:33 compute-0 podman[325865]: 2025-10-07 14:16:33.332286805 +0000 UTC m=+0.145088224 container start e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:16:33 compute-0 podman[325865]: 2025-10-07 14:16:33.33741801 +0000 UTC m=+0.150219459 container attach e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:16:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:16:33 compute-0 podman[325908]: 2025-10-07 14:16:33.424637214 +0000 UTC m=+0.052466676 container create 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:33 compute-0 systemd[1]: Started libpod-conmon-05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753.scope.
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.481 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.482 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.482 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:16:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:16:33 compute-0 podman[325908]: 2025-10-07 14:16:33.395938167 +0000 UTC m=+0.023767659 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b1cb4503c16a2a98e5b8ed9040d0c7346f943fe58ca2f33977494e163ea458/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:16:33 compute-0 podman[325908]: 2025-10-07 14:16:33.516910522 +0000 UTC m=+0.144740014 container init 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:16:33 compute-0 podman[325908]: 2025-10-07 14:16:33.525397706 +0000 UTC m=+0.153227168 container start 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:16:33 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : New worker (325929) forked
Oct 07 14:16:33 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : Loading success.
Oct 07 14:16:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 168 op/s
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:33 compute-0 nova_compute[259550]: 2025-10-07 14:16:33.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:34 compute-0 ceph-mon[74295]: pgmap v1559: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 168 op/s
Oct 07 14:16:34 compute-0 nova_compute[259550]: 2025-10-07 14:16:34.372 2 DEBUG nova.compute.manager [req-aaf35bb0-7400-4e00-898b-25d5be6a11c8 req-f7d8c007-4ffc-49cb-a927-945b4fd32105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-deleted-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:34 compute-0 focused_solomon[325886]: {
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_id": 2,
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "type": "bluestore"
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     },
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_id": 1,
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "type": "bluestore"
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     },
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_id": 0,
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:16:34 compute-0 focused_solomon[325886]:         "type": "bluestore"
Oct 07 14:16:34 compute-0 focused_solomon[325886]:     }
Oct 07 14:16:34 compute-0 focused_solomon[325886]: }
Oct 07 14:16:34 compute-0 systemd[1]: libpod-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Deactivated successfully.
Oct 07 14:16:34 compute-0 systemd[1]: libpod-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Consumed 1.050s CPU time.
Oct 07 14:16:34 compute-0 podman[325966]: 2025-10-07 14:16:34.446853369 +0000 UTC m=+0.028436913 container died e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0-merged.mount: Deactivated successfully.
Oct 07 14:16:34 compute-0 podman[325966]: 2025-10-07 14:16:34.508584989 +0000 UTC m=+0.090168533 container remove e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:16:34 compute-0 systemd[1]: libpod-conmon-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Deactivated successfully.
Oct 07 14:16:34 compute-0 sudo[325705]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:16:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:16:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 59215874-037b-4cbe-900c-990d51b957e2 does not exist
Oct 07 14:16:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 80aa4256-efa3-4386-8166-c69b3d8a5a4b does not exist
Oct 07 14:16:34 compute-0 nova_compute[259550]: 2025-10-07 14:16:34.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:34 compute-0 sudo[325982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:16:34 compute-0 sudo[325982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:34 compute-0 sudo[325982]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:34 compute-0 sudo[326007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:16:34 compute-0 sudo[326007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:16:34 compute-0 sudo[326007]: pam_unix(sudo:session): session closed for user root
Oct 07 14:16:34 compute-0 ovn_controller[151684]: 2025-10-07T14:16:34Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:16:34 compute-0 ovn_controller[151684]: 2025-10-07T14:16:34Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:6d:07 10.100.0.11
Oct 07 14:16:34 compute-0 nova_compute[259550]: 2025-10-07 14:16:34.874 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.044 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.046 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.046 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.049 2 DEBUG nova.virt.libvirt.vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.049 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.050 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.051 2 DEBUG os_vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b8e1852-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b8e1852-2a, col_values=(('external_ids', {'iface-id': '1b8e1852-2a5e-4c50-9ab0-110dfb492a49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ab:6d', 'vm-uuid': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 NetworkManager[44949]: <info>  [1759846595.0603] manager: (tap1b8e1852-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.070 2 INFO os_vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.071 2 DEBUG nova.virt.libvirt.vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.072 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.072 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.088 2 DEBUG nova.virt.libvirt.guest [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:35 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:16:35 compute-0 kernel: tap1b8e1852-2a: entered promiscuous mode
Oct 07 14:16:35 compute-0 NetworkManager[44949]: <info>  [1759846595.1042] manager: (tap1b8e1852-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Oct 07 14:16:35 compute-0 ovn_controller[151684]: 2025-10-07T14:16:35Z|00559|binding|INFO|Claiming lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for this chassis.
Oct 07 14:16:35 compute-0 ovn_controller[151684]: 2025-10-07T14:16:35Z|00560|binding|INFO|1b8e1852-2a5e-4c50-9ab0-110dfb492a49: Claiming fa:16:3e:1e:ab:6d 10.100.0.9
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 ovn_controller[151684]: 2025-10-07T14:16:35Z|00561|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 ovn-installed in OVS
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.140 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '7', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:35 compute-0 ovn_controller[151684]: 2025-10-07T14:16:35Z|00562|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 up in Southbound
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.142 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.144 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 systemd-udevd[326038]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.152 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Processing event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.155 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No event matching network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 in dict_keys([('network-vif-plugged', '68a7ca31-4ee2-4e32-9574-3113f63090cf')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.155 2 WARNING nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 for instance with vm_state building and task_state spawning.
Oct 07 14:16:35 compute-0 NetworkManager[44949]: <info>  [1759846595.1654] device (tap1b8e1852-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:16:35 compute-0 NetworkManager[44949]: <info>  [1759846595.1694] device (tap1b8e1852-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.169 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6760f675-852e-4ace-b04f-3a3d9c1bfc05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.185 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updated VIF entry in instance network info cache for port ec2bde76-e053-498c-9d73-ef340b6cfe82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.186 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.203 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11c8642d-cb4c-4692-82a1-9c99fb5def66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.207 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7257dd9d-4337-4d6d-bbe9-cc8d4bdfc7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[667545a8-8178-4135-95f8-09f830dc10ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.269 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd710a74-7b39-4656-8ae9-ff631807e7db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326046, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aec241-f037-49f3-9585-a5d685a304f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326047, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326047, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.300 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.413 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.420 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.420 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.421 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:8c:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.421 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:1e:ab:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.537 2 DEBUG nova.virt.libvirt.guest [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:35</nova:creationTime>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct 07 14:16:35 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:35 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:35 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:35 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:35 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:35 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:16:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:16:35 compute-0 nova_compute[259550]: 2025-10-07 14:16:35.600 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 213 op/s
Oct 07 14:16:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:36 compute-0 ovn_controller[151684]: 2025-10-07T14:16:36Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:ab:6d 10.100.0.9
Oct 07 14:16:36 compute-0 ovn_controller[151684]: 2025-10-07T14:16:36Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:ab:6d 10.100.0.9
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.523 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.524 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.524 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Processing event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.526 2 WARNING nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf for instance with vm_state building and task_state spawning.
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846581.5224988, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.526 2 INFO nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Stopped (Lifecycle Event)
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.528 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.532 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846596.5318658, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Resumed (Lifecycle Event)
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.535 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.540 2 INFO nova.virt.libvirt.driver [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance spawned successfully.
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.541 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:16:36 compute-0 ceph-mon[74295]: pgmap v1560: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 213 op/s
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.576 2 DEBUG nova.compute.manager [None req-52f76325-e81f-4d0c-86d2-b5abaf04d7cf - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.616 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.618 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.618 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.622 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.764 2 INFO nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 17.74 seconds to spawn the instance on the hypervisor.
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.764 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.870 2 INFO nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 19.34 seconds to build instance.
Oct 07 14:16:36 compute-0 nova_compute[259550]: 2025-10-07 14:16:36.934 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.384 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.385 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.419 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.419 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.420 2 WARNING nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.422 2 WARNING nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.424 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.427 2 INFO nova.compute.manager [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Terminating instance
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.427 2 DEBUG nova.compute.manager [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.467 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:37 compute-0 kernel: tapae1b9c2d-38 (unregistering): left promiscuous mode
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.4822] device (tapae1b9c2d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00563|binding|INFO|Releasing lport ae1b9c2d-384d-4134-8799-babeadd70605 from this chassis (sb_readonly=0)
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00564|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 down in Southbound
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00565|binding|INFO|Removing iface tapae1b9c2d-38 ovn-installed in OVS
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.506 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:1b:e2 10.100.0.12'], port_security=['fa:16:3e:09:1b:e2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a23d6956-f85a-40b1-9e54-1b32d2af191e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ae1b9c2d-384d-4134-8799-babeadd70605) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.507 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ae1b9c2d-384d-4134-8799-babeadd70605 in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.508 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.518 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.519 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.528 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[886139a8-31d2-4a00-9349-596df3c6f159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.546 2 DEBUG nova.objects.instance [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.566 2 DEBUG nova.virt.libvirt.vif [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.567 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.568 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.572 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.576 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.578 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap1b8e1852-2a from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.579 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:37 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:16:37 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 07 14:16:37 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 19.608s CPU time.
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.583 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8018e650-01fe-4548-897f-2282e78f5b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 systemd-machined[214580]: Machine qemu-69-instance-0000003c terminated.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.594 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.597 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[41aea983-aa6f-4bc7-8b60-90381cdf0f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.604 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <name>instance-0000003a</name>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:35</nova:creationTime>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='serial'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='uuid'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk' index='2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config' index='1'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:8c:cc'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='tap8718eef8-8e'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:1e:ab:6d'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='tap1b8e1852-2a'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source path='/dev/pts/4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </target>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/4'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source path='/dev/pts/4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </console>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c111,c966</label>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c111,c966</imagelabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:37 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.604 2 INFO nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the persistent domain config.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.605 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap1b8e1852-2a with device alias net1 from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.605 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:1e:ab:6d"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <target dev="tap1b8e1852-2a"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </interface>
Oct 07 14:16:37 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.614 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.616 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.617 2 INFO nova.compute.manager [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Terminating instance
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.618 2 DEBUG nova.compute.manager [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.635 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[adc029ad-69ec-4f33-92b0-494a74f109e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23739bad-f541-4fb9-939b-1f430c24b546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326059, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.671 2 INFO nova.virt.libvirt.driver [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance destroyed successfully.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.673 2 DEBUG nova.objects.instance [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.684 2 DEBUG nova.virt.libvirt.vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:01Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.684 2 DEBUG nova.network.os_vif_util [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.685 2 DEBUG nova.network.os_vif_util [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.685 2 DEBUG os_vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.686 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7d4121-3d9e-49d7-a03b-0e05378e7605]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326069, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326069, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1b9c2d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.688 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:37 compute-0 kernel: tap68a7ca31-4e (unregistering): left promiscuous mode
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.6971] device (tap68a7ca31-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00566|binding|INFO|Releasing lport 68a7ca31-4ee2-4e32-9574-3113f63090cf from this chassis (sb_readonly=0)
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00567|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf down in Southbound
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00568|binding|INFO|Removing iface tap68a7ca31-4e ovn-installed in OVS
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 kernel: tap1b8e1852-2a (unregistering): left promiscuous mode
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.729 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:be:e9 10.100.0.6'], port_security=['fa:16:3e:6f:be:e9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c6dcc3-8dd4-4e2a-af6b-6893629fd93b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68a7ca31-4ee2-4e32-9574-3113f63090cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.731 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68a7ca31-4ee2-4e32-9574-3113f63090cf in datapath b044fd19-c1bd-4478-b5ed-6feb8831fea0 unbound from our chassis
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.729 2 INFO os_vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38')
Oct 07 14:16:37 compute-0 kernel: tapec2bde76-e0 (unregistering): left promiscuous mode
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.733 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b044fd19-c1bd-4478-b5ed-6feb8831fea0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab98bc5-891b-47a6-bb95-fa2a00660631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 namespace which is not needed anymore
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.7360] device (tap1b8e1852-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.7372] device (tapec2bde76-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.768 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846597.7391524, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00569|binding|INFO|Releasing lport ec2bde76-e053-498c-9d73-ef340b6cfe82 from this chassis (sb_readonly=0)
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00570|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 down in Southbound
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00571|binding|INFO|Releasing lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from this chassis (sb_readonly=0)
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00572|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 down in Southbound
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00573|binding|INFO|Removing iface tapec2bde76-e0 ovn-installed in OVS
Oct 07 14:16:37 compute-0 ovn_controller[151684]: 2025-10-07T14:16:37Z|00574|binding|INFO|Removing iface tap1b8e1852-2a ovn-installed in OVS
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.769 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap1b8e1852-2a with device alias net1 for instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.774 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 484 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 56 KiB/s wr, 99 op/s
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.782 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '9', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:37 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 07 14:16:37 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Consumed 2.020s CPU time.
Oct 07 14:16:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.784 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:34:a2 10.100.1.60'], port_security=['fa:16:3e:09:34:a2 10.100.1.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.60/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca58653-b705-43e9-827e-3bbf7240a807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a397202a-2ca0-49d4-bf1d-28612c8843c1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec2bde76-e053-498c-9d73-ef340b6cfe82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:37 compute-0 systemd-machined[214580]: Machine qemu-72-instance-0000003e terminated.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.788 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <name>instance-0000003a</name>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:35</nova:creationTime>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <system>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='serial'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='uuid'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </system>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <os>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </os>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <features>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </features>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk' index='2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config' index='1'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </source>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:04:8c:cc'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target dev='tap8718eef8-8e'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source path='/dev/pts/4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       </target>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/4'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <source path='/dev/pts/4'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </console>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </input>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <video>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </video>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c111,c966</label>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c111,c966</imagelabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </domain>
Oct 07 14:16:37 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.791 2 INFO nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the live domain config.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.792 2 DEBUG nova.virt.libvirt.vif [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.792 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.793 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.793 2 DEBUG os_vif [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b8e1852-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.816 2 INFO os_vif [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.817 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:16:37</nova:creationTime>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct 07 14:16:37 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:16:37 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:16:37 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:16:37 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:16:37 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.8418] manager: (tap68a7ca31-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Oct 07 14:16:37 compute-0 NetworkManager[44949]: <info>  [1759846597.8562] manager: (tapec2bde76-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.880 2 INFO nova.virt.libvirt.driver [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance destroyed successfully.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.881 2 DEBUG nova.objects.instance [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'resources' on Instance uuid 4aa20e30-d71a-4765-9b3e-a72a156d2c88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:37 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:37 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:37 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [WARNING]  (325758) : Exiting Master process...
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.891 2 DEBUG nova.virt.libvirt.vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:36Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.892 2 DEBUG nova.network.os_vif_util [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.892 2 DEBUG nova.network.os_vif_util [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.893 2 DEBUG os_vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:37 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [ALERT]    (325758) : Current worker (325765) exited with code 143 (Terminated)
Oct 07 14:16:37 compute-0 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [WARNING]  (325758) : All workers exited. Exiting... (0)
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68a7ca31-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 systemd[1]: libpod-179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c.scope: Deactivated successfully.
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:37 compute-0 podman[326116]: 2025-10-07 14:16:37.902036603 +0000 UTC m=+0.056526084 container died 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:16:37 compute-0 rsyslogd[1004]: imjournal from <np0005473739:nova_compute>: begin to drop messages due to rate-limiting
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.906 2 INFO os_vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e')
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.906 2 DEBUG nova.virt.libvirt.vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:36Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.907 2 DEBUG nova.network.os_vif_util [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.907 2 DEBUG nova.network.os_vif_util [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.907 2 DEBUG os_vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec2bde76-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:37 compute-0 nova_compute[259550]: 2025-10-07 14:16:37.915 2 INFO os_vif [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0')
Oct 07 14:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7783d4fd75c24f0a72d4cfed4e7ddb762e4c7fbdb995022c9958d288ab1b9453-merged.mount: Deactivated successfully.
Oct 07 14:16:37 compute-0 podman[326116]: 2025-10-07 14:16:37.954754586 +0000 UTC m=+0.109244067 container cleanup 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:37 compute-0 systemd[1]: libpod-conmon-179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c.scope: Deactivated successfully.
Oct 07 14:16:38 compute-0 podman[326177]: 2025-10-07 14:16:38.043572892 +0000 UTC m=+0.063025576 container remove 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.054 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0b90a2-ab7e-43c9-9768-f962ecf58e82]: (4, ('Tue Oct  7 02:16:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 (179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c)\n179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c\nTue Oct  7 02:16:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 (179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c)\n179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07b2fe16-1e10-4209-b854-7e1d7c217550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.059 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb044fd19-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:38 compute-0 kernel: tapb044fd19-c0: left promiscuous mode
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.065 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f803a934-396b-48af-8812-dcd548af7ad2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a16659e7-3d8f-4519-b3e2-e2272dbc4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.097 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cebfe45-fd86-4c69-ae55-b2992f027364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0b94c0-c6e3-4c1c-8473-42444530fc0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717515, 'reachable_time': 21830, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326194, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 systemd[1]: run-netns-ovnmeta\x2db044fd19\x2dc1bd\x2d4478\x2db5ed\x2d6feb8831fea0.mount: Deactivated successfully.
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.124 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.125 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[464aeb4f-e7f7-49f4-972a-f3c4ed8701ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.127 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.129 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.146 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d75b3d1-1040-48de-84f7-729523537e5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.190 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f69a514c-4539-4fc2-b389-626f1d4a6d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.193 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[254664ce-8030-4ede-b8bc-5b99f65f98a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.225 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b050dcbc-0a1e-412d-bbed-6e706ea254d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.246 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9a4b16-5d99-462c-beed-5fcd67a65106]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326201, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac39c661-2a8b-4e0b-8b65-476eac6096f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.269 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.269 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.270 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.270 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.272 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bde76-e053-498c-9d73-ef340b6cfe82 in datapath 8ca58653-b705-43e9-827e-3bbf7240a807 unbound from our chassis
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.274 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8ca58653-b705-43e9-827e-3bbf7240a807, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.275 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[102b4bfc-67e0-4fa3-b504-b0bd044c6c74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.276 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 namespace which is not needed anymore
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.343 2 INFO nova.virt.libvirt.driver [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Deleting instance files /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e_del
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.344 2 INFO nova.virt.libvirt.driver [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Deletion of /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e_del complete
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.395 2 INFO nova.compute.manager [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.396 2 DEBUG oslo.service.loopingcall [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.397 2 DEBUG nova.compute.manager [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.397 2 DEBUG nova.network.neutron [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [WARNING]  (325927) : Exiting Master process...
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [WARNING]  (325927) : Exiting Master process...
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [ALERT]    (325927) : Current worker (325929) exited with code 143 (Terminated)
Oct 07 14:16:38 compute-0 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [WARNING]  (325927) : All workers exited. Exiting... (0)
Oct 07 14:16:38 compute-0 systemd[1]: libpod-05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753.scope: Deactivated successfully.
Oct 07 14:16:38 compute-0 podman[326220]: 2025-10-07 14:16:38.430859523 +0000 UTC m=+0.054491770 container died 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.435 2 INFO nova.virt.libvirt.driver [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Deleting instance files /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88_del
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.436 2 INFO nova.virt.libvirt.driver [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Deletion of /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88_del complete
Oct 07 14:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7b1cb4503c16a2a98e5b8ed9040d0c7346f943fe58ca2f33977494e163ea458-merged.mount: Deactivated successfully.
Oct 07 14:16:38 compute-0 podman[326220]: 2025-10-07 14:16:38.469414522 +0000 UTC m=+0.093046769 container cleanup 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:16:38 compute-0 systemd[1]: libpod-conmon-05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753.scope: Deactivated successfully.
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.491 2 INFO nova.compute.manager [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.491 2 DEBUG oslo.service.loopingcall [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.491 2 DEBUG nova.compute.manager [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.492 2 DEBUG nova.network.neutron [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:38 compute-0 podman[326247]: 2025-10-07 14:16:38.53789462 +0000 UTC m=+0.042972835 container remove 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8c93790d-5fc0-4d10-8ad8-cf6470ee7b86]: (4, ('Tue Oct  7 02:16:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 (05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753)\n05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753\nTue Oct  7 02:16:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 (05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753)\n05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee64ac-9d7c-4dad-918e-e5f1df3b7bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.548 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca58653-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:38 compute-0 kernel: tap8ca58653-b0: left promiscuous mode
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.570 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[72739561-0ecb-4330-8885-cab7cf37c335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.589 2 DEBUG nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-unplugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.589 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-unplugged-68a7ca31-4ee2-4e32-9574-3113f63090cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-unplugged-68a7ca31-4ee2-4e32-9574-3113f63090cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.590 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.591 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.591 2 DEBUG oslo_concurrency.lockutils [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.591 2 DEBUG nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:38 compute-0 nova_compute[259550]: 2025-10-07 14:16:38.591 2 WARNING nova.compute.manager [req-231bce09-56e6-45ee-bf5d-ad12a63f95bb req-68b791d6-3c16-46d5-838c-ed9c783fbe0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf for instance with vm_state active and task_state deleting.
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d68b753-22b8-453b-8dda-78f6fb45b94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac590885-76ca-41bb-9c2a-a78950002fce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.617 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8c523f0c-2abf-4ef6-a831-8b2a4a9e3a54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717627, 'reachable_time': 33328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326260, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.620 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:38.620 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0414f633-8f0c-4e5a-ae86-3a1e54529f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:38 compute-0 ceph-mon[74295]: pgmap v1561: 305 pgs: 305 active+clean; 484 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 56 KiB/s wr, 99 op/s
Oct 07 14:16:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d8ca58653\x2db705\x2d43e9\x2d827e\x2d3bbf7240a807.mount: Deactivated successfully.
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.504 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-unplugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.505 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.505 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.505 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.505 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] No waiting events found dispatching network-vif-unplugged-ae1b9c2d-384d-4134-8799-babeadd70605 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.506 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-unplugged-ae1b9c2d-384d-4134-8799-babeadd70605 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.506 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.506 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.507 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.507 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.507 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] No waiting events found dispatching network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.507 2 WARNING nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received unexpected event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 for instance with vm_state active and task_state deleting.
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.508 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.508 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.508 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.508 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.508 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 WARNING nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.509 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 WARNING nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-unplugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-unplugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.510 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-unplugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 DEBUG oslo_concurrency.lockutils [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.511 2 WARNING nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 for instance with vm_state active and task_state deleting.
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.512 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-deleted-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.512 2 INFO nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Neutron deleted interface ec2bde76-e053-498c-9d73-ef340b6cfe82; detaching it from the instance and deleting it from the info cache
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.512 2 DEBUG nova.network.neutron [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846584.5511107, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.552 2 INFO nova.compute.manager [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Stopped (Lifecycle Event)
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.555 2 DEBUG nova.compute.manager [req-2d5fd8c3-5e6d-44ab-af1a-7d06ac4491fe req-3f3a063e-6ac9-4784-86fe-1fe29a9c52e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Detach interface failed, port_id=ec2bde76-e053-498c-9d73-ef340b6cfe82, reason: Instance 4aa20e30-d71a-4765-9b3e-a72a156d2c88 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:16:39 compute-0 nova_compute[259550]: 2025-10-07 14:16:39.572 2 DEBUG nova.compute.manager [None req-f3afc775-39b6-4633-8250-91af502315d8 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 403 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 50 KiB/s wr, 145 op/s
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.019 2 DEBUG nova.network.neutron [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.035 2 INFO nova.compute.manager [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 1.64 seconds to deallocate network for instance.
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.110 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.110 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.150 2 DEBUG nova.network.neutron [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:40 compute-0 ovn_controller[151684]: 2025-10-07T14:16:40Z|00575|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct 07 14:16:40 compute-0 ovn_controller[151684]: 2025-10-07T14:16:40Z|00576|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.198 2 INFO nova.compute.manager [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 1.71 seconds to deallocate network for instance.
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.235 2 DEBUG oslo_concurrency.processutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.280 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.281 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.281 2 DEBUG nova.network.neutron [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.320 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.372 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.373 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.374 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.374 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.374 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.376 2 INFO nova.compute.manager [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Terminating instance
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.377 2 DEBUG nova.compute.manager [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:40 compute-0 kernel: tap8718eef8-8e (unregistering): left promiscuous mode
Oct 07 14:16:40 compute-0 NetworkManager[44949]: <info>  [1759846600.4357] device (tap8718eef8-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:40 compute-0 ovn_controller[151684]: 2025-10-07T14:16:40Z|00577|binding|INFO|Releasing lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 from this chassis (sb_readonly=0)
Oct 07 14:16:40 compute-0 ovn_controller[151684]: 2025-10-07T14:16:40Z|00578|binding|INFO|Setting lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 down in Southbound
Oct 07 14:16:40 compute-0 ovn_controller[151684]: 2025-10-07T14:16:40Z|00579|binding|INFO|Removing iface tap8718eef8-8e ovn-installed in OVS
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.494 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8c:cc 10.100.0.13'], port_security=['fa:16:3e:04:8c:cc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8718eef8-8e7a-42ab-8df9-b469e81779d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.495 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8718eef8-8e7a-42ab-8df9-b469e81779d9 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.497 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.517 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66e26801-d270-4fe4-a611-e60483c965d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 07 14:16:40 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Consumed 20.558s CPU time.
Oct 07 14:16:40 compute-0 systemd-machined[214580]: Machine qemu-68-instance-0000003a terminated.
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.552 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ae6897-8ae8-4792-8cf6-5d21387dcd8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.558 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[959c192b-4ae6-4961-a95e-7fd9fe481d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.602 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[82329ed5-c738-434e-93e8-0e9d5e351c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.619 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance destroyed successfully.
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.620 2 DEBUG nova.objects.instance [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.626 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d37c3535-c95a-4876-a297-ddfbffb83a38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326300, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.646 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31293624-49e0-446b-9ee0-7efdec608b80]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326305, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326305, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.649 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.656 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.656 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:40.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639558109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.701 2 DEBUG oslo_concurrency.processutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.707 2 DEBUG nova.virt.libvirt.vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.707 2 DEBUG nova.network.os_vif_util [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.708 2 DEBUG nova.network.os_vif_util [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.708 2 DEBUG os_vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8718eef8-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.713 2 DEBUG nova.compute.provider_tree [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.717 2 INFO os_vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e')
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.718 2 DEBUG nova.virt.libvirt.vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.719 2 DEBUG nova.network.os_vif_util [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.719 2 DEBUG nova.network.os_vif_util [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.720 2 DEBUG os_vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.721 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b8e1852-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.723 2 INFO os_vif [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.758 2 DEBUG nova.scheduler.client.report [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.834 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.839 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:40 compute-0 ceph-mon[74295]: pgmap v1562: 305 pgs: 305 active+clean; 403 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 50 KiB/s wr, 145 op/s
Oct 07 14:16:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/639558109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.858 2 INFO nova.scheduler.client.report [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Deleted allocations for instance a23d6956-f85a-40b1-9e54-1b32d2af191e
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.917 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:40 compute-0 nova_compute[259550]: 2025-10-07 14:16:40.953 2 DEBUG oslo_concurrency.processutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.042 2 DEBUG nova.compute.manager [req-fca469cd-5ba4-4cf9-a0d0-6ade432e824e req-c770a92c-e7ad-42c8-a4fc-7671b3a6868f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-deleted-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.139 2 INFO nova.virt.libvirt.driver [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Deleting instance files /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_del
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.140 2 INFO nova.virt.libvirt.driver [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Deletion of /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_del complete
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.198 2 INFO nova.compute.manager [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.199 2 DEBUG oslo.service.loopingcall [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.199 2 DEBUG nova.compute.manager [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.200 2 DEBUG nova.network.neutron [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.254 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.254 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.254 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440599578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.512 2 DEBUG oslo_concurrency.processutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.519 2 DEBUG nova.compute.provider_tree [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.535 2 DEBUG nova.scheduler.client.report [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.561 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.599 2 INFO nova.scheduler.client.report [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Deleted allocations for instance 4aa20e30-d71a-4765-9b3e-a72a156d2c88
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.626 2 DEBUG nova.compute.manager [req-d67408fa-5b1a-44a9-9aec-4db33d6771fb req-be8e4841-5789-428e-8547-4689902ccd0d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-deleted-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.655 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 360 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 51 KiB/s wr, 149 op/s
Oct 07 14:16:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3440599578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.853 2 INFO nova.network.neutron [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.854 2 DEBUG nova.network.neutron [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.871 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:41 compute-0 nova_compute[259550]: 2025-10-07 14:16:41.893 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.128 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.128 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.129 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.129 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.130 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.131 2 INFO nova.compute.manager [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Terminating instance
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.132 2 DEBUG nova.compute.manager [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:42 compute-0 kernel: tap41eef051-1c (unregistering): left promiscuous mode
Oct 07 14:16:42 compute-0 NetworkManager[44949]: <info>  [1759846602.1956] device (tap41eef051-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:42 compute-0 ovn_controller[151684]: 2025-10-07T14:16:42Z|00580|binding|INFO|Releasing lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e from this chassis (sb_readonly=0)
Oct 07 14:16:42 compute-0 ovn_controller[151684]: 2025-10-07T14:16:42Z|00581|binding|INFO|Setting lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e down in Southbound
Oct 07 14:16:42 compute-0 ovn_controller[151684]: 2025-10-07T14:16:42Z|00582|binding|INFO|Removing iface tap41eef051-1c ovn-installed in OVS
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.209 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:11:48 10.100.0.8'], port_security=['fa:16:3e:08:11:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b3d2cd05-012d-4189-bc6c-c40fc1f72c0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=41eef051-1c52-4c3c-9854-2ee923b4ab0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.210 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 41eef051-1c52-4c3c-9854-2ee923b4ab0e in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.211 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.228 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9a6c92-9a78-41c4-a301-6cf425854837]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.273 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9056bb21-4ccc-437b-96af-00eef8faa130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.280 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ad229f7a-a2f7-41d5-9afb-79c5efdade58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 20.505s CPU time.
Oct 07 14:16:42 compute-0 systemd-machined[214580]: Machine qemu-67-instance-0000003b terminated.
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.318 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[33566914-0285-40b6-9011-101c87ff8314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2f8aae-40fa-4e7c-895b-99c3a37db717]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326361, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.361 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[50d2cc93-4c16-45e1-98d0-2a66bfa66923]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326363, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326363, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.363 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.372 2 INFO nova.virt.libvirt.driver [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance destroyed successfully.
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.372 2 DEBUG nova.objects.instance [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid b3d2cd05-012d-4189-bc6c-c40fc1f72c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.374 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.374 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.375 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:42.375 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.387 2 DEBUG nova.virt.libvirt.vif [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-346182642',display_name='tempest-ListServerFiltersTestJSON-instance-346182642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-346182642',id=59,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-bm2choqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=b3d2cd05-012d-4189-bc6c-c40fc1f72c0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.387 2 DEBUG nova.network.os_vif_util [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.388 2 DEBUG nova.network.os_vif_util [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.388 2 DEBUG os_vif [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41eef051-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.394 2 INFO os_vif [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c')
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.675 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.689 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.689 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.690 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.829 2 INFO nova.virt.libvirt.driver [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Deleting instance files /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_del
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.829 2 INFO nova.virt.libvirt.driver [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Deletion of /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_del complete
Oct 07 14:16:42 compute-0 ceph-mon[74295]: pgmap v1563: 305 pgs: 305 active+clean; 360 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 51 KiB/s wr, 149 op/s
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.872 2 INFO nova.compute.manager [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.873 2 DEBUG oslo.service.loopingcall [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.873 2 DEBUG nova.compute.manager [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:42 compute-0 nova_compute[259550]: 2025-10-07 14:16:42.873 2 DEBUG nova.network.neutron [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:43 compute-0 podman[326394]: 2025-10-07 14:16:43.081418555 +0000 UTC m=+0.065268565 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 07 14:16:43 compute-0 podman[326395]: 2025-10-07 14:16:43.108360808 +0000 UTC m=+0.093312207 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.179 2 DEBUG nova.network.neutron [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.182 2 DEBUG nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-unplugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-unplugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-unplugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.183 2 DEBUG nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.184 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.184 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.184 2 DEBUG oslo_concurrency.lockutils [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.184 2 DEBUG nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.184 2 WARNING nova.compute.manager [req-15706176-64fc-4e29-a06c-98605dda8855 req-99f7f9d2-5715-4c69-9900-9b2f523d5bad 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 for instance with vm_state active and task_state deleting.
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.223 2 INFO nova.compute.manager [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 2.02 seconds to deallocate network for instance.
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.281 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.281 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.402 2 DEBUG oslo_concurrency.processutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.743 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-unplugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.743 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] No waiting events found dispatching network-vif-unplugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-unplugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.744 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.745 2 DEBUG oslo_concurrency.lockutils [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.745 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] No waiting events found dispatching network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.745 2 WARNING nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received unexpected event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e for instance with vm_state active and task_state deleting.
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.745 2 DEBUG nova.compute.manager [req-15ad1f1b-2156-4859-8ed3-a00e2169d29e req-e9c1d56f-f1b0-4ab5-a984-5ae2284feb77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-deleted-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 360 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 51 KiB/s wr, 149 op/s
Oct 07 14:16:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982341355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.851 2 DEBUG oslo_concurrency.processutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.858 2 DEBUG nova.compute.provider_tree [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3982341355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.889 2 DEBUG nova.scheduler.client.report [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.936 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:43 compute-0 nova_compute[259550]: 2025-10-07 14:16:43.958 2 INFO nova.scheduler.client.report [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.015 2 DEBUG oslo_concurrency.lockutils [None req-05e80ff6-9db6-4c5d-a2d9-ca0f34e0085d eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.134 2 DEBUG nova.network.neutron [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.150 2 INFO nova.compute.manager [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 1.28 seconds to deallocate network for instance.
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.197 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.199 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.276 2 DEBUG oslo_concurrency.processutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670889452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.730 2 DEBUG oslo_concurrency.processutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.737 2 DEBUG nova.compute.provider_tree [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.785 2 DEBUG nova.scheduler.client.report [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:44 compute-0 ceph-mon[74295]: pgmap v1564: 305 pgs: 305 active+clean; 360 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 51 KiB/s wr, 149 op/s
Oct 07 14:16:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2670889452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:44 compute-0 nova_compute[259550]: 2025-10-07 14:16:44.969 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.005 2 INFO nova.scheduler.client.report [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Deleted allocations for instance b3d2cd05-012d-4189-bc6c-c40fc1f72c0f
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.120 2 DEBUG oslo_concurrency.lockutils [None req-43331ac8-a59c-4730-9282-2d82963add7e d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.165 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.166 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.166 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.166 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.166 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.168 2 INFO nova.compute.manager [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Terminating instance
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.168 2 DEBUG nova.compute.manager [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:45 compute-0 kernel: tap62690261-dd (unregistering): left promiscuous mode
Oct 07 14:16:45 compute-0 NetworkManager[44949]: <info>  [1759846605.2214] device (tap62690261-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00583|binding|INFO|Releasing lport 62690261-dde3-43ca-929a-e6b75a76bafb from this chassis (sb_readonly=0)
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00584|binding|INFO|Setting lport 62690261-dde3-43ca-929a-e6b75a76bafb down in Southbound
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00585|binding|INFO|Removing iface tap62690261-dd ovn-installed in OVS
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 07 14:16:45 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Consumed 15.897s CPU time.
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.330 2 DEBUG nova.compute.manager [req-fa829eb7-0964-42f1-bdd2-423e35c7d3ea req-114e2dfb-b016-438c-a212-1b946e279384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-deleted-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.331 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:aa:77 10.100.0.3'], port_security=['fa:16:3e:a5:aa:77 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=62690261-dde3-43ca-929a-e6b75a76bafb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:45 compute-0 systemd-machined[214580]: Machine qemu-63-instance-00000037 terminated.
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.332 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 62690261-dde3-43ca-929a-e6b75a76bafb in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.333 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1d9f332-f920-4d6e-8e91-dd13ec334d51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.334 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[170536cf-860a-45a5-b7b4-eaba49a6800a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.335 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace which is not needed anymore
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.405 2 INFO nova.virt.libvirt.driver [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance destroyed successfully.
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.407 2 DEBUG nova.objects.instance [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [WARNING]  (321183) : Exiting Master process...
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [WARNING]  (321183) : Exiting Master process...
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [ALERT]    (321183) : Current worker (321185) exited with code 143 (Terminated)
Oct 07 14:16:45 compute-0 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [WARNING]  (321183) : All workers exited. Exiting... (0)
Oct 07 14:16:45 compute-0 systemd[1]: libpod-1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4.scope: Deactivated successfully.
Oct 07 14:16:45 compute-0 podman[326513]: 2025-10-07 14:16:45.485866084 +0000 UTC m=+0.047728212 container died 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.494 2 DEBUG nova.virt.libvirt.vif [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.495 2 DEBUG nova.network.os_vif_util [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.496 2 DEBUG nova.network.os_vif_util [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.496 2 DEBUG os_vif [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62690261-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.506 2 INFO os_vif [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd')
Oct 07 14:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4be1fcb889aa3b4e4dd790a95fe0086d7925bd0a3e445f3c7ef44dfbecbd0e8a-merged.mount: Deactivated successfully.
Oct 07 14:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:45 compute-0 podman[326513]: 2025-10-07 14:16:45.527284958 +0000 UTC m=+0.089147096 container cleanup 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:16:45 compute-0 systemd[1]: libpod-conmon-1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4.scope: Deactivated successfully.
Oct 07 14:16:45 compute-0 podman[326559]: 2025-10-07 14:16:45.599114376 +0000 UTC m=+0.045669248 container remove 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[783b8a9e-2a6a-4287-82bd-5c1175d637cc]: (4, ('Tue Oct  7 02:16:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4)\n1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4\nTue Oct  7 02:16:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4)\n1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d82bbd42-f885-4a76-9703-8c0a68925aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.608 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 kernel: tapb1d9f332-f0: left promiscuous mode
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.615 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[536ce6e1-1c64-46f0-8ba4-3cc98cb3580f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.642 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca0f5a-ebc0-4ea2-ad24-0c93787fd274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78dca740-733d-478e-bef8-6ba4f24de5a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.660 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df67bc63-39a1-4b58-86fe-3b9778615df8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711890, 'reachable_time': 19872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326577, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 systemd[1]: run-netns-ovnmeta\x2db1d9f332\x2df920\x2d4d6e\x2d8e91\x2ddd13ec334d51.mount: Deactivated successfully.
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.663 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.664 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[23bec61f-0fc9-492a-8d44-dbfd1263404f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.758 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.758 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.758 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.759 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.759 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.760 2 INFO nova.compute.manager [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Terminating instance
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.761 2 DEBUG nova.compute.manager [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:16:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 276 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 53 KiB/s wr, 189 op/s
Oct 07 14:16:45 compute-0 kernel: tap0b66f2d4-e0 (unregistering): left promiscuous mode
Oct 07 14:16:45 compute-0 NetworkManager[44949]: <info>  [1759846605.8311] device (tap0b66f2d4-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00586|binding|INFO|Releasing lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc from this chassis (sb_readonly=0)
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00587|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc down in Southbound
Oct 07 14:16:45 compute-0 ovn_controller[151684]: 2025-10-07T14:16:45Z|00588|binding|INFO|Removing iface tap0b66f2d4-e0 ovn-installed in OVS
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.855 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.856 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.857 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fd643de-a9bb-4c41-8437-fb901dfd8879, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4588b3-985a-41a0-808f-bc5c2c46577f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:45.859 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 namespace which is not needed anymore
Oct 07 14:16:45 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 07 14:16:45 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000039.scope: Consumed 13.738s CPU time.
Oct 07 14:16:45 compute-0 systemd-machined[214580]: Machine qemu-71-instance-00000039 terminated.
Oct 07 14:16:45 compute-0 NetworkManager[44949]: <info>  [1759846605.9827] manager: (tap0b66f2d4-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Oct 07 14:16:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.996 2 INFO nova.virt.libvirt.driver [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Deleting instance files /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_del
Oct 07 14:16:45 compute-0 nova_compute[259550]: 2025-10-07 14:16:45.997 2 INFO nova.virt.libvirt.driver [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Deletion of /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_del complete
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.005 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance destroyed successfully.
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.005 2 DEBUG nova.objects.instance [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:16:46 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : haproxy version is 2.8.14-c23fe91
Oct 07 14:16:46 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : path to executable is /usr/sbin/haproxy
Oct 07 14:16:46 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [WARNING]  (322654) : Exiting Master process...
Oct 07 14:16:46 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [ALERT]    (322654) : Current worker (322659) exited with code 143 (Terminated)
Oct 07 14:16:46 compute-0 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [WARNING]  (322654) : All workers exited. Exiting... (0)
Oct 07 14:16:46 compute-0 systemd[1]: libpod-5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f.scope: Deactivated successfully.
Oct 07 14:16:46 compute-0 podman[326599]: 2025-10-07 14:16:46.020714283 +0000 UTC m=+0.056903145 container died 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f-userdata-shm.mount: Deactivated successfully.
Oct 07 14:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-716b321e2ea49d08584ea3087661278ff343536565c536d17e2c3ecf1e952674-merged.mount: Deactivated successfully.
Oct 07 14:16:46 compute-0 podman[326599]: 2025-10-07 14:16:46.056656122 +0000 UTC m=+0.092844964 container cleanup 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:16:46 compute-0 systemd[1]: libpod-conmon-5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f.scope: Deactivated successfully.
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.078 2 DEBUG nova.compute.manager [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-unplugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.078 2 DEBUG oslo_concurrency.lockutils [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.078 2 DEBUG oslo_concurrency.lockutils [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.079 2 DEBUG oslo_concurrency.lockutils [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.079 2 DEBUG nova.compute.manager [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-unplugged-62690261-dde3-43ca-929a-e6b75a76bafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.079 2 DEBUG nova.compute.manager [req-2add700b-f758-4495-bd91-c4055fec7617 req-3d883ef1-33c8-4a07-a4bc-69706968c179 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-unplugged-62690261-dde3-43ca-929a-e6b75a76bafb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:46 compute-0 podman[326639]: 2025-10-07 14:16:46.113993937 +0000 UTC m=+0.037091361 container remove 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.119 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f12a4a-25a3-483e-97b8-d692b3d3aaa4]: (4, ('Tue Oct  7 02:16:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 (5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f)\n5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f\nTue Oct  7 02:16:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 (5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f)\n5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aee9f33d-5e85-4c45-908c-49e80ad2ea00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.122 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:46 compute-0 kernel: tap4fd643de-a0: left promiscuous mode
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cf4000-9d1f-4dee-9682-b7c71964a92b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.171 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2822ee-9ab7-4bc9-84dd-0bf999e678aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.173 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85780b5b-304b-4dab-ab76-2eaba04addb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81256d34-8b25-43a7-b2e3-cd373f52b0d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713294, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326658, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.192 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:16:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:16:46.192 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[77416208-e6f4-494f-925e-8a44647669cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.421 2 DEBUG nova.virt.libvirt.vif [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:21Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.421 2 DEBUG nova.network.os_vif_util [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.422 2 DEBUG nova.network.os_vif_util [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.423 2 DEBUG os_vif [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b66f2d4-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.428 2 INFO os_vif [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')
Oct 07 14:16:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d4fd643de\x2da9bb\x2d4c41\x2d8437\x2dfb901dfd8879.mount: Deactivated successfully.
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.590 2 INFO nova.compute.manager [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 1.42 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.590 2 DEBUG oslo.service.loopingcall [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.591 2 DEBUG nova.compute.manager [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.591 2 DEBUG nova.network.neutron [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.856 2 INFO nova.virt.libvirt.driver [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Deleting instance files /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338_del
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.857 2 INFO nova.virt.libvirt.driver [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Deletion of /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338_del complete
Oct 07 14:16:46 compute-0 ceph-mon[74295]: pgmap v1565: 305 pgs: 305 active+clean; 276 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 53 KiB/s wr, 189 op/s
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.930 2 INFO nova.compute.manager [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 1.17 seconds to destroy the instance on the hypervisor.
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.931 2 DEBUG oslo.service.loopingcall [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.931 2 DEBUG nova.compute.manager [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:16:46 compute-0 nova_compute[259550]: 2025-10-07 14:16:46.932 2 DEBUG nova.network.neutron [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.519 2 DEBUG nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.519 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.519 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.520 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.520 2 DEBUG nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.520 2 DEBUG nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.520 2 DEBUG nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.521 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.521 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.521 2 DEBUG oslo_concurrency.lockutils [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.522 2 DEBUG nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.522 2 WARNING nova.compute.manager [req-26e703ba-1dcc-4a85-b597-bb722472e46d req-d55350e4-5c7a-496d-a451-836b3276ede4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state deleting.
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.619 2 DEBUG nova.network.neutron [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.682 2 INFO nova.compute.manager [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 1.09 seconds to deallocate network for instance.
Oct 07 14:16:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 KiB/s wr, 161 op/s
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.913 2 DEBUG nova.network.neutron [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.959 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.959 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:47 compute-0 nova_compute[259550]: 2025-10-07 14:16:47.995 2 INFO nova.compute.manager [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 1.06 seconds to deallocate network for instance.
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.116 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:48 compute-0 ceph-mon[74295]: pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 KiB/s wr, 161 op/s
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.933 2 DEBUG nova.compute.manager [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.933 2 DEBUG oslo_concurrency.lockutils [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.934 2 DEBUG oslo_concurrency.lockutils [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.934 2 DEBUG oslo_concurrency.lockutils [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.934 2 DEBUG nova.compute.manager [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:16:48 compute-0 nova_compute[259550]: 2025-10-07 14:16:48.934 2 WARNING nova.compute.manager [req-01ca273e-a918-4894-8ead-e5030268d40c req-6b8b876d-0cfc-4a5e-8a4e-8bb29e9fa430 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb for instance with vm_state deleted and task_state None.
Oct 07 14:16:49 compute-0 nova_compute[259550]: 2025-10-07 14:16:49.700 2 DEBUG oslo_concurrency.processutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 60 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 201 op/s
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.083 2 DEBUG nova.compute.manager [req-5529b517-6a7e-4b02-81f3-0e20e1b176b8 req-67c1038a-da83-436a-bd15-41236c3ca052 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-deleted-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.083 2 DEBUG nova.compute.manager [req-5529b517-6a7e-4b02-81f3-0e20e1b176b8 req-67c1038a-da83-436a-bd15-41236c3ca052 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-deleted-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:16:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200126432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.162 2 DEBUG oslo_concurrency.processutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.170 2 DEBUG nova.compute.provider_tree [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.344 2 DEBUG nova.scheduler.client.report [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.416 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.420 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.485 2 DEBUG oslo_concurrency.processutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.524 2 INFO nova.scheduler.client.report [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.793 2 DEBUG oslo_concurrency.lockutils [None req-50d0c45e-a389-42c9-a82b-7b3e371a6140 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:16:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304423272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:50 compute-0 ceph-mon[74295]: pgmap v1567: 305 pgs: 305 active+clean; 60 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 201 op/s
Oct 07 14:16:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3200126432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.929 2 DEBUG oslo_concurrency.processutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.935 2 DEBUG nova.compute.provider_tree [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:16:50 compute-0 nova_compute[259550]: 2025-10-07 14:16:50.966 2 DEBUG nova.scheduler.client.report [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:16:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.012 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.074 2 INFO nova.scheduler.client.report [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Deleted allocations for instance cfd30417-ee01-41d3-8a93-e49cd960d338
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.215 2 DEBUG oslo_concurrency.lockutils [None req-1f150074-91ae-4efc-acd7-af3a1bdf2254 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 11 KiB/s wr, 138 op/s
Oct 07 14:16:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2304423272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.927496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846611927536, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1993, "num_deletes": 251, "total_data_size": 3042239, "memory_usage": 3094336, "flush_reason": "Manual Compaction"}
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 07 14:16:51 compute-0 nova_compute[259550]: 2025-10-07 14:16:51.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846611943723, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2966095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30884, "largest_seqno": 32876, "table_properties": {"data_size": 2957352, "index_size": 5301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19122, "raw_average_key_size": 20, "raw_value_size": 2939413, "raw_average_value_size": 3123, "num_data_blocks": 235, "num_entries": 941, "num_filter_entries": 941, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846421, "oldest_key_time": 1759846421, "file_creation_time": 1759846611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 16357 microseconds, and 6363 cpu microseconds.
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.943848) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2966095 bytes OK
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.943902) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.945846) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.945863) EVENT_LOG_v1 {"time_micros": 1759846611945857, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.945883) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3033749, prev total WAL file size 3033749, number of live WAL files 2.
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.947073) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2896KB)], [68(7067KB)]
Oct 07 14:16:51 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846611947174, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10203086, "oldest_snapshot_seqno": -1}
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5749 keys, 8520110 bytes, temperature: kUnknown
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846612003475, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8520110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8481262, "index_size": 23378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144417, "raw_average_key_size": 25, "raw_value_size": 8377624, "raw_average_value_size": 1457, "num_data_blocks": 950, "num_entries": 5749, "num_filter_entries": 5749, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.003721) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8520110 bytes
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.005067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 151.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 6.9 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 6263, records dropped: 514 output_compression: NoCompression
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.005083) EVENT_LOG_v1 {"time_micros": 1759846612005075, "job": 38, "event": "compaction_finished", "compaction_time_micros": 56361, "compaction_time_cpu_micros": 27213, "output_level": 6, "num_output_files": 1, "total_output_size": 8520110, "num_input_records": 6263, "num_output_records": 5749, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846612005744, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846612007028, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:51.946950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.007082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.007086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.007087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.007088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:16:52.007089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:16:52 compute-0 nova_compute[259550]: 2025-10-07 14:16:52.667 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846597.6667318, a23d6956-f85a-40b1-9e54-1b32d2af191e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:52 compute-0 nova_compute[259550]: 2025-10-07 14:16:52.668 2 INFO nova.compute.manager [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Stopped (Lifecycle Event)
Oct 07 14:16:52 compute-0 nova_compute[259550]: 2025-10-07 14:16:52.721 2 DEBUG nova.compute.manager [None req-6571892b-71e7-462b-9ac5-13df47d3a77b - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:52 compute-0 nova_compute[259550]: 2025-10-07 14:16:52.875 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846597.8724732, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:52 compute-0 nova_compute[259550]: 2025-10-07 14:16:52.875 2 INFO nova.compute.manager [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Stopped (Lifecycle Event)
Oct 07 14:16:52 compute-0 ceph-mon[74295]: pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 11 KiB/s wr, 138 op/s
Oct 07 14:16:53 compute-0 nova_compute[259550]: 2025-10-07 14:16:53.757 2 DEBUG nova.compute.manager [None req-1692a337-9e00-42d0-b6cf-e7a909eb9202 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 110 op/s
Oct 07 14:16:54 compute-0 ceph-mon[74295]: pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 110 op/s
Oct 07 14:16:55 compute-0 nova_compute[259550]: 2025-10-07 14:16:55.617 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846600.6156762, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:55 compute-0 nova_compute[259550]: 2025-10-07 14:16:55.618 2 INFO nova.compute.manager [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Stopped (Lifecycle Event)
Oct 07 14:16:55 compute-0 nova_compute[259550]: 2025-10-07 14:16:55.773 2 DEBUG nova.compute.manager [None req-63191097-fa7f-46ba-8271-4b979c637f48 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 110 op/s
Oct 07 14:16:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:16:56 compute-0 nova_compute[259550]: 2025-10-07 14:16:56.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:56 compute-0 nova_compute[259550]: 2025-10-07 14:16:56.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:16:56 compute-0 ceph-mon[74295]: pgmap v1570: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 110 op/s
Oct 07 14:16:57 compute-0 nova_compute[259550]: 2025-10-07 14:16:57.369 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846602.3674726, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:16:57 compute-0 nova_compute[259550]: 2025-10-07 14:16:57.370 2 INFO nova.compute.manager [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Stopped (Lifecycle Event)
Oct 07 14:16:57 compute-0 nova_compute[259550]: 2025-10-07 14:16:57.562 2 DEBUG nova.compute.manager [None req-9f83fd97-fa10-418c-bbdb-8ff655452f59 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:16:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 6.7 KiB/s wr, 71 op/s
Oct 07 14:16:58 compute-0 ceph-mon[74295]: pgmap v1571: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 6.7 KiB/s wr, 71 op/s
Oct 07 14:16:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 53 op/s
Oct 07 14:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:00.048 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:00.049 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:00.049 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.404 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846605.403191, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.405 2 INFO nova.compute.manager [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Stopped (Lifecycle Event)
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.475 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.475 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.713 2 DEBUG nova.compute.manager [None req-bb35a34b-a573-4d60-9593-983b9ccbae6d - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:00 compute-0 ceph-mon[74295]: pgmap v1572: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 53 op/s
Oct 07 14:17:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:00 compute-0 nova_compute[259550]: 2025-10-07 14:17:00.991 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.003 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846606.0008795, cfd30417-ee01-41d3-8a93-e49cd960d338 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.004 2 INFO nova.compute.manager [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Stopped (Lifecycle Event)
Oct 07 14:17:01 compute-0 podman[326725]: 2025-10-07 14:17:01.080335529 +0000 UTC m=+0.064097155 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid)
Oct 07 14:17:01 compute-0 podman[326724]: 2025-10-07 14:17:01.094547674 +0000 UTC m=+0.070271208 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.240 2 DEBUG nova.compute.manager [None req-b12c15ba-2234-4a52-a3b0-a9aa87888258 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.711 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.711 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.720 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:17:01 compute-0 nova_compute[259550]: 2025-10-07 14:17:01.721 2 INFO nova.compute.claims [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:17:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:17:02 compute-0 ceph-mon[74295]: pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:17:03 compute-0 nova_compute[259550]: 2025-10-07 14:17:03.250 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796608021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:03 compute-0 nova_compute[259550]: 2025-10-07 14:17:03.724 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:03 compute-0 nova_compute[259550]: 2025-10-07 14:17:03.731 2 DEBUG nova.compute.provider_tree [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:17:03 compute-0 nova_compute[259550]: 2025-10-07 14:17:03.960 2 DEBUG nova.scheduler.client.report [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3796608021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.322 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.323 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.590 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.591 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.752 2 INFO nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:17:04 compute-0 nova_compute[259550]: 2025-10-07 14:17:04.837 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:17:04 compute-0 ceph-mon[74295]: pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.450 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.451 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.452 2 INFO nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Creating image(s)
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.476 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.502 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.531 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.535 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.577 2 DEBUG nova.policy [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7166dede3fa7455fb8ac0840d97d0be8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.628 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.629 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.630 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.631 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.658 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.662 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:17:05 compute-0 nova_compute[259550]: 2025-10-07 14:17:05.984 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:06 compute-0 ceph-mon[74295]: pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 542 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.054 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] resizing rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.190 2 DEBUG nova.objects.instance [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'migration_context' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.317 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.317 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Ensure instance console log exists: /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.318 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.318 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.318 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:06 compute-0 nova_compute[259550]: 2025-10-07 14:17:06.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:07.307 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:07 compute-0 nova_compute[259550]: 2025-10-07 14:17:07.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:07.309 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:17:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 45 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 26 KiB/s wr, 10 op/s
Oct 07 14:17:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:08.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:08 compute-0 nova_compute[259550]: 2025-10-07 14:17:08.793 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Successfully created port: b2d2caee-177d-4ad4-98ba-7dd1d95e296b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:08 compute-0 ceph-mon[74295]: pgmap v1576: 305 pgs: 305 active+clean; 45 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 26 KiB/s wr, 10 op/s
Oct 07 14:17:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 80 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.440 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Successfully updated port: b2d2caee-177d-4ad4-98ba-7dd1d95e296b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.461 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.461 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquired lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.461 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.651 2 DEBUG nova.compute.manager [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-changed-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.651 2 DEBUG nova.compute.manager [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Refreshing instance network info cache due to event network-changed-b2d2caee-177d-4ad4-98ba-7dd1d95e296b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.651 2 DEBUG oslo_concurrency.lockutils [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:10 compute-0 nova_compute[259550]: 2025-10-07 14:17:10.762 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:17:10 compute-0 ceph-mon[74295]: pgmap v1577: 305 pgs: 305 active+clean; 80 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Oct 07 14:17:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:11 compute-0 nova_compute[259550]: 2025-10-07 14:17:11.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:11 compute-0 nova_compute[259550]: 2025-10-07 14:17:11.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.314 2 DEBUG nova.network.neutron [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.340 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Releasing lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.340 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance network_info: |[{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.341 2 DEBUG oslo_concurrency.lockutils [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.342 2 DEBUG nova.network.neutron [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Refreshing network info cache for port b2d2caee-177d-4ad4-98ba-7dd1d95e296b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.346 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start _get_guest_xml network_info=[{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.352 2 WARNING nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.358 2 DEBUG nova.virt.libvirt.host [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.359 2 DEBUG nova.virt.libvirt.host [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.362 2 DEBUG nova.virt.libvirt.host [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.363 2 DEBUG nova.virt.libvirt.host [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.363 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.364 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.364 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.365 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.365 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.365 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.366 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.366 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.366 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.367 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.367 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.367 2 DEBUG nova.virt.hardware [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.371 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272366595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.914 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.941 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:12 compute-0 nova_compute[259550]: 2025-10-07 14:17:12.947 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:12 compute-0 ceph-mon[74295]: pgmap v1578: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2272366595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406390584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.420 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.423 2 DEBUG nova.virt.libvirt.vif [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2023954398',display_name='tempest-ServerRescueTestJSON-server-2023954398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2023954398',id=63,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-lmffjsq5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:05Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=85cd6b5c-f0f7-49fa-a999-64818baf3648,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.423 2 DEBUG nova.network.os_vif_util [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.424 2 DEBUG nova.network.os_vif_util [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.425 2 DEBUG nova.objects.instance [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.570 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <uuid>85cd6b5c-f0f7-49fa-a999-64818baf3648</uuid>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <name>instance-0000003f</name>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSON-server-2023954398</nova:name>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:12</nova:creationTime>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:user uuid="7166dede3fa7455fb8ac0840d97d0be8">tempest-ServerRescueTestJSON-1604630325-project-member</nova:user>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:project uuid="d8dca3ec607447dd8f2e6dd1c0714628">tempest-ServerRescueTestJSON-1604630325</nova:project>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <nova:port uuid="b2d2caee-177d-4ad4-98ba-7dd1d95e296b">
Oct 07 14:17:13 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="serial">85cd6b5c-f0f7-49fa-a999-64818baf3648</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="uuid">85cd6b5c-f0f7-49fa-a999-64818baf3648</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/85cd6b5c-f0f7-49fa-a999-64818baf3648_disk">
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config">
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:32:3d:56"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <target dev="tapb2d2caee-17"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/console.log" append="off"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:13 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:13 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:13 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:13 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:13 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.572 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Preparing to wait for external event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.573 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.574 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.574 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.575 2 DEBUG nova.virt.libvirt.vif [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2023954398',display_name='tempest-ServerRescueTestJSON-server-2023954398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2023954398',id=63,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-lmffjsq5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:05Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=85cd6b5c-f0f7-49fa-a999-64818baf3648,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.575 2 DEBUG nova.network.os_vif_util [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.576 2 DEBUG nova.network.os_vif_util [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.577 2 DEBUG os_vif [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2d2caee-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2d2caee-17, col_values=(('external_ids', {'iface-id': 'b2d2caee-177d-4ad4-98ba-7dd1d95e296b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:3d:56', 'vm-uuid': '85cd6b5c-f0f7-49fa-a999-64818baf3648'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:13 compute-0 NetworkManager[44949]: <info>  [1759846633.5912] manager: (tapb2d2caee-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.602 2 INFO os_vif [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17')
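[annotation] The plug sequence above runs an OVSDB transaction (AddBridgeCommand, then AddPortCommand plus DbSetCommand) that attaches tapb2d2caee-17 to br-int and stamps the Interface row with external_ids; the iface-id key is what later lets ovn-controller tie this OVS port back to the Neutron port. A minimal sketch of how those external_ids are assembled from the VIF — `build_external_ids` is a hypothetical helper, not the real os-vif API, but the keys and values mirror the DbSetCommand logged at 14:17:13.587:

```python
def build_external_ids(vif_id, mac, instance_uuid):
    """Reconstruct the external_ids written by the DbSetCommand above.

    'iface-id' must equal the Neutron port UUID: ovn-controller matches it
    against Port_Binding.logical_port in the OVN Southbound DB.
    """
    return {
        "iface-id": vif_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": instance_uuid,
    }

# Values taken from the log entries above.
ids = build_external_ids(
    "b2d2caee-177d-4ad4-98ba-7dd1d95e296b",
    "fa:16:3e:32:3d:56",
    "85cd6b5c-f0f7-49fa-a999-64818baf3648",
)
```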
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.692 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.692 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.692 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No VIF found with MAC fa:16:3e:32:3d:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.693 2 INFO nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Using config drive
Oct 07 14:17:13 compute-0 nova_compute[259550]: 2025-10-07 14:17:13.718 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:13 compute-0 podman[327015]: 2025-10-07 14:17:13.727869535 +0000 UTC m=+0.088771516 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 07 14:17:13 compute-0 podman[327016]: 2025-10-07 14:17:13.738787633 +0000 UTC m=+0.093752658 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:17:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1406390584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:14 compute-0 ceph-mon[74295]: pgmap v1579: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.164 2 INFO nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Creating config drive at /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.169 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw79td967 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.313 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw79td967" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.342 2 DEBUG nova.storage.rbd_utils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.346 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.910 2 DEBUG nova.network.neutron [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updated VIF entry in instance network info cache for port b2d2caee-177d-4ad4-98ba-7dd1d95e296b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.911 2 DEBUG nova.network.neutron [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:14 compute-0 nova_compute[259550]: 2025-10-07 14:17:14.930 2 DEBUG oslo_concurrency.lockutils [req-4de22641-91ca-4203-b7d8-26355b7edab2 req-6212d516-1992-492a-a3ba-b25a57f7cf18 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:15 compute-0 nova_compute[259550]: 2025-10-07 14:17:15.490 2 DEBUG oslo_concurrency.processutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:15 compute-0 nova_compute[259550]: 2025-10-07 14:17:15.491 2 INFO nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Deleting local config drive /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config because it was imported into RBD.
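[annotation] The config-drive flow above is: build an ISO9660 image labeled config-2 with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config (since this deployment uses RBD-backed ephemeral storage), then delete the local copy. A sketch that reconstructs the two argv lists from the log — the metadata directory path is hypothetical (the log used a throwaway /tmp/tmpw79td967), and the real command is assembled inside nova's configdrive/imagebackend code, not by a helper like this:

```python
def config_drive_cmds(instance_uuid, metadata_dir="/tmp/metadata-dir",
                      pool="vms", ceph_user="openstack"):
    """Rebuild the mkisofs and rbd-import commands seen in the log.

    Returns (mkisofs_argv, rbd_import_argv). Paths under
    /var/lib/nova/instances and the config-2 volume label match the log.
    """
    iso = "/var/lib/nova/instances/%s/disk.config" % instance_uuid
    mkisofs = ["/usr/bin/mkisofs", "-o", iso,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-J", "-r", "-V", "config-2", metadata_dir]
    rbd = ["rbd", "import", "--pool", pool, iso,
           "%s_disk.config" % instance_uuid, "--image-format=2",
           "--id", ceph_user, "--conf", "/etc/ceph/ceph.conf"]
    return mkisofs, rbd
```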
Oct 07 14:17:15 compute-0 kernel: tapb2d2caee-17: entered promiscuous mode
Oct 07 14:17:15 compute-0 NetworkManager[44949]: <info>  [1759846635.5558] manager: (tapb2d2caee-17): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct 07 14:17:15 compute-0 ovn_controller[151684]: 2025-10-07T14:17:15Z|00589|binding|INFO|Claiming lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b for this chassis.
Oct 07 14:17:15 compute-0 ovn_controller[151684]: 2025-10-07T14:17:15Z|00590|binding|INFO|b2d2caee-177d-4ad4-98ba-7dd1d95e296b: Claiming fa:16:3e:32:3d:56 10.100.0.5
Oct 07 14:17:15 compute-0 nova_compute[259550]: 2025-10-07 14:17:15.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:15 compute-0 systemd-udevd[327133]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:15.600 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:15.601 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:17:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:15.602 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:17:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:15.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5b9107-ba8d-467f-9a07-0d319137fa4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
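[annotation] When the port binds to this chassis, the metadata agent decides whether to provision a metadata namespace for the network; here _get_provision_params finds no metadata port with a MAC and IP, so it tears the namespace down instead. The guard reduces to roughly the following — `should_provision` and the dict shape are an illustrative simplification, not neutron's actual object model:

```python
def should_provision(metadata_port):
    """Provision the metadata namespace only when the network has a
    metadata port that carries both a MAC address and at least one IP;
    otherwise tear the namespace down (the branch taken in the log)."""
    return bool(metadata_port
                and metadata_port.get("mac")
                and metadata_port.get("ip_addresses"))
```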
Oct 07 14:17:15 compute-0 systemd-machined[214580]: New machine qemu-73-instance-0000003f.
Oct 07 14:17:15 compute-0 NetworkManager[44949]: <info>  [1759846635.6152] device (tapb2d2caee-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:15 compute-0 NetworkManager[44949]: <info>  [1759846635.6164] device (tapb2d2caee-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:15 compute-0 nova_compute[259550]: 2025-10-07 14:17:15.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:15 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000003f.
Oct 07 14:17:15 compute-0 ovn_controller[151684]: 2025-10-07T14:17:15Z|00591|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b ovn-installed in OVS
Oct 07 14:17:15 compute-0 ovn_controller[151684]: 2025-10-07T14:17:15Z|00592|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b up in Southbound
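[annotation] The two binding lines above show ovn-controller claiming the lport: it sees a local OVS Interface whose external_ids:iface-id equals a Port_Binding's logical_port, and the binding's requested-chassis names this host, so it sets ovn-installed in OVS and up=true in the Southbound DB. A toy matcher under those assumptions (the real logic lives in ovn-controller's binding.c and handles many more cases):

```python
def claimable_ports(interfaces, bindings, chassis):
    """Pair local OVS interfaces with Port_Bindings destined for this chassis.

    interfaces: {iface_name: external_ids['iface-id']} from the OVS Interface table
    bindings:   {logical_port: requested_chassis} from the SB Port_Binding table
    Returns the logical ports this chassis should claim.
    """
    local_ids = set(interfaces.values())
    return [lport for lport, requested in bindings.items()
            if lport in local_ids and requested == chassis]
```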
Oct 07 14:17:15 compute-0 nova_compute[259550]: 2025-10-07 14:17:15.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.091 2 DEBUG nova.compute.manager [req-3fabc5bc-cbd6-417f-81d5-cd01a5e76b4b req-5146cb37-4e47-477e-89d7-13f5ccd5a410 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.092 2 DEBUG oslo_concurrency.lockutils [req-3fabc5bc-cbd6-417f-81d5-cd01a5e76b4b req-5146cb37-4e47-477e-89d7-13f5ccd5a410 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.092 2 DEBUG oslo_concurrency.lockutils [req-3fabc5bc-cbd6-417f-81d5-cd01a5e76b4b req-5146cb37-4e47-477e-89d7-13f5ccd5a410 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.092 2 DEBUG oslo_concurrency.lockutils [req-3fabc5bc-cbd6-417f-81d5-cd01a5e76b4b req-5146cb37-4e47-477e-89d7-13f5ccd5a410 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.092 2 DEBUG nova.compute.manager [req-3fabc5bc-cbd6-417f-81d5-cd01a5e76b4b req-5146cb37-4e47-477e-89d7-13f5ccd5a410 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Processing event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.517 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.518 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846636.5182173, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.518 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Started (Lifecycle Event)
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.524 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.527 2 INFO nova.virt.libvirt.driver [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance spawned successfully.
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.527 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.537 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.540 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.589 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.590 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.590 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.591 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.591 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.592 2 DEBUG nova.virt.libvirt.driver [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.705 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.705 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846636.521079, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.705 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Paused (Lifecycle Event)
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.733 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.738 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846636.5229712, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.738 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Resumed (Lifecycle Event)
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.746 2 INFO nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Took 11.30 seconds to spawn the instance on the hypervisor.
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.746 2 DEBUG nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.756 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.758 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.799 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.825 2 INFO nova.compute.manager [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Took 15.47 seconds to build instance.
Oct 07 14:17:16 compute-0 nova_compute[259550]: 2025-10-07 14:17:16.843 2 DEBUG oslo_concurrency.lockutils [None req-2e0bf001-92d9-4f2f-b1f6-c5d72056e67b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:16 compute-0 ceph-mon[74295]: pgmap v1580: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:17:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.572 2 DEBUG nova.compute.manager [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.572 2 DEBUG oslo_concurrency.lockutils [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.573 2 DEBUG oslo_concurrency.lockutils [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.573 2 DEBUG oslo_concurrency.lockutils [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.573 2 DEBUG nova.compute.manager [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] No waiting events found dispatching network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.573 2 WARNING nova.compute.manager [req-0b3edfe7-a046-400a-b34c-280bbe8e2125 req-aed68017-b2c1-4401-bf56-399ebff5c8d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received unexpected event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b for instance with vm_state active and task_state None.
Oct 07 14:17:18 compute-0 nova_compute[259550]: 2025-10-07 14:17:18.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:18 compute-0 ceph-mon[74295]: pgmap v1581: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:17:19 compute-0 nova_compute[259550]: 2025-10-07 14:17:19.265 2 INFO nova.compute.manager [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Rescuing
Oct 07 14:17:19 compute-0 nova_compute[259550]: 2025-10-07 14:17:19.265 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:19 compute-0 nova_compute[259550]: 2025-10-07 14:17:19.265 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquired lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:19 compute-0 nova_compute[259550]: 2025-10-07 14:17:19.266 2 DEBUG nova.network.neutron [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 07 14:17:20 compute-0 ceph-mon[74295]: pgmap v1582: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 07 14:17:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:21 compute-0 nova_compute[259550]: 2025-10-07 14:17:21.130 2 DEBUG nova.network.neutron [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:21 compute-0 nova_compute[259550]: 2025-10-07 14:17:21.383 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Releasing lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:21 compute-0 nova_compute[259550]: 2025-10-07 14:17:21.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:21 compute-0 nova_compute[259550]: 2025-10-07 14:17:21.742 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:17:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 99 KiB/s wr, 86 op/s
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:17:22
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:17:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:17:22 compute-0 ceph-mon[74295]: pgmap v1583: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 99 KiB/s wr, 86 op/s
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.101 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.102 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.306 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.819 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.820 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.826 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.827 2 INFO nova.compute.claims [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:17:23 compute-0 nova_compute[259550]: 2025-10-07 14:17:23.986 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898733974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.455 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.462 2 DEBUG nova.compute.provider_tree [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.517 2 DEBUG nova.scheduler.client.report [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.556 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.557 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.725 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.725 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.934 2 INFO nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.948 2 DEBUG nova.policy [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63af1dd3b8c54df9a2d8488d7cfc1590', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28bfd63ece134e70ac7fe3739775042b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:24 compute-0 ceph-mon[74295]: pgmap v1584: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2898733974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:24 compute-0 nova_compute[259550]: 2025-10-07 14:17:24.981 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:17:24 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.195 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.196 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.197 2 INFO nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Creating image(s)
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.216 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.238 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.258 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.262 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.309 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.309 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.340 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.349 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.350 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.350 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.351 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.374 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.378 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.495 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.496 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.501 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.501 2 INFO nova.compute.claims [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.682 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.753 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.792 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] resizing rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:17:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.893 2 DEBUG nova.objects.instance [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lazy-loading 'migration_context' on Instance uuid dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.972 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.972 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Ensure instance console log exists: /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.973 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.973 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:25 compute-0 nova_compute[259550]: 2025-10-07 14:17:25.974 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.021 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Successfully created port: 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310727204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.204 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.210 2 DEBUG nova.compute.provider_tree [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.296 2 DEBUG nova.scheduler.client.report [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.398 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.399 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.514 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.514 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.579 2 INFO nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.624 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.703 2 DEBUG nova.policy [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4526fd07b25c4e6c9fd61f99f0451dba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9c7fa9a5146949b5a1222248ae125eff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.749 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.750 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.751 2 INFO nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Creating image(s)
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.770 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.792 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.813 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.817 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.886 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.887 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.888 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.889 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.910 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:26 compute-0 nova_compute[259550]: 2025-10-07 14:17:26.914 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96c85d5b-5885-4549-8dfd-75a4611179b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:26 compute-0 ceph-mon[74295]: pgmap v1585: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3310727204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.210 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96c85d5b-5885-4549-8dfd-75a4611179b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.278 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] resizing rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.449 2 DEBUG nova.objects.instance [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lazy-loading 'migration_context' on Instance uuid 96c85d5b-5885-4549-8dfd-75a4611179b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.647 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.648 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Ensure instance console log exists: /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.648 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.648 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.649 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.702 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Successfully updated port: 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.792 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.792 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquired lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:27 compute-0 nova_compute[259550]: 2025-10-07 14:17:27.792 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.015 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Successfully created port: 682f4b22-7214-4f0d-894b-84354797f177 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.163 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.191 2 DEBUG nova.compute.manager [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-changed-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.191 2 DEBUG nova.compute.manager [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Refreshing instance network info cache due to event network-changed-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.192 2 DEBUG oslo_concurrency.lockutils [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:28 compute-0 nova_compute[259550]: 2025-10-07 14:17:28.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:28 compute-0 ceph-mon[74295]: pgmap v1586: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.220 2 DEBUG nova.network.neutron [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.267 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Releasing lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.268 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Instance network_info: |[{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.268 2 DEBUG oslo_concurrency.lockutils [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.268 2 DEBUG nova.network.neutron [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Refreshing network info cache for port 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.272 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Start _get_guest_xml network_info=[{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.275 2 WARNING nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.279 2 DEBUG nova.virt.libvirt.host [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.279 2 DEBUG nova.virt.libvirt.host [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.288 2 DEBUG nova.virt.libvirt.host [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.288 2 DEBUG nova.virt.libvirt.host [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.289 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.289 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.289 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.290 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.290 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.290 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.290 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.291 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.291 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.291 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.291 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.291 2 DEBUG nova.virt.hardware [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.294 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.364 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Successfully updated port: 682f4b22-7214-4f0d-894b-84354797f177 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.523 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.524 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquired lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.524 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050698122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 157 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 137 op/s
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.804 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.824 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:29 compute-0 nova_compute[259550]: 2025-10-07 14:17:29.827 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1050698122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:29 compute-0 ceph-mon[74295]: pgmap v1587: 305 pgs: 305 active+clean; 157 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 137 op/s
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.116 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:17:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257378486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.262 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.264 2 DEBUG nova.virt.libvirt.vif [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:25Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.264 2 DEBUG nova.network.os_vif_util [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.265 2 DEBUG nova.network.os_vif_util [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.266 2 DEBUG nova.objects.instance [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lazy-loading 'pci_devices' on Instance uuid dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.332 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <uuid>dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e</uuid>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <name>instance-00000040</name>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:name>tempest-AttachInterfacesV270Test-server-957016237</nova:name>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:29</nova:creationTime>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:user uuid="63af1dd3b8c54df9a2d8488d7cfc1590">tempest-AttachInterfacesV270Test-1141062112-project-member</nova:user>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:project uuid="28bfd63ece134e70ac7fe3739775042b">tempest-AttachInterfacesV270Test-1141062112</nova:project>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <nova:port uuid="2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9">
Oct 07 14:17:30 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="serial">dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="uuid">dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk">
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config">
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:30 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e8:c8:d3"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <target dev="tap2fe29bbe-c4"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/console.log" append="off"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:30 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:30 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:30 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:30 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:30 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.333 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Preparing to wait for external event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.334 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.334 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.334 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.335 2 DEBUG nova.virt.libvirt.vif [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:25Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.335 2 DEBUG nova.network.os_vif_util [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.336 2 DEBUG nova.network.os_vif_util [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.336 2 DEBUG os_vif [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fe29bbe-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fe29bbe-c4, col_values=(('external_ids', {'iface-id': '2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:c8:d3', 'vm-uuid': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:30 compute-0 NetworkManager[44949]: <info>  [1759846650.3463] manager: (tap2fe29bbe-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.354 2 INFO os_vif [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4')
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.516 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.516 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.517 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No VIF found with MAC fa:16:3e:e8:c8:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.517 2 INFO nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Using config drive
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.540 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.625 2 DEBUG nova.compute.manager [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-changed-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.625 2 DEBUG nova.compute.manager [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Refreshing instance network info cache due to event network-changed-682f4b22-7214-4f0d-894b-84354797f177. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.626 2 DEBUG oslo_concurrency.lockutils [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.670 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.874 2 DEBUG nova.network.neutron [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updated VIF entry in instance network info cache for port 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.875 2 DEBUG nova.network.neutron [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:30 compute-0 nova_compute[259550]: 2025-10-07 14:17:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.000 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.001 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2257378486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.020 2 DEBUG oslo_concurrency.lockutils [req-ec37c996-19d1-4729-9aeb-e15073b27cf3 req-11f35ac8-77f6-4c03-bfa0-f09f4362aed8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.024 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.025 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.025 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.026 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.066 2 INFO nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Creating config drive at /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.072 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe5t9dyw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.108 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.127 2 DEBUG nova.network.neutron [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Updating instance_info_cache with network_info: [{"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.210 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Releasing lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.210 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Instance network_info: |[{"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.211 2 DEBUG oslo_concurrency.lockutils [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.211 2 DEBUG nova.network.neutron [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Refreshing network info cache for port 682f4b22-7214-4f0d-894b-84354797f177 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.214 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Start _get_guest_xml network_info=[{"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.215 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe5t9dyw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.239 2 DEBUG nova.storage.rbd_utils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] rbd image dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.242 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.285 2 WARNING nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.290 2 DEBUG nova.virt.libvirt.host [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.291 2 DEBUG nova.virt.libvirt.host [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.294 2 DEBUG nova.virt.libvirt.host [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.295 2 DEBUG nova.virt.libvirt.host [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.295 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.296 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.296 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.296 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.297 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.297 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.297 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.297 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.298 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.298 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.298 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.298 2 DEBUG nova.virt.hardware [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.302 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.348 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.350 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.362 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.362 2 INFO nova.compute.claims [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.410 2 DEBUG oslo_concurrency.processutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.412 2 INFO nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Deleting local config drive /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e/disk.config because it was imported into RBD.
Oct 07 14:17:31 compute-0 kernel: tap2fe29bbe-c4: entered promiscuous mode
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.4772] manager: (tap2fe29bbe-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Oct 07 14:17:31 compute-0 ovn_controller[151684]: 2025-10-07T14:17:31Z|00593|binding|INFO|Claiming lport 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 for this chassis.
Oct 07 14:17:31 compute-0 ovn_controller[151684]: 2025-10-07T14:17:31Z|00594|binding|INFO|2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9: Claiming fa:16:3e:e8:c8:d3 10.100.0.8
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921568590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:31 compute-0 systemd-machined[214580]: New machine qemu-74-instance-00000040.
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.518 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:c8:d3 10.100.0.8'], port_security=['fa:16:3e:e8:c8:d3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28bfd63ece134e70ac7fe3739775042b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56e6356f-f5af-42e7-a71b-c36b572dfc87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d812f007-77e0-49c1-a3a6-f5b9db65bb47, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.520 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 in datapath 69e0bc1b-64b0-44c5-8a8e-258e554b00ab bound to our chassis
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.522 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69e0bc1b-64b0-44c5-8a8e-258e554b00ab
Oct 07 14:17:31 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000040.
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.535 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.538 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1da09b2f-4166-4ed6-a7fe-34dec6f7be3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.540 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69e0bc1b-61 in ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.545 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69e0bc1b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.545 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81cf35a8-b3d6-4a5a-9757-010c8649424b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc81ffb4-5512-483e-9dc3-6e134ecbfd01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 systemd-udevd[327775]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.567 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f14f36-ebb0-4482-9a25-901c288b00ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 ovn_controller[151684]: 2025-10-07T14:17:31Z|00595|binding|INFO|Setting lport 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 ovn-installed in OVS
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.5829] device (tap2fe29bbe-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:31 compute-0 ovn_controller[151684]: 2025-10-07T14:17:31Z|00596|binding|INFO|Setting lport 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 up in Southbound
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.5839] device (tap2fe29bbe-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.591 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 podman[327737]: 2025-10-07 14:17:31.602569387 +0000 UTC m=+0.103695360 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.601 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[50d732fd-9431-41cb-911c-5b4e96427780]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 podman[327733]: 2025-10-07 14:17:31.612352435 +0000 UTC m=+0.114552077 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.644 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba8a29f-b12f-4983-86c5-03320d9e1c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.6524] manager: (tap69e0bc1b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa3db00-41f9-4f1e-a491-8d8948ffc645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[52a9edae-999e-42e1-8d83-48ef47048e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.698 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b20d280c-ee30-4359-b87e-002a59e0a3ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.7238] device (tap69e0bc1b-60): carrier: link connected
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.732 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[68a95dbd-7386-4b00-8980-f88dee89032d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dacade73-9b4e-49ef-9209-2a12da65a0ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69e0bc1b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:cb:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723528, 'reachable_time': 36591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327814, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.767 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[447023c3-10a5-49e4-af85-cb5b9065f535]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:cb31'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723528, 'tstamp': 723528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327833, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.779 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.780 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.785 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.785 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.786 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06523322-d0fb-4d57-aa3f-af92fd5b6290]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69e0bc1b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:cb:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723528, 'reachable_time': 36591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327834, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571910804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 209 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 5.6 MiB/s wr, 120 op/s
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.820 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.821 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[913209ab-f2da-4e70-9e76-6555285ab5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.853 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.857 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.902 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c42b5c2-63e9-46ec-88c2-1ff56be278ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.903 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69e0bc1b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.903 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.904 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69e0bc1b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 kernel: tap69e0bc1b-60: entered promiscuous mode
Oct 07 14:17:31 compute-0 NetworkManager[44949]: <info>  [1759846651.9068] manager: (tap69e0bc1b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69e0bc1b-60, col_values=(('external_ids', {'iface-id': 'fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 ovn_controller[151684]: 2025-10-07T14:17:31Z|00597|binding|INFO|Releasing lport fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc from this chassis (sb_readonly=0)
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.912 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:17:31 compute-0 nova_compute[259550]: 2025-10-07 14:17:31.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.942 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69e0bc1b-64b0-44c5-8a8e-258e554b00ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69e0bc1b-64b0-44c5-8a8e-258e554b00ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.945 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1c9980-a831-4243-8b58-6c5a7187c3ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.946 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-69e0bc1b-64b0-44c5-8a8e-258e554b00ab
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/69e0bc1b-64b0-44c5-8a8e-258e554b00ab.pid.haproxy
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 69e0bc1b-64b0-44c5-8a8e-258e554b00ab
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:17:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:31.947 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'env', 'PROCESS_TAG=haproxy-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69e0bc1b-64b0-44c5-8a8e-258e554b00ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/921568590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3571910804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:32 compute-0 ceph-mon[74295]: pgmap v1588: 305 pgs: 305 active+clean; 209 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 5.6 MiB/s wr, 120 op/s
Oct 07 14:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472808326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.110 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.117 2 DEBUG nova.compute.provider_tree [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.154 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.155 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3887MB free_disk=59.92753601074219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.155 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.201 2 DEBUG nova.scheduler.client.report [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.314 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.314 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.317 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:32 compute-0 podman[327951]: 2025-10-07 14:17:32.355863357 +0000 UTC m=+0.053764312 container create 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001444272156020056 of space, bias 1.0, pg target 0.4332816468060168 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:17:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3526179053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:32 compute-0 systemd[1]: Started libpod-conmon-2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14.scope.
Oct 07 14:17:32 compute-0 podman[327951]: 2025-10-07 14:17:32.324478177 +0000 UTC m=+0.022379152 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.441 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.442 2 DEBUG nova.virt.libvirt.vif [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-535142755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-535142755',id=65,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c7fa9a5146949b5a1222248ae125eff',ramdisk_id='',reservation_id='r-kmfl7o90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106709279',owner_user_name='tempest-InstanceActionsV221TestJSON-106709279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:26Z,user_data=None,user_id='4526fd07b25c4e6c9fd61f99f0451dba',uuid=96c85d5b-5885-4549-8dfd-75a4611179b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.442 2 DEBUG nova.network.os_vif_util [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converting VIF {"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.443 2 DEBUG nova.network.os_vif_util [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.444 2 DEBUG nova.objects.instance [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lazy-loading 'pci_devices' on Instance uuid 96c85d5b-5885-4549-8dfd-75a4611179b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac8babd68cbecc61f117f5dbdb724a7fe81656e3e6bb12c40820e4b694fa319/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:32 compute-0 podman[327951]: 2025-10-07 14:17:32.464516496 +0000 UTC m=+0.162417471 container init 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:17:32 compute-0 podman[327951]: 2025-10-07 14:17:32.469629352 +0000 UTC m=+0.167530307 container start 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.480 2 DEBUG nova.network.neutron [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Updated VIF entry in instance network info cache for port 682f4b22-7214-4f0d-894b-84354797f177. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.481 2 DEBUG nova.network.neutron [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Updating instance_info_cache with network_info: [{"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:32 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [NOTICE]   (327972) : New worker (327974) forked
Oct 07 14:17:32 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [NOTICE]   (327972) : Loading success.
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.568 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.568 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.576 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <uuid>96c85d5b-5885-4549-8dfd-75a4611179b5</uuid>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <name>instance-00000041</name>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-535142755</nova:name>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:31</nova:creationTime>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:user uuid="4526fd07b25c4e6c9fd61f99f0451dba">tempest-InstanceActionsV221TestJSON-106709279-project-member</nova:user>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:project uuid="9c7fa9a5146949b5a1222248ae125eff">tempest-InstanceActionsV221TestJSON-106709279</nova:project>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <nova:port uuid="682f4b22-7214-4f0d-894b-84354797f177">
Oct 07 14:17:32 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="serial">96c85d5b-5885-4549-8dfd-75a4611179b5</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="uuid">96c85d5b-5885-4549-8dfd-75a4611179b5</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/96c85d5b-5885-4549-8dfd-75a4611179b5_disk">
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config">
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:96:39:46"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <target dev="tap682f4b22-72"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/console.log" append="off"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:32 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:32 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:32 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:32 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:32 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.577 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Preparing to wait for external event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.577 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.577 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.577 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.578 2 DEBUG nova.virt.libvirt.vif [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-535142755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-535142755',id=65,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c7fa9a5146949b5a1222248ae125eff',ramdisk_id='',reservation_id='r-kmfl7o90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106709279',owner_user_name='tempest-InstanceActionsV221TestJSON-106709279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:26Z,user_data=None,user_id='4526fd07b25c4e6c9fd61f99f0451dba',uuid=96c85d5b-5885-4549-8dfd-75a4611179b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.578 2 DEBUG nova.network.os_vif_util [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converting VIF {"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.579 2 DEBUG nova.network.os_vif_util [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.579 2 DEBUG os_vif [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.584 2 DEBUG oslo_concurrency.lockutils [req-2578fa15-a325-4b96-953e-e633e5cf2e13 req-bd5f2b6e-9d1b-4bb8-97e5-e64d0a68eb70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96c85d5b-5885-4549-8dfd-75a4611179b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap682f4b22-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap682f4b22-72, col_values=(('external_ids', {'iface-id': '682f4b22-7214-4f0d-894b-84354797f177', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:39:46', 'vm-uuid': '96c85d5b-5885-4549-8dfd-75a4611179b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:32 compute-0 NetworkManager[44949]: <info>  [1759846652.5896] manager: (tap682f4b22-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.597 2 INFO os_vif [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72')
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.608 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 85cd6b5c-f0f7-49fa-a999-64818baf3648 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.608 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.608 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 96c85d5b-5885-4549-8dfd-75a4611179b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.609 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1d580bbb-a6fd-442c-8524-409ba5c344d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.609 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.609 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.629 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846652.6292365, dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.630 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] VM Started (Lifecycle Event)
Oct 07 14:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3048070289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3048070289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.678 2 INFO nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.682 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.687 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846652.6301746, dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.687 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] VM Paused (Lifecycle Event)
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.690 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.691 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.691 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] No VIF found with MAC fa:16:3e:96:39:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.691 2 INFO nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Using config drive
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.715 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.724 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.725 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.731 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.738 2 DEBUG nova.policy [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51afbbb19e4a4e2184c89302ccf45428', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8379283f8a594c2ab94773d2b49cbb30', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.749 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.789 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.985 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.987 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:17:32 compute-0 nova_compute[259550]: 2025-10-07 14:17:32.988 2 INFO nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Creating image(s)
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.014 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3472808326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3526179053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3048070289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:17:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3048070289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.049 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.074 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.080 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.148 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.150 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.151 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.151 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876176602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.178 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.181 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.215 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.220 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.268 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.459 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.524 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] resizing rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.634 2 DEBUG nova.objects.instance [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'migration_context' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.678 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.679 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 235 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 6.6 MiB/s wr, 144 op/s
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.876 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.877 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Ensure instance console log exists: /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.877 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.878 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:33 compute-0 nova_compute[259550]: 2025-10-07 14:17:33.878 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/876176602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:34 compute-0 ceph-mon[74295]: pgmap v1589: 305 pgs: 305 active+clean; 235 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 6.6 MiB/s wr, 144 op/s
Oct 07 14:17:34 compute-0 kernel: tapb2d2caee-17 (unregistering): left promiscuous mode
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.1670] device (tapb2d2caee-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00598|binding|INFO|Releasing lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b from this chassis (sb_readonly=0)
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00599|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b down in Southbound
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00600|binding|INFO|Removing iface tapb2d2caee-17 ovn-installed in OVS
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:34 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 07 14:17:34 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Consumed 12.895s CPU time.
Oct 07 14:17:34 compute-0 systemd-machined[214580]: Machine qemu-73-instance-0000003f terminated.
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.262 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.265 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.266 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.267 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f67fb3f-8104-4be6-ba19-3a50dfeacb8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.295 2 INFO nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Creating config drive at /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.301 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm5oavi8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.450 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm5oavi8n" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.478 2 DEBUG nova.storage.rbd_utils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] rbd image 96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.482 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config 96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.645 2 DEBUG oslo_concurrency.processutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config 96c85d5b-5885-4549-8dfd-75a4611179b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.647 2 INFO nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Deleting local config drive /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5/disk.config because it was imported into RBD.
Oct 07 14:17:34 compute-0 kernel: tap682f4b22-72: entered promiscuous mode
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.7134] manager: (tap682f4b22-72): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00601|binding|INFO|Claiming lport 682f4b22-7214-4f0d-894b-84354797f177 for this chassis.
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00602|binding|INFO|682f4b22-7214-4f0d-894b-84354797f177: Claiming fa:16:3e:96:39:46 10.100.0.10
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.729 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:39:46 10.100.0.10'], port_security=['fa:16:3e:96:39:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96c85d5b-5885-4549-8dfd-75a4611179b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93abc0f9-2033-4fe7-a220-f8724a484807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c7fa9a5146949b5a1222248ae125eff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4caa2c2d-49d5-436d-8044-b2b063af6328', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bd9a45f-b260-4d6a-b186-b2ecaf59b3ef, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=682f4b22-7214-4f0d-894b-84354797f177) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.730 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Successfully created port: 5fb8904b-227a-4dac-8c3a-82a23ba9832c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.731 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 682f4b22-7214-4f0d-894b-84354797f177 in datapath 93abc0f9-2033-4fe7-a220-f8724a484807 bound to our chassis
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.732 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93abc0f9-2033-4fe7-a220-f8724a484807
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.745 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c165a6a4-b7ce-4e8d-a8dd-918772421840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.747 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93abc0f9-21 in ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.749 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93abc0f9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31b87a6e-c207-4a5c-9b53-2c5210795e82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2c966606-840a-42b5-8a16-0c1df5e3e3d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 systemd-machined[214580]: New machine qemu-75-instance-00000041.
Oct 07 14:17:34 compute-0 systemd-udevd[328266]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.761 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9255ba93-a7e6-4096-9621-585c81093328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.7673] device (tap682f4b22-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:34 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000041.
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.7695] device (tap682f4b22-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.786 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd71ea5-0f53-48e6-baba-625397a27c89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.812 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ae0a06-9224-4e8e-af1d-85e3ddaa4613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 systemd-udevd[328270]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.817 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d23cf9b4-0e25-4674-b403-7d91708c7943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.8186] manager: (tap93abc0f9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00603|binding|INFO|Setting lport 682f4b22-7214-4f0d-894b-84354797f177 ovn-installed in OVS
Oct 07 14:17:34 compute-0 ovn_controller[151684]: 2025-10-07T14:17:34Z|00604|binding|INFO|Setting lport 682f4b22-7214-4f0d-894b-84354797f177 up in Southbound
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:34 compute-0 sudo[328271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.851 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca198d08-64a6-4039-9d87-6ed5fba97cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.855 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d755a377-b07c-4da2-9f9b-6a33cb770d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 sudo[328271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:34 compute-0 sudo[328271]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:34 compute-0 NetworkManager[44949]: <info>  [1759846654.8775] device (tap93abc0f9-20): carrier: link connected
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.884 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d78e2f6e-72ea-4668-94ea-a7142dbcc6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[55633b88-7acf-4597-ba4d-07aa8385dc4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93abc0f9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:76:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723844, 'reachable_time': 41733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328340, 'error': None, 'target': 'ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 sudo[328325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6e287181-5112-4a4b-b53c-19aed15f835f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:7639'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723844, 'tstamp': 723844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328351, 'error': None, 'target': 'ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 sudo[328325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:34 compute-0 sudo[328325]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.929 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance shutdown successfully after 13 seconds.
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.937 2 INFO nova.virt.libvirt.driver [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance destroyed successfully.
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.937 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'numa_topology' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c028f8c5-d8ec-4fa9-9798-a896dfd3c1d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93abc0f9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:76:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723844, 'reachable_time': 41733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328354, 'error': None, 'target': 'ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.957 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Attempting rescue
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.958 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.963 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.963 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Creating image(s)
Oct 07 14:17:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:34.978 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2b73ca-1f30-4701-9d24-24633f9c5963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:34 compute-0 sudo[328355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:34 compute-0 sudo[328355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.988 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:34 compute-0 sudo[328355]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:34 compute-0 nova_compute[259550]: 2025-10-07 14:17:34.992 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.040 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.043 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6cf74b-4ded-4634-952d-e32db4e83e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.044 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93abc0f9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.044 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.045 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93abc0f9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:35 compute-0 sudo[328402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:17:35 compute-0 NetworkManager[44949]: <info>  [1759846655.0476] manager: (tap93abc0f9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 07 14:17:35 compute-0 sudo[328402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:35 compute-0 kernel: tap93abc0f9-20: entered promiscuous mode
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.054 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93abc0f9-20, col_values=(('external_ids', {'iface-id': '844f37a0-6f3c-401b-900a-f5fe4a4e1d6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:35 compute-0 ovn_controller[151684]: 2025-10-07T14:17:35Z|00605|binding|INFO|Releasing lport 844f37a0-6f3c-401b-900a-f5fe4a4e1d6c from this chassis (sb_readonly=0)
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.068 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.071 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.079 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93abc0f9-2033-4fe7-a220-f8724a484807.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93abc0f9-2033-4fe7-a220-f8724a484807.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0889242-c086-4f85-900a-d6f853a07843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.081 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-93abc0f9-2033-4fe7-a220-f8724a484807
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/93abc0f9-2033-4fe7-a220-f8724a484807.pid.haproxy
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 93abc0f9-2033-4fe7-a220-f8724a484807
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:17:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:35.082 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807', 'env', 'PROCESS_TAG=haproxy-93abc0f9-2033-4fe7-a220-f8724a484807', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93abc0f9-2033-4fe7-a220-f8724a484807.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.153 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.154 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.154 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.159 2 DEBUG oslo_concurrency.lockutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.188 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.196 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.480 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.482 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'migration_context' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:35 compute-0 podman[328592]: 2025-10-07 14:17:35.489741651 +0000 UTC m=+0.055199449 container create 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.500 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.501 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start _get_guest_xml network_info=[{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:32:3d:56"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.501 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'resources' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.523 2 WARNING nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:35 compute-0 systemd[1]: Started libpod-conmon-0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575.scope.
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.533 2 DEBUG nova.virt.libvirt.host [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.534 2 DEBUG nova.virt.libvirt.host [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.538 2 DEBUG nova.virt.libvirt.host [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.539 2 DEBUG nova.virt.libvirt.host [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.539 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.540 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.540 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.540 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.540 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.541 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.541 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.541 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.541 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.541 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.542 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.542 2 DEBUG nova.virt.hardware [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.542 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22305393e8b2e20d9422e866430cf7f792d82655db5c1a9b488c823c170dabe1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:35 compute-0 podman[328592]: 2025-10-07 14:17:35.460273143 +0000 UTC m=+0.025730961 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.559 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:35 compute-0 sudo[328402]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:35 compute-0 podman[328592]: 2025-10-07 14:17:35.575111747 +0000 UTC m=+0.140569565 container init 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:17:35 compute-0 podman[328592]: 2025-10-07 14:17:35.58055295 +0000 UTC m=+0.146010748 container start 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:17:35 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [NOTICE]   (328626) : New worker (328628) forked
Oct 07 14:17:35 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [NOTICE]   (328626) : Loading success.
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 13dbbd50-6164-4f51-97c3-3adeabb3f6d5 does not exist
Oct 07 14:17:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 72ac8bef-3b1f-496d-b8c2-5bb1f03d3a5a does not exist
Oct 07 14:17:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 86051a5a-9797-4802-881e-3bf0f29c8e7a does not exist
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:17:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.679 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.680 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.680 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:17:35 compute-0 sudo[328637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:35 compute-0 sudo[328637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:35 compute-0 sudo[328637]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:35 compute-0 sudo[328681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:17:35 compute-0 sudo[328681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:35 compute-0 sudo[328681]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 274 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 8.0 MiB/s wr, 161 op/s
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.809 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846655.808268, 96c85d5b-5885-4549-8dfd-75a4611179b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.809 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] VM Started (Lifecycle Event)
Oct 07 14:17:35 compute-0 sudo[328706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:35 compute-0 sudo[328706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:35 compute-0 sudo[328706]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.839 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.844 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846655.8097932, 96c85d5b-5885-4549-8dfd-75a4611179b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.844 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] VM Paused (Lifecycle Event)
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:35 compute-0 sudo[328731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.875 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:35 compute-0 sudo[328731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.896 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359769032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.978 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Successfully updated port: 5fb8904b-227a-4dac-8c3a-82a23ba9832c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.994 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:35 compute-0 nova_compute[259550]: 2025-10-07 14:17:35.995 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.039 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.039 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.039 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.236501298 +0000 UTC m=+0.042409081 container create 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.251 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:17:36 compute-0 systemd[1]: Started libpod-conmon-0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78.scope.
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.21763114 +0000 UTC m=+0.023538933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.340357382 +0000 UTC m=+0.146265175 container init 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.344 2 DEBUG nova.compute.manager [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-changed-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.345 2 DEBUG nova.compute.manager [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Refreshing instance network info cache due to event network-changed-5fb8904b-227a-4dac-8c3a-82a23ba9832c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.346 2 DEBUG oslo_concurrency.lockutils [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.3497896 +0000 UTC m=+0.155697383 container start 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.354152377 +0000 UTC m=+0.160060150 container attach 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:17:36 compute-0 kind_carson[328833]: 167 167
Oct 07 14:17:36 compute-0 systemd[1]: libpod-0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78.scope: Deactivated successfully.
Oct 07 14:17:36 compute-0 conmon[328833]: conmon 0823021b116c35118486 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78.scope/container/memory.events
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.359109007 +0000 UTC m=+0.165016770 container died 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fa4ba14ab47e44f886894aa8ac50ce9c14ca67c8239795aea731c3dc67b23f-merged.mount: Deactivated successfully.
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:36 compute-0 podman[328817]: 2025-10-07 14:17:36.405585825 +0000 UTC m=+0.211493608 container remove 0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:17:36 compute-0 systemd[1]: libpod-conmon-0823021b116c35118486f69b14e403de13089f038f2fd9af2aabc4c876ff5c78.scope: Deactivated successfully.
Oct 07 14:17:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996110129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.464 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.465 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:36 compute-0 podman[328860]: 2025-10-07 14:17:36.586626308 +0000 UTC m=+0.042396952 container create 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:17:36 compute-0 systemd[1]: Started libpod-conmon-64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9.scope.
Oct 07 14:17:36 compute-0 podman[328860]: 2025-10-07 14:17:36.56742444 +0000 UTC m=+0.023195124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:36 compute-0 ceph-mon[74295]: pgmap v1590: 305 pgs: 305 active+clean; 274 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 8.0 MiB/s wr, 161 op/s
Oct 07 14:17:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3359769032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3996110129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:36 compute-0 podman[328860]: 2025-10-07 14:17:36.688654512 +0000 UTC m=+0.144425196 container init 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:17:36 compute-0 podman[328860]: 2025-10-07 14:17:36.698563745 +0000 UTC m=+0.154334399 container start 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:17:36 compute-0 podman[328860]: 2025-10-07 14:17:36.702067387 +0000 UTC m=+0.157838071 container attach 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:17:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433057108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.945 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.949 2 DEBUG nova.virt.libvirt.vif [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2023954398',display_name='tempest-ServerRescueTestJSON-server-2023954398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2023954398',id=63,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-lmffjsq5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:16Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=85cd6b5c-f0f7-49fa-a999-64818baf3648,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:32:3d:56"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.950 2 DEBUG nova.network.os_vif_util [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:32:3d:56"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.952 2 DEBUG nova.network.os_vif_util [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.954 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:36 compute-0 nova_compute[259550]: 2025-10-07 14:17:36.992 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <uuid>85cd6b5c-f0f7-49fa-a999-64818baf3648</uuid>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <name>instance-0000003f</name>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSON-server-2023954398</nova:name>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:35</nova:creationTime>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:user uuid="7166dede3fa7455fb8ac0840d97d0be8">tempest-ServerRescueTestJSON-1604630325-project-member</nova:user>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:project uuid="d8dca3ec607447dd8f2e6dd1c0714628">tempest-ServerRescueTestJSON-1604630325</nova:project>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <nova:port uuid="b2d2caee-177d-4ad4-98ba-7dd1d95e296b">
Oct 07 14:17:36 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="serial">85cd6b5c-f0f7-49fa-a999-64818baf3648</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="uuid">85cd6b5c-f0f7-49fa-a999-64818baf3648</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.rescue">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/85cd6b5c-f0f7-49fa-a999-64818baf3648_disk">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <target dev="vdb" bus="virtio"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config.rescue">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:32:3d:56"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <target dev="tapb2d2caee-17"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/console.log" append="off"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:36 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:36 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:36 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:36 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:36 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.006 2 INFO nova.virt.libvirt.driver [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance destroyed successfully.
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.232 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.233 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.233 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.234 2 DEBUG nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No VIF found with MAC fa:16:3e:32:3d:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.235 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Using config drive
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.258 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.304 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.343 2 DEBUG nova.objects.instance [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'keypairs' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:37 compute-0 nova_compute[259550]: 2025-10-07 14:17:37.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3433057108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:37 compute-0 boring_northcutt[328895]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:17:37 compute-0 boring_northcutt[328895]: --> relative data size: 1.0
Oct 07 14:17:37 compute-0 boring_northcutt[328895]: --> All data devices are unavailable
Oct 07 14:17:37 compute-0 systemd[1]: libpod-64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9.scope: Deactivated successfully.
Oct 07 14:17:37 compute-0 systemd[1]: libpod-64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9.scope: Consumed 1.036s CPU time.
Oct 07 14:17:37 compute-0 podman[328860]: 2025-10-07 14:17:37.79089161 +0000 UTC m=+1.246662264 container died 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:17:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 274 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 8.0 MiB/s wr, 161 op/s
Oct 07 14:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce4ff2436e8cb0cb53a22047736f908625775d46d9d2a2d37a2d95440b6e00b8-merged.mount: Deactivated successfully.
Oct 07 14:17:37 compute-0 podman[328860]: 2025-10-07 14:17:37.855018234 +0000 UTC m=+1.310788908 container remove 64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:17:37 compute-0 systemd[1]: libpod-conmon-64f9772fdf40a7abdbc258a368b254f9cd1d0930a7f375c551d221dc0a32b3d9.scope: Deactivated successfully.
Oct 07 14:17:37 compute-0 sudo[328731]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:37 compute-0 sudo[328959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:37 compute-0 sudo[328959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:37 compute-0 sudo[328959]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:38 compute-0 sudo[328984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:17:38 compute-0 sudo[328984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:38 compute-0 sudo[328984]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:38 compute-0 sudo[329009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:38 compute-0 sudo[329009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:38 compute-0 sudo[329009]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:38 compute-0 sudo[329034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:17:38 compute-0 sudo[329034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.162 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Creating config drive at /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.169 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdz1zetsl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.261 2 DEBUG nova.network.neutron [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.318 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdz1zetsl" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.344 2 DEBUG nova.storage.rbd_utils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.348 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.420 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.421 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance network_info: |[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.422 2 DEBUG oslo_concurrency.lockutils [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.422 2 DEBUG nova.network.neutron [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Refreshing network info cache for port 5fb8904b-227a-4dac-8c3a-82a23ba9832c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.426 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start _get_guest_xml network_info=[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.433 2 WARNING nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.440 2 DEBUG nova.virt.libvirt.host [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.441 2 DEBUG nova.virt.libvirt.host [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.444 2 DEBUG nova.virt.libvirt.host [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.445 2 DEBUG nova.virt.libvirt.host [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.445 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.446 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.446 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.446 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.447 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.447 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.447 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.448 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.448 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.448 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.449 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.449 2 DEBUG nova.virt.hardware [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.453 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.483308801 +0000 UTC m=+0.049114238 container create f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:17:38 compute-0 systemd[1]: Started libpod-conmon-f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1.scope.
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.527 2 DEBUG oslo_concurrency.processutils [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue 85cd6b5c-f0f7-49fa-a999-64818baf3648_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.529 2 INFO nova.virt.libvirt.driver [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Deleting local config drive /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648/disk.config.rescue because it was imported into RBD.
Oct 07 14:17:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.464773072 +0000 UTC m=+0.030578549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.576444252 +0000 UTC m=+0.142249719 container init f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.583885518 +0000 UTC m=+0.149690975 container start f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:17:38 compute-0 NetworkManager[44949]: <info>  [1759846658.5854] manager: (tapb2d2caee-17): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 07 14:17:38 compute-0 kernel: tapb2d2caee-17: entered promiscuous mode
Oct 07 14:17:38 compute-0 ovn_controller[151684]: 2025-10-07T14:17:38Z|00606|binding|INFO|Claiming lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b for this chassis.
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:38 compute-0 ovn_controller[151684]: 2025-10-07T14:17:38Z|00607|binding|INFO|b2d2caee-177d-4ad4-98ba-7dd1d95e296b: Claiming fa:16:3e:32:3d:56 10.100.0.5
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.58962033 +0000 UTC m=+0.155425777 container attach f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:17:38 compute-0 cool_easley[329155]: 167 167
Oct 07 14:17:38 compute-0 systemd[1]: libpod-f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1.scope: Deactivated successfully.
Oct 07 14:17:38 compute-0 conmon[329155]: conmon f438f70b0c16ca61a7e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1.scope/container/memory.events
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.600165289 +0000 UTC m=+0.165970726 container died f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:17:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:38.613 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:38.615 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:17:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:38.616 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:17:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:38.617 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed0e6fb-d81f-43b7-a930-141cff72c7c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:38 compute-0 ovn_controller[151684]: 2025-10-07T14:17:38Z|00608|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b ovn-installed in OVS
Oct 07 14:17:38 compute-0 ovn_controller[151684]: 2025-10-07T14:17:38Z|00609|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b up in Southbound
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ef9756da8cc36ecc0ddf9e0563547d8556b8559409227f6496364a01b264b8-merged.mount: Deactivated successfully.
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:38 compute-0 systemd-machined[214580]: New machine qemu-76-instance-0000003f.
Oct 07 14:17:38 compute-0 podman[329128]: 2025-10-07 14:17:38.644971123 +0000 UTC m=+0.210776570 container remove f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:17:38 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-0000003f.
Oct 07 14:17:38 compute-0 systemd-udevd[329205]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:38 compute-0 systemd[1]: libpod-conmon-f438f70b0c16ca61a7e9f619265b50d1388caf963a0e55854808ed64e9b172d1.scope: Deactivated successfully.
Oct 07 14:17:38 compute-0 NetworkManager[44949]: <info>  [1759846658.6900] device (tapb2d2caee-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:38 compute-0 NetworkManager[44949]: <info>  [1759846658.6929] device (tapb2d2caee-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:38 compute-0 ceph-mon[74295]: pgmap v1591: 305 pgs: 305 active+clean; 274 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 8.0 MiB/s wr, 161 op/s
Oct 07 14:17:38 compute-0 podman[329219]: 2025-10-07 14:17:38.847052791 +0000 UTC m=+0.045192575 container create 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:17:38 compute-0 systemd[1]: Started libpod-conmon-6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf.scope.
Oct 07 14:17:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a992e3428c82495d9b66d67835b9536d9da13968d444b11b50d3dd76f1d10f8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:38 compute-0 podman[329219]: 2025-10-07 14:17:38.828816799 +0000 UTC m=+0.026956623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a992e3428c82495d9b66d67835b9536d9da13968d444b11b50d3dd76f1d10f8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a992e3428c82495d9b66d67835b9536d9da13968d444b11b50d3dd76f1d10f8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a992e3428c82495d9b66d67835b9536d9da13968d444b11b50d3dd76f1d10f8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410749233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:38 compute-0 podman[329219]: 2025-10-07 14:17:38.94733723 +0000 UTC m=+0.145477044 container init 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:17:38 compute-0 podman[329219]: 2025-10-07 14:17:38.957612061 +0000 UTC m=+0.155751845 container start 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:17:38 compute-0 podman[329219]: 2025-10-07 14:17:38.96174559 +0000 UTC m=+0.159885394 container attach 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:38 compute-0 nova_compute[259550]: 2025-10-07 14:17:38.995 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.029 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.036 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020577500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.506 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.510 2 DEBUG nova.virt.libvirt.vif [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.510 2 DEBUG nova.network.os_vif_util [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.512 2 DEBUG nova.network.os_vif_util [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.514 2 DEBUG nova.objects.instance [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.581 2 DEBUG nova.compute.manager [None req-87a2c2c5-1cd8-4623-ad15-77393bc5fb0b 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.582 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 85cd6b5c-f0f7-49fa-a999-64818baf3648 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.582 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846659.5790403, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Resumed (Lifecycle Event)
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.640 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <uuid>1d580bbb-a6fd-442c-8524-409ba5c344d0</uuid>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <name>instance-00000042</name>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-1718645090</nova:name>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:38</nova:creationTime>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <nova:port uuid="5fb8904b-227a-4dac-8c3a-82a23ba9832c">
Oct 07 14:17:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="serial">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="uuid">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk">
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config">
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:af:c7:50"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <target dev="tap5fb8904b-22"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log" append="off"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.647 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Preparing to wait for external event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.648 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.648 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.648 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.649 2 DEBUG nova.virt.libvirt.vif [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.649 2 DEBUG nova.network.os_vif_util [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.650 2 DEBUG nova.network.os_vif_util [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.651 2 DEBUG os_vif [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:39 compute-0 NetworkManager[44949]: <info>  [1759846659.6626] manager: (tap5fb8904b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.671 2 INFO os_vif [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.673 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.685 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2410749233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1020577500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.725 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.726 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846659.5795593, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.726 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Started (Lifecycle Event)
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.761 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.765 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.775 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.775 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.775 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No VIF found with MAC fa:16:3e:af:c7:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.776 2 INFO nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Using config drive
Oct 07 14:17:39 compute-0 nova_compute[259550]: 2025-10-07 14:17:39.800 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 298 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 8.7 MiB/s wr, 185 op/s
Oct 07 14:17:39 compute-0 agitated_haslett[329235]: {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     "0": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "devices": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "/dev/loop3"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             ],
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_name": "ceph_lv0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_size": "21470642176",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "name": "ceph_lv0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "tags": {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_name": "ceph",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.crush_device_class": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.encrypted": "0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_id": "0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.vdo": "0"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             },
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "vg_name": "ceph_vg0"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         }
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     ],
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     "1": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "devices": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "/dev/loop4"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             ],
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_name": "ceph_lv1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_size": "21470642176",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "name": "ceph_lv1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "tags": {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_name": "ceph",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.crush_device_class": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.encrypted": "0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_id": "1",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.vdo": "0"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             },
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "vg_name": "ceph_vg1"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         }
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     ],
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     "2": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "devices": [
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "/dev/loop5"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             ],
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_name": "ceph_lv2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_size": "21470642176",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "name": "ceph_lv2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "tags": {
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.cluster_name": "ceph",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.crush_device_class": "",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.encrypted": "0",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osd_id": "2",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:                 "ceph.vdo": "0"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             },
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "type": "block",
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:             "vg_name": "ceph_vg2"
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:         }
Oct 07 14:17:39 compute-0 agitated_haslett[329235]:     ]
Oct 07 14:17:39 compute-0 agitated_haslett[329235]: }
Oct 07 14:17:39 compute-0 systemd[1]: libpod-6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf.scope: Deactivated successfully.
Oct 07 14:17:39 compute-0 podman[329219]: 2025-10-07 14:17:39.851212517 +0000 UTC m=+1.049352311 container died 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a992e3428c82495d9b66d67835b9536d9da13968d444b11b50d3dd76f1d10f8d-merged.mount: Deactivated successfully.
Oct 07 14:17:39 compute-0 podman[329219]: 2025-10-07 14:17:39.9205777 +0000 UTC m=+1.118717484 container remove 6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haslett, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:17:39 compute-0 systemd[1]: libpod-conmon-6fadc923eeaa0d20d5a5f75b2577ad3c3d34f7c86f472f6146776dc5591862bf.scope: Deactivated successfully.
Oct 07 14:17:39 compute-0 sudo[329034]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:40 compute-0 sudo[329376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:40 compute-0 sudo[329376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.031 2 DEBUG nova.network.neutron [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updated VIF entry in instance network info cache for port 5fb8904b-227a-4dac-8c3a-82a23ba9832c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.031 2 DEBUG nova.network.neutron [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:40 compute-0 sudo[329376]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:40 compute-0 sudo[329401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:17:40 compute-0 sudo[329401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:40 compute-0 sudo[329401]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.153 2 INFO nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Creating config drive at /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.161 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyiokxpwl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:40 compute-0 sudo[329426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:40 compute-0 sudo[329426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:40 compute-0 sudo[329426]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.197 2 DEBUG oslo_concurrency.lockutils [req-91609673-9f39-4a08-8cba-b3c5a88c6a56 req-b04db5a8-c179-4be0-a89d-966758228e64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:40 compute-0 sudo[329452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:17:40 compute-0 sudo[329452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.304 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyiokxpwl" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.330 2 DEBUG nova.storage.rbd_utils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.335 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.497 2 DEBUG oslo_concurrency.processutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config 1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.498 2 INFO nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Deleting local config drive /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/disk.config because it was imported into RBD.
Oct 07 14:17:40 compute-0 NetworkManager[44949]: <info>  [1759846660.5685] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Oct 07 14:17:40 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:17:40 compute-0 systemd-udevd[329207]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:40 compute-0 ovn_controller[151684]: 2025-10-07T14:17:40Z|00610|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:17:40 compute-0 ovn_controller[151684]: 2025-10-07T14:17:40Z|00611|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:17:40 compute-0 NetworkManager[44949]: <info>  [1759846660.5878] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:40 compute-0 NetworkManager[44949]: <info>  [1759846660.5887] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:40 compute-0 systemd-machined[214580]: New machine qemu-77-instance-00000042.
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.634340484 +0000 UTC m=+0.056500563 container create a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:17:40 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000042.
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.665 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.666 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:17:40 compute-0 ovn_controller[151684]: 2025-10-07T14:17:40Z|00612|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.669 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:17:40 compute-0 ovn_controller[151684]: 2025-10-07T14:17:40Z|00613|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:17:40 compute-0 nova_compute[259550]: 2025-10-07 14:17:40.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d66e455-30d4-492f-ae5f-2542c342a75d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.686 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.688 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1353efc-6232-4fd6-a53a-0fd73efcbbf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[120b3ad5-0ce1-4145-807b-0b8d4f0d0b06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 systemd[1]: Started libpod-conmon-a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7.scope.
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.605730189 +0000 UTC m=+0.027890088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.704 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[940e8ab4-5c17-482b-9ae0-851bf5c17168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:40 compute-0 ceph-mon[74295]: pgmap v1592: 305 pgs: 305 active+clean; 298 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 8.7 MiB/s wr, 185 op/s
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d848d91e-e72b-46bf-a2cf-f9be88a88794]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.737251454 +0000 UTC m=+0.159411353 container init a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.746980681 +0000 UTC m=+0.169140560 container start a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.75224552 +0000 UTC m=+0.174405399 container attach a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:17:40 compute-0 systemd[1]: libpod-a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7.scope: Deactivated successfully.
Oct 07 14:17:40 compute-0 recursing_faraday[329586]: 167 167
Oct 07 14:17:40 compute-0 conmon[329586]: conmon a10cb347cf43a4b73759 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7.scope/container/memory.events
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.756630646 +0000 UTC m=+0.178790525 container died a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-96635ef6619b02ddf0fc880fa1bb2a2c2aa78fd48447458549678c245e4d68df-merged.mount: Deactivated successfully.
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.789 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab223d22-71cb-4cfe-833e-79f3f158148a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 podman[329563]: 2025-10-07 14:17:40.799077527 +0000 UTC m=+0.221237406 container remove a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.800 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4259d01c-69c7-48bc-9ef1-4d3665d29c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 NetworkManager[44949]: <info>  [1759846660.8015] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Oct 07 14:17:40 compute-0 systemd[1]: libpod-conmon-a10cb347cf43a4b737597b6c24f9aec8cba741798c32ca966e0289b31cf99ca7.scope: Deactivated successfully.
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.844 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9afc09f-c8bd-41c5-869e-229908cb6ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.848 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3389eb-cfea-45e3-bbe1-3e99c448564d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 NetworkManager[44949]: <info>  [1759846660.8773] device (tapc21c541a-00): carrier: link connected
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.887 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83d87a72-08e7-40ad-9189-b806628f3c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84097735-ef1f-408e-9472-ce298c673551]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724444, 'reachable_time': 23584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329632, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.929 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cad2e01-72bd-47f4-a30e-6a6d23e42e55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724444, 'tstamp': 724444}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329646, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.945 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0da40138-8a6b-4c72-b4da-bf763276dc7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724444, 'reachable_time': 23584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329652, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:40.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91142a68-f190-421e-a998-3d4cc04630e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:41 compute-0 podman[329666]: 2025-10-07 14:17:41.012431323 +0000 UTC m=+0.043619844 container create 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:17:41 compute-0 systemd[1]: Started libpod-conmon-22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7.scope.
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.064 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6430b61b-a38f-43ce-ab48-7a7d99b80ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.065 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.065 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.066 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:41 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:41 compute-0 NetworkManager[44949]: <info>  [1759846661.0686] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.071 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:41 compute-0 ovn_controller[151684]: 2025-10-07T14:17:41Z|00614|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8969b4b2f951238d2145017a27637cce61ecd3d8b7029e73ea9961c78a87bc7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.088 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.089 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a66282-de43-4ba2-8fcd-f03138221eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:41 compute-0 podman[329666]: 2025-10-07 14:17:40.995971418 +0000 UTC m=+0.027159959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.090 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:17:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:41.090 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8969b4b2f951238d2145017a27637cce61ecd3d8b7029e73ea9961c78a87bc7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8969b4b2f951238d2145017a27637cce61ecd3d8b7029e73ea9961c78a87bc7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8969b4b2f951238d2145017a27637cce61ecd3d8b7029e73ea9961c78a87bc7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:41 compute-0 podman[329666]: 2025-10-07 14:17:41.111937942 +0000 UTC m=+0.143126513 container init 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:17:41 compute-0 podman[329666]: 2025-10-07 14:17:41.119799969 +0000 UTC m=+0.150988480 container start 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:17:41 compute-0 podman[329666]: 2025-10-07 14:17:41.123937408 +0000 UTC m=+0.155125979 container attach 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:41 compute-0 podman[329734]: 2025-10-07 14:17:41.496412328 +0000 UTC m=+0.053867094 container create 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:17:41 compute-0 systemd[1]: Started libpod-conmon-4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75.scope.
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.559 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846661.5587032, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.560 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:17:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:17:41 compute-0 podman[329734]: 2025-10-07 14:17:41.469787414 +0000 UTC m=+0.027242200 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e126eb1439b5e971ce0fae5a29f82a5cc1d5044424533229e02ef0bc79948baf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:17:41 compute-0 podman[329734]: 2025-10-07 14:17:41.586349093 +0000 UTC m=+0.143803859 container init 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:17:41 compute-0 podman[329734]: 2025-10-07 14:17:41.595863255 +0000 UTC m=+0.153318031 container start 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.606 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.611 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846661.5589523, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.612 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Paused (Lifecycle Event)
Oct 07 14:17:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [NOTICE]   (329753) : New worker (329755) forked
Oct 07 14:17:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [NOTICE]   (329753) : Loading success.
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.639 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.644 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.660 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 07 14:17:41 compute-0 nova_compute[259550]: 2025-10-07 14:17:41.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:42 compute-0 competent_mestorf[329704]: {
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_id": 2,
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "type": "bluestore"
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     },
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_id": 1,
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "type": "bluestore"
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     },
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_id": 0,
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:         "type": "bluestore"
Oct 07 14:17:42 compute-0 competent_mestorf[329704]:     }
Oct 07 14:17:42 compute-0 competent_mestorf[329704]: }
Oct 07 14:17:42 compute-0 systemd[1]: libpod-22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7.scope: Deactivated successfully.
Oct 07 14:17:42 compute-0 conmon[329704]: conmon 22a431f84c4242275596 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7.scope/container/memory.events
Oct 07 14:17:42 compute-0 podman[329666]: 2025-10-07 14:17:42.172392865 +0000 UTC m=+1.203581386 container died 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8969b4b2f951238d2145017a27637cce61ecd3d8b7029e73ea9961c78a87bc7c-merged.mount: Deactivated successfully.
Oct 07 14:17:42 compute-0 podman[329666]: 2025-10-07 14:17:42.297636664 +0000 UTC m=+1.328825185 container remove 22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:17:42 compute-0 systemd[1]: libpod-conmon-22a431f84c42422755961ee4832fa03a127e0fe7ba087ead82d4cef54a24fad7.scope: Deactivated successfully.
Oct 07 14:17:42 compute-0 sudo[329452]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:17:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:17:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c52c4fb1-c33f-49cb-a229-446e651f9dda does not exist
Oct 07 14:17:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a280d403-6a94-4894-9ed8-6a7ca064e695 does not exist
Oct 07 14:17:42 compute-0 sudo[329805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:17:42 compute-0 sudo[329805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:42 compute-0 sudo[329805]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:42 compute-0 sudo[329830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:17:42 compute-0 sudo[329830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:17:42 compute-0 sudo[329830]: pam_unix(sudo:session): session closed for user root
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.746 2 DEBUG nova.compute.manager [req-17ee7a8c-51f3-4b25-9858-1dc089f3864c req-4cbd58e7-9416-4a21-9da0-c4ca97ba145c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.750 2 DEBUG oslo_concurrency.lockutils [req-17ee7a8c-51f3-4b25-9858-1dc089f3864c req-4cbd58e7-9416-4a21-9da0-c4ca97ba145c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.750 2 DEBUG oslo_concurrency.lockutils [req-17ee7a8c-51f3-4b25-9858-1dc089f3864c req-4cbd58e7-9416-4a21-9da0-c4ca97ba145c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.750 2 DEBUG oslo_concurrency.lockutils [req-17ee7a8c-51f3-4b25-9858-1dc089f3864c req-4cbd58e7-9416-4a21-9da0-c4ca97ba145c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.751 2 DEBUG nova.compute.manager [req-17ee7a8c-51f3-4b25-9858-1dc089f3864c req-4cbd58e7-9416-4a21-9da0-c4ca97ba145c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Processing event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.752 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.758 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846662.75755, dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.759 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] VM Resumed (Lifecycle Event)
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.762 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.766 2 INFO nova.virt.libvirt.driver [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Instance spawned successfully.
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.767 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:17:42 compute-0 ceph-mon[74295]: pgmap v1593: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 07 14:17:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.915 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.920 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.921 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.922 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.922 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.922 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.923 2 DEBUG nova.virt.libvirt.driver [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.929 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:17:42 compute-0 nova_compute[259550]: 2025-10-07 14:17:42.989 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.024 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.035 2 INFO nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Took 17.84 seconds to spawn the instance on the hypervisor.
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.036 2 DEBUG nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.120 2 INFO nova.compute.manager [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Took 19.33 seconds to build instance.
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.179 2 DEBUG oslo_concurrency.lockutils [None req-75b40660-c039-46e1-8593-ae300fa161e3 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.379 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.380 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.380 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:17:43 compute-0 nova_compute[259550]: 2025-10-07 14:17:43.381 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 143 op/s
Oct 07 14:17:44 compute-0 podman[329855]: 2025-10-07 14:17:44.088826952 +0000 UTC m=+0.067483453 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:17:44 compute-0 podman[329856]: 2025-10-07 14:17:44.118812904 +0000 UTC m=+0.098146214 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.532 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.532 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.586 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:44 compute-0 ceph-mon[74295]: pgmap v1594: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 143 op/s
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.966 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.967 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.975 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:17:44 compute-0 nova_compute[259550]: 2025-10-07 14:17:44.976 2 INFO nova.compute.claims [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.111 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.112 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.112 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.112 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.113 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.113 2 WARNING nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received unexpected event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 for instance with vm_state active and task_state None.
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.113 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-unplugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.114 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.114 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.114 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.115 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] No waiting events found dispatching network-vif-unplugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.115 2 WARNING nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received unexpected event network-vif-unplugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b for instance with vm_state rescued and task_state None.
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.115 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.116 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.116 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.116 2 DEBUG oslo_concurrency.lockutils [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.117 2 DEBUG nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] No waiting events found dispatching network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.117 2 WARNING nova.compute.manager [req-b7f3e072-39b4-49a1-9b09-e58fc1336850 req-80753abd-174f-4d0a-8281-07505063d136 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received unexpected event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b for instance with vm_state rescued and task_state None.
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.467 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 172 op/s
Oct 07 14:17:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:17:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054566797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.931 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.938 2 DEBUG nova.compute.provider_tree [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:17:45 compute-0 nova_compute[259550]: 2025-10-07 14:17:45.987 2 DEBUG nova.scheduler.client.report [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:17:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.104 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.105 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.257 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.257 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.416 2 INFO nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.439 2 DEBUG nova.policy [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7166dede3fa7455fb8ac0840d97d0be8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.494 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.564 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "interface-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.565 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "interface-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.566 2 DEBUG nova.objects.instance [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lazy-loading 'flavor' on Instance uuid dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.713 2 DEBUG nova.objects.instance [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lazy-loading 'pci_requests' on Instance uuid dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.812 2 DEBUG nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.862 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.864 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.865 2 INFO nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Creating image(s)
Oct 07 14:17:46 compute-0 ceph-mon[74295]: pgmap v1595: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 172 op/s
Oct 07 14:17:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4054566797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.890 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.919 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.947 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:46 compute-0 nova_compute[259550]: 2025-10-07 14:17:46.950 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.021 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.022 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.022 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.023 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.043 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.046 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9c5c9653-6de6-4975-86c6-887803a35913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.253 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [{"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.299 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.300 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.301 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.301 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.301 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Processing event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.302 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.302 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.302 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.303 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.303 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] No waiting events found dispatching network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.303 2 WARNING nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received unexpected event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 for instance with vm_state building and task_state spawning.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.303 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.304 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.304 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.304 2 DEBUG oslo_concurrency.lockutils [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.305 2 DEBUG nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] No waiting events found dispatching network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.305 2 WARNING nova.compute.manager [req-7db7356c-608f-4fab-a88b-30d721bffd66 req-e7b81e08-7700-45ac-bfea-dcfaf077455e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received unexpected event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b for instance with vm_state rescued and task_state None.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.310 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Instance event wait completed in 11 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.326 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846667.3172746, 96c85d5b-5885-4549-8dfd-75a4611179b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.327 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] VM Resumed (Lifecycle Event)
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.336 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.341 2 INFO nova.virt.libvirt.driver [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Instance spawned successfully.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.342 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.459 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.461 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-85cd6b5c-f0f7-49fa-a999-64818baf3648" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.461 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.526 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.528 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.529 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.529 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.530 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.530 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.531 2 DEBUG nova.virt.libvirt.driver [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.540 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9c5c9653-6de6-4975-86c6-887803a35913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.615 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] resizing rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.650 2 INFO nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Took 20.90 seconds to spawn the instance on the hypervisor.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.651 2 DEBUG nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.727 2 DEBUG nova.objects.instance [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.764 2 INFO nova.compute.manager [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Took 22.29 seconds to build instance.
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.799 2 DEBUG nova.policy [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63af1dd3b8c54df9a2d8488d7cfc1590', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28bfd63ece134e70ac7fe3739775042b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:17:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.3 MiB/s wr, 155 op/s
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.811 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.811 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Ensure instance console log exists: /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.812 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.812 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:47 compute-0 nova_compute[259550]: 2025-10-07 14:17:47.813 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:48 compute-0 nova_compute[259550]: 2025-10-07 14:17:48.149 2 DEBUG oslo_concurrency.lockutils [None req-2fb37654-857d-4004-a40b-7e607c81a45e 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:48 compute-0 ceph-mon[74295]: pgmap v1596: 305 pgs: 305 active+clean; 306 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.3 MiB/s wr, 155 op/s
Oct 07 14:17:49 compute-0 nova_compute[259550]: 2025-10-07 14:17:49.117 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Successfully created port: 0d353c80-d149-4939-a0c5-da59491d3f99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:49 compute-0 nova_compute[259550]: 2025-10-07 14:17:49.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 335 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 07 14:17:50 compute-0 nova_compute[259550]: 2025-10-07 14:17:50.745 2 DEBUG nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Successfully created port: 922633f6-7416-4658-9b86-daf2a7ecfed9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:17:50 compute-0 ceph-mon[74295]: pgmap v1597: 305 pgs: 305 active+clean; 335 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 07 14:17:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.161 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.162 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.162 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.162 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.163 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] No waiting events found dispatching network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.163 2 WARNING nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received unexpected event network-vif-plugged-b2d2caee-177d-4ad4-98ba-7dd1d95e296b for instance with vm_state rescued and task_state None.
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.163 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.163 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.163 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.164 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.164 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Processing event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.164 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.164 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.165 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.165 2 DEBUG oslo_concurrency.lockutils [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.165 2 DEBUG nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.165 2 WARNING nova.compute.manager [req-580cd4be-83a3-41ca-aa6c-e2b3cb4d0c1b req-5bdd0e1f-ca77-494e-9222-4ba1b6aaf90b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state building and task_state spawning.
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.166 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.168 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846671.1684365, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.168 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.176 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.182 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance spawned successfully.
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.183 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.383 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.390 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.395 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.395 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.396 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.396 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.397 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.397 2 DEBUG nova.virt.libvirt.driver [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.459 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.550 2 INFO nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Took 18.56 seconds to spawn the instance on the hypervisor.
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.551 2 DEBUG nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.736 2 INFO nova.compute.manager [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Took 20.46 seconds to build instance.
Oct 07 14:17:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 353 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.4 MiB/s wr, 236 op/s
Oct 07 14:17:51 compute-0 nova_compute[259550]: 2025-10-07 14:17:51.866 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Successfully updated port: 0d353c80-d149-4939-a0c5-da59491d3f99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.038 2 DEBUG oslo_concurrency.lockutils [None req-cdfe7482-364d-46b7-963d-63e160c3130e 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.100 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.101 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquired lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.101 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.103 2 DEBUG nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Successfully updated port: 922633f6-7416-4658-9b86-daf2a7ecfed9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.117 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.117 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.118 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.118 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.118 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.119 2 INFO nova.compute.manager [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Terminating instance
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.120 2 DEBUG nova.compute.manager [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:17:52 compute-0 kernel: tap682f4b22-72 (unregistering): left promiscuous mode
Oct 07 14:17:52 compute-0 NetworkManager[44949]: <info>  [1759846672.2171] device (tap682f4b22-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:17:52 compute-0 ovn_controller[151684]: 2025-10-07T14:17:52Z|00615|binding|INFO|Releasing lport 682f4b22-7214-4f0d-894b-84354797f177 from this chassis (sb_readonly=0)
Oct 07 14:17:52 compute-0 ovn_controller[151684]: 2025-10-07T14:17:52Z|00616|binding|INFO|Setting lport 682f4b22-7214-4f0d-894b-84354797f177 down in Southbound
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 ovn_controller[151684]: 2025-10-07T14:17:52Z|00617|binding|INFO|Removing iface tap682f4b22-72 ovn-installed in OVS
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct 07 14:17:52 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000041.scope: Consumed 5.535s CPU time.
Oct 07 14:17:52 compute-0 systemd-machined[214580]: Machine qemu-75-instance-00000041 terminated.
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.358 2 INFO nova.virt.libvirt.driver [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Instance destroyed successfully.
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.358 2 DEBUG nova.objects.instance [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lazy-loading 'resources' on Instance uuid 96c85d5b-5885-4549-8dfd-75a4611179b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:52.427 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:39:46 10.100.0.10'], port_security=['fa:16:3e:96:39:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96c85d5b-5885-4549-8dfd-75a4611179b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93abc0f9-2033-4fe7-a220-f8724a484807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c7fa9a5146949b5a1222248ae125eff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4caa2c2d-49d5-436d-8044-b2b063af6328', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bd9a45f-b260-4d6a-b186-b2ecaf59b3ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=682f4b22-7214-4f0d-894b-84354797f177) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:52.429 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 682f4b22-7214-4f0d-894b-84354797f177 in datapath 93abc0f9-2033-4fe7-a220-f8724a484807 unbound from our chassis
Oct 07 14:17:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:52.430 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93abc0f9-2033-4fe7-a220-f8724a484807, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:17:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:52.432 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[595ba727-1190-4ec7-878f-c2e7e4e9a392]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:52.432 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807 namespace which is not needed anymore
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.482 2 DEBUG nova.virt.libvirt.vif [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-535142755',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-535142755',id=65,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c7fa9a5146949b5a1222248ae125eff',ramdisk_id='',reservation_id='r-kmfl7o90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_projec
t_name='tempest-InstanceActionsV221TestJSON-106709279',owner_user_name='tempest-InstanceActionsV221TestJSON-106709279-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:47Z,user_data=None,user_id='4526fd07b25c4e6c9fd61f99f0451dba',uuid=96c85d5b-5885-4549-8dfd-75a4611179b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.483 2 DEBUG nova.network.os_vif_util [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converting VIF {"id": "682f4b22-7214-4f0d-894b-84354797f177", "address": "fa:16:3e:96:39:46", "network": {"id": "93abc0f9-2033-4fe7-a220-f8724a484807", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1441083038-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c7fa9a5146949b5a1222248ae125eff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap682f4b22-72", "ovs_interfaceid": "682f4b22-7214-4f0d-894b-84354797f177", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.483 2 DEBUG nova.network.os_vif_util [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.485 2 DEBUG os_vif [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap682f4b22-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.495 2 INFO os_vif [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:39:46,bridge_name='br-int',has_traffic_filtering=True,id=682f4b22-7214-4f0d-894b-84354797f177,network=Network(93abc0f9-2033-4fe7-a220-f8724a484807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap682f4b22-72')
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.519 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.520 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquired lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.520 2 DEBUG nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [NOTICE]   (328626) : haproxy version is 2.8.14-c23fe91
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [NOTICE]   (328626) : path to executable is /usr/sbin/haproxy
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [WARNING]  (328626) : Exiting Master process...
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [WARNING]  (328626) : Exiting Master process...
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [ALERT]    (328626) : Current worker (328628) exited with code 143 (Terminated)
Oct 07 14:17:52 compute-0 neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807[328621]: [WARNING]  (328626) : All workers exited. Exiting... (0)
Oct 07 14:17:52 compute-0 systemd[1]: libpod-0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575.scope: Deactivated successfully.
Oct 07 14:17:52 compute-0 podman[330135]: 2025-10-07 14:17:52.626003207 +0000 UTC m=+0.079925792 container died 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.885 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:17:52 compute-0 nova_compute[259550]: 2025-10-07 14:17:52.894 2 WARNING nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] 69e0bc1b-64b0-44c5-8a8e-258e554b00ab already exists in list: networks containing: ['69e0bc1b-64b0-44c5-8a8e-258e554b00ab']. ignoring it
Oct 07 14:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575-userdata-shm.mount: Deactivated successfully.
Oct 07 14:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-22305393e8b2e20d9422e866430cf7f792d82655db5c1a9b488c823c170dabe1-merged.mount: Deactivated successfully.
Oct 07 14:17:53 compute-0 ceph-mon[74295]: pgmap v1598: 305 pgs: 305 active+clean; 353 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.4 MiB/s wr, 236 op/s
Oct 07 14:17:53 compute-0 podman[330135]: 2025-10-07 14:17:53.040362503 +0000 UTC m=+0.494285068 container cleanup 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:17:53 compute-0 systemd[1]: libpod-conmon-0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575.scope: Deactivated successfully.
Oct 07 14:17:53 compute-0 podman[330166]: 2025-10-07 14:17:53.634005275 +0000 UTC m=+0.567938474 container remove 0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe1ec3d-9125-4689-94f6-88477901b760]: (4, ('Tue Oct  7 02:17:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807 (0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575)\n0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575\nTue Oct  7 02:17:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807 (0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575)\n0abce769cb8f0b6f2763b279d5208aee1bc7951aac57bd4f1b1769545473a575\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c653f6-fa6e-4ed1-bf9e-90b17cda2421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93abc0f9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:53 compute-0 kernel: tap93abc0f9-20: left promiscuous mode
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84b1ff-a899-4486-9132-a5fe7ea3d6fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.722 2 DEBUG nova.compute.manager [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-changed-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.722 2 DEBUG nova.compute.manager [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Refreshing instance network info cache due to event network-changed-0d353c80-d149-4939-a0c5-da59491d3f99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:53 compute-0 nova_compute[259550]: 2025-10-07 14:17:53.722 2 DEBUG oslo_concurrency.lockutils [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcd06db-1a6d-464e-9259-9e0c7318e57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.738 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3815873-4126-4ea6-af92-b7b4d326ee6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d136ed-c3e9-4f06-9d99-ea0d0a1f3b63]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723837, 'reachable_time': 26349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330182, 'error': None, 'target': 'ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.758 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93abc0f9-2033-4fe7-a220-f8724a484807 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:17:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:53.758 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d32d95ee-ce73-45d5-b602-0fe273c44b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d93abc0f9\x2d2033\x2d4fe7\x2da220\x2df8724a484807.mount: Deactivated successfully.
Oct 07 14:17:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 353 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 1.8 MiB/s wr, 316 op/s
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.114 2 DEBUG nova.network.neutron [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:54 compute-0 ceph-mon[74295]: pgmap v1599: 305 pgs: 305 active+clean; 353 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 1.8 MiB/s wr, 316 op/s
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.440 2 DEBUG nova.compute.manager [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-unplugged-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.441 2 DEBUG oslo_concurrency.lockutils [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.441 2 DEBUG oslo_concurrency.lockutils [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.441 2 DEBUG oslo_concurrency.lockutils [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.442 2 DEBUG nova.compute.manager [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] No waiting events found dispatching network-vif-unplugged-682f4b22-7214-4f0d-894b-84354797f177 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.442 2 DEBUG nova.compute.manager [req-e29565f1-d191-45bb-94d3-e87f9cbbf053 req-20a6f65b-6365-4d26-970a-9684d27999c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-unplugged-682f4b22-7214-4f0d-894b-84354797f177 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.571 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Releasing lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.571 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance network_info: |[{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.571 2 DEBUG oslo_concurrency.lockutils [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.572 2 DEBUG nova.network.neutron [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Refreshing network info cache for port 0d353c80-d149-4939-a0c5-da59491d3f99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.574 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start _get_guest_xml network_info=[{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.581 2 WARNING nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.586 2 DEBUG nova.virt.libvirt.host [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.587 2 DEBUG nova.virt.libvirt.host [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.591 2 DEBUG nova.virt.libvirt.host [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.592 2 DEBUG nova.virt.libvirt.host [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.592 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.592 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.592 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.593 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.594 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.594 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.594 2 DEBUG nova.virt.hardware [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:17:54 compute-0 nova_compute[259550]: 2025-10-07 14:17:54.596 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085986839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.034 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.055 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.059 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:17:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305158915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1085986839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.538 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.540 2 DEBUG nova.virt.libvirt.vif [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2135478368',display_name='tempest-ServerRescueTestJSON-server-2135478368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2135478368',id=67,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-hhot1xln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:46Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=9c5c9653-6de6-4975-86c6-887803a35913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.541 2 DEBUG nova.network.os_vif_util [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.542 2 DEBUG nova.network.os_vif_util [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.543 2 DEBUG nova.objects.instance [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.632 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <uuid>9c5c9653-6de6-4975-86c6-887803a35913</uuid>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <name>instance-00000043</name>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSON-server-2135478368</nova:name>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:17:54</nova:creationTime>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:user uuid="7166dede3fa7455fb8ac0840d97d0be8">tempest-ServerRescueTestJSON-1604630325-project-member</nova:user>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:project uuid="d8dca3ec607447dd8f2e6dd1c0714628">tempest-ServerRescueTestJSON-1604630325</nova:project>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <nova:port uuid="0d353c80-d149-4939-a0c5-da59491d3f99">
Oct 07 14:17:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="serial">9c5c9653-6de6-4975-86c6-887803a35913</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="uuid">9c5c9653-6de6-4975-86c6-887803a35913</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9c5c9653-6de6-4975-86c6-887803a35913_disk">
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9c5c9653-6de6-4975-86c6-887803a35913_disk.config">
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:17:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:82:7a:5f"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <target dev="tap0d353c80-d1"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/console.log" append="off"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:17:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:17:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:17:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:17:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:17:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.633 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Preparing to wait for external event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.633 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.633 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.634 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.634 2 DEBUG nova.virt.libvirt.vif [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2135478368',display_name='tempest-ServerRescueTestJSON-server-2135478368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2135478368',id=67,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-hhot1xln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:17:46Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=9c5c9653-6de6-4975-86c6-887803a35913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.635 2 DEBUG nova.network.os_vif_util [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.635 2 DEBUG nova.network.os_vif_util [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.635 2 DEBUG os_vif [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d353c80-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d353c80-d1, col_values=(('external_ids', {'iface-id': '0d353c80-d149-4939-a0c5-da59491d3f99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:7a:5f', 'vm-uuid': '9c5c9653-6de6-4975-86c6-887803a35913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:55 compute-0 NetworkManager[44949]: <info>  [1759846675.6461] manager: (tap0d353c80-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:55 compute-0 nova_compute[259550]: 2025-10-07 14:17:55.654 2 INFO os_vif [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1')
Oct 07 14:17:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 330 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.0 MiB/s wr, 309 op/s
Oct 07 14:17:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.005 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.006 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.006 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No VIF found with MAC fa:16:3e:82:7a:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.007 2 INFO nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Using config drive
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.032 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.320 2 DEBUG nova.network.neutron [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.435 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Releasing lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.438 2 DEBUG nova.virt.libvirt.vif [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:43Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.438 2 DEBUG nova.network.os_vif_util [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.439 2 DEBUG nova.network.os_vif_util [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.439 2 DEBUG os_vif [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.442 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap922633f6-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap922633f6-74, col_values=(('external_ids', {'iface-id': '922633f6-7416-4658-9b86-daf2a7ecfed9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:8a:43', 'vm-uuid': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.4454] manager: (tap922633f6-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.451 2 INFO os_vif [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74')
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.452 2 DEBUG nova.virt.libvirt.vif [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:43Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.453 2 DEBUG nova.network.os_vif_util [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.453 2 DEBUG nova.network.os_vif_util [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.457 2 DEBUG nova.virt.libvirt.guest [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:17:56 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:c8:8a:43"/>
Oct 07 14:17:56 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:17:56 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:17:56 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:17:56 compute-0 nova_compute[259550]:   <target dev="tap922633f6-74"/>
Oct 07 14:17:56 compute-0 nova_compute[259550]: </interface>
Oct 07 14:17:56 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:17:56 compute-0 kernel: tap922633f6-74: entered promiscuous mode
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.4724] manager: (tap922633f6-74): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00618|binding|INFO|Claiming lport 922633f6-7416-4658-9b86-daf2a7ecfed9 for this chassis.
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00619|binding|INFO|922633f6-7416-4658-9b86-daf2a7ecfed9: Claiming fa:16:3e:c8:8a:43 10.100.0.5
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.493 2 INFO nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Creating config drive at /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.499 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdjj_hz_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00620|binding|INFO|Setting lport 922633f6-7416-4658-9b86-daf2a7ecfed9 ovn-installed in OVS
Oct 07 14:17:56 compute-0 systemd-udevd[330277]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.5297] device (tap922633f6-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.5315] device (tap922633f6-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00621|binding|INFO|Setting lport 922633f6-7416-4658-9b86-daf2a7ecfed9 up in Southbound
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.545 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:8a:43 10.100.0.5'], port_security=['fa:16:3e:c8:8a:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28bfd63ece134e70ac7fe3739775042b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56e6356f-f5af-42e7-a71b-c36b572dfc87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d812f007-77e0-49c1-a3a6-f5b9db65bb47, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=922633f6-7416-4658-9b86-daf2a7ecfed9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.547 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 922633f6-7416-4658-9b86-daf2a7ecfed9 in datapath 69e0bc1b-64b0-44c5-8a8e-258e554b00ab bound to our chassis
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.550 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69e0bc1b-64b0-44c5-8a8e-258e554b00ab
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.576 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[60d585fc-5eb2-4515-8433-812ffbe5c19b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.616 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[43ffde77-b825-42d1-85f8-a6acd9e17be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.619 2 DEBUG nova.compute.manager [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-changed-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.619 2 DEBUG nova.compute.manager [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Refreshing instance network info cache due to event network-changed-922633f6-7416-4658-9b86-daf2a7ecfed9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.620 2 DEBUG oslo_concurrency.lockutils [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.620 2 DEBUG oslo_concurrency.lockutils [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.620 2 DEBUG nova.network.neutron [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Refreshing network info cache for port 922633f6-7416-4658-9b86-daf2a7ecfed9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.620 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[09bea464-cf34-40cd-ae23-4832b427873e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.655 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf415ae-34e0-4bf8-b15b-9bfc89896070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 nova_compute[259550]: 2025-10-07 14:17:56.663 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdjj_hz_" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9112355-9c07-4dfc-bc44-420a6e97f4c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69e0bc1b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:cb:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723528, 'reachable_time': 36591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330290, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.698 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5996a16-b161-47a6-932e-1d55f6e7f8c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69e0bc1b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723542, 'tstamp': 723542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330298, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69e0bc1b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723545, 'tstamp': 723545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330298, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.701 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69e0bc1b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.708 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69e0bc1b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69e0bc1b-60, col_values=(('external_ids', {'iface-id': 'fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:17:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:56.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.8189] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct 07 14:17:56 compute-0 NetworkManager[44949]: <info>  [1759846676.8195] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00622|binding|INFO|Releasing lport fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc from this chassis (sb_readonly=0)
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00623|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00624|binding|INFO|Releasing lport fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc from this chassis (sb_readonly=0)
Oct 07 14:17:56 compute-0 ovn_controller[151684]: 2025-10-07T14:17:56Z|00625|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:17:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1305158915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:17:57 compute-0 ceph-mon[74295]: pgmap v1600: 305 pgs: 305 active+clean; 330 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.0 MiB/s wr, 309 op/s
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.395 2 DEBUG nova.storage.rbd_utils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.401 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config 9c5c9653-6de6-4975-86c6-887803a35913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.443 2 DEBUG nova.network.neutron [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updated VIF entry in instance network info cache for port 0d353c80-d149-4939-a0c5-da59491d3f99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.444 2 DEBUG nova.network.neutron [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.451 2 DEBUG nova.compute.manager [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.451 2 DEBUG oslo_concurrency.lockutils [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.452 2 DEBUG oslo_concurrency.lockutils [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.452 2 DEBUG oslo_concurrency.lockutils [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.452 2 DEBUG nova.compute.manager [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] No waiting events found dispatching network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.452 2 WARNING nova.compute.manager [req-f365cee0-15e3-45b6-82d5-0f0990c2c9a7 req-d5a12c33-b5aa-4cdf-b2e3-7274c0ff60a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received unexpected event network-vif-plugged-682f4b22-7214-4f0d-894b-84354797f177 for instance with vm_state active and task_state deleting.
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.483 2 DEBUG oslo_concurrency.lockutils [req-59a4f954-b272-4c14-b2b8-b3b8d6c07c75 req-4d74e084-7bed-4939-a686-1336aa562151 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.553 2 DEBUG nova.virt.libvirt.driver [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.553 2 DEBUG nova.virt.libvirt.driver [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.554 2 DEBUG nova.virt.libvirt.driver [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No VIF found with MAC fa:16:3e:e8:c8:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.554 2 DEBUG nova.virt.libvirt.driver [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] No VIF found with MAC fa:16:3e:c8:8a:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.660 2 DEBUG nova.virt.libvirt.guest [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:name>tempest-AttachInterfacesV270Test-server-957016237</nova:name>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:17:57</nova:creationTime>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:user uuid="63af1dd3b8c54df9a2d8488d7cfc1590">tempest-AttachInterfacesV270Test-1141062112-project-member</nova:user>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:project uuid="28bfd63ece134e70ac7fe3739775042b">tempest-AttachInterfacesV270Test-1141062112</nova:project>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:port uuid="2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9">
Oct 07 14:17:57 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     <nova:port uuid="922633f6-7416-4658-9b86-daf2a7ecfed9">
Oct 07 14:17:57 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:17:57 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:17:57 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:17:57 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:17:57 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.765 2 DEBUG oslo_concurrency.lockutils [None req-e6fd9114-3e9f-45f9-a177-6a20ccb68405 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "interface-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 330 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.0 MiB/s wr, 247 op/s
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.899 2 DEBUG oslo_concurrency.processutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config 9c5c9653-6de6-4975-86c6-887803a35913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.900 2 INFO nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Deleting local config drive /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config because it was imported into RBD.
Oct 07 14:17:57 compute-0 kernel: tap0d353c80-d1: entered promiscuous mode
Oct 07 14:17:57 compute-0 NetworkManager[44949]: <info>  [1759846677.9715] manager: (tap0d353c80-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:57 compute-0 ovn_controller[151684]: 2025-10-07T14:17:57Z|00626|binding|INFO|Claiming lport 0d353c80-d149-4939-a0c5-da59491d3f99 for this chassis.
Oct 07 14:17:57 compute-0 ovn_controller[151684]: 2025-10-07T14:17:57Z|00627|binding|INFO|0d353c80-d149-4939-a0c5-da59491d3f99: Claiming fa:16:3e:82:7a:5f 10.100.0.8
Oct 07 14:17:57 compute-0 NetworkManager[44949]: <info>  [1759846677.9912] device (tap0d353c80-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:17:57 compute-0 NetworkManager[44949]: <info>  [1759846677.9927] device (tap0d353c80-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:17:57 compute-0 ovn_controller[151684]: 2025-10-07T14:17:57Z|00628|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 ovn-installed in OVS
Oct 07 14:17:57 compute-0 nova_compute[259550]: 2025-10-07 14:17:57.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:17:58 compute-0 systemd-machined[214580]: New machine qemu-78-instance-00000043.
Oct 07 14:17:58 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000043.
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.042 2 INFO nova.virt.libvirt.driver [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Deleting instance files /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5_del
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.043 2 INFO nova.virt.libvirt.driver [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Deletion of /var/lib/nova/instances/96c85d5b-5885-4549-8dfd-75a4611179b5_del complete
Oct 07 14:17:58 compute-0 ovn_controller[151684]: 2025-10-07T14:17:58Z|00629|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 up in Southbound
Oct 07 14:17:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:58.068 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:17:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:58.070 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:17:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:58.071 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:17:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:17:58.073 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec905032-5ead-4031-8289-8209a4cfe613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.194 2 INFO nova.compute.manager [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Took 6.07 seconds to destroy the instance on the hypervisor.
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.195 2 DEBUG oslo.service.loopingcall [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.195 2 DEBUG nova.compute.manager [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:17:58 compute-0 nova_compute[259550]: 2025-10-07 14:17:58.196 2 DEBUG nova.network.neutron [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:17:58 compute-0 ovn_controller[151684]: 2025-10-07T14:17:58Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:c8:d3 10.100.0.8
Oct 07 14:17:58 compute-0 ovn_controller[151684]: 2025-10-07T14:17:58Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:c8:d3 10.100.0.8
Oct 07 14:17:58 compute-0 ceph-mon[74295]: pgmap v1601: 305 pgs: 305 active+clean; 330 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.0 MiB/s wr, 247 op/s
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.178 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846679.1775186, 9c5c9653-6de6-4975-86c6-887803a35913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.178 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Started (Lifecycle Event)
Oct 07 14:17:59 compute-0 ovn_controller[151684]: 2025-10-07T14:17:59Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:8a:43 10.100.0.5
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.220 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:59 compute-0 ovn_controller[151684]: 2025-10-07T14:17:59Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:8a:43 10.100.0.5
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.227 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846679.1776834, 9c5c9653-6de6-4975-86c6-887803a35913 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.227 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Paused (Lifecycle Event)
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.261 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.264 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.363 2 DEBUG nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.363 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.363 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.363 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.363 2 DEBUG nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 WARNING nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received unexpected event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 for instance with vm_state active and task_state None.
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 DEBUG nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 DEBUG oslo_concurrency.lockutils [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.364 2 DEBUG nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:17:59 compute-0 nova_compute[259550]: 2025-10-07 14:17:59.365 2 WARNING nova.compute.manager [req-a0c4249a-7846-4a4e-a4c6-cf7cc56291e4 req-8cb327d5-eb0c-4cdf-9130-6d7cd5ae0530 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received unexpected event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 for instance with vm_state active and task_state None.
Oct 07 14:17:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 321 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.2 MiB/s wr, 295 op/s
Oct 07 14:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:00.049 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:00.050 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:00.050 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.282 2 DEBUG nova.compute.manager [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-changed-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.283 2 DEBUG nova.compute.manager [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Refreshing instance network info cache due to event network-changed-5fb8904b-227a-4dac-8c3a-82a23ba9832c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.283 2 DEBUG oslo_concurrency.lockutils [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.283 2 DEBUG oslo_concurrency.lockutils [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.283 2 DEBUG nova.network.neutron [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Refreshing network info cache for port 5fb8904b-227a-4dac-8c3a-82a23ba9832c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.427 2 DEBUG nova.network.neutron [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.456 2 INFO nova.compute.manager [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Took 2.26 seconds to deallocate network for instance.
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.509 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.509 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.684 2 DEBUG oslo_concurrency.processutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.935 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.935 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.936 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.936 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.937 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.938 2 INFO nova.compute.manager [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Terminating instance
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.940 2 DEBUG nova.compute.manager [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.962 2 DEBUG nova.network.neutron [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updated VIF entry in instance network info cache for port 922633f6-7416-4658-9b86-daf2a7ecfed9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.963 2 DEBUG nova.network.neutron [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:00 compute-0 ceph-mon[74295]: pgmap v1602: 305 pgs: 305 active+clean; 321 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.2 MiB/s wr, 295 op/s
Oct 07 14:18:00 compute-0 nova_compute[259550]: 2025-10-07 14:18:00.986 2 DEBUG oslo_concurrency.lockutils [req-b43c5daf-54a3-4b90-8a9a-0a1ac197d5d5 req-9396ca82-d3c5-4678-a656-41236b68b53f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:00 compute-0 kernel: tap2fe29bbe-c4 (unregistering): left promiscuous mode
Oct 07 14:18:00 compute-0 NetworkManager[44949]: <info>  [1759846680.9973] device (tap2fe29bbe-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.005522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681005574, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 840, "num_deletes": 257, "total_data_size": 1052399, "memory_usage": 1068464, "flush_reason": "Manual Compaction"}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681048420, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1042116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32877, "largest_seqno": 33716, "table_properties": {"data_size": 1037900, "index_size": 1868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9507, "raw_average_key_size": 19, "raw_value_size": 1029353, "raw_average_value_size": 2083, "num_data_blocks": 83, "num_entries": 494, "num_filter_entries": 494, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846612, "oldest_key_time": 1759846612, "file_creation_time": 1759846681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 42980 microseconds, and 3852 cpu microseconds.
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.048502) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1042116 bytes OK
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.048524) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.049877) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.049893) EVENT_LOG_v1 {"time_micros": 1759846681049888, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.049911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1048182, prev total WAL file size 1048182, number of live WAL files 2.
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00630|binding|INFO|Releasing lport 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 from this chassis (sb_readonly=0)
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00631|binding|INFO|Setting lport 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 down in Southbound
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00632|binding|INFO|Removing iface tap2fe29bbe-c4 ovn-installed in OVS
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.059 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:c8:d3 10.100.0.8'], port_security=['fa:16:3e:e8:c8:d3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28bfd63ece134e70ac7fe3739775042b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56e6356f-f5af-42e7-a71b-c36b572dfc87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d812f007-77e0-49c1-a3a6-f5b9db65bb47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.060 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 in datapath 69e0bc1b-64b0-44c5-8a8e-258e554b00ab unbound from our chassis
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.061 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69e0bc1b-64b0-44c5-8a8e-258e554b00ab
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.065083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1017KB)], [71(8320KB)]
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681065162, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9562226, "oldest_snapshot_seqno": -1}
Oct 07 14:18:01 compute-0 kernel: tap922633f6-74 (unregistering): left promiscuous mode
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 NetworkManager[44949]: <info>  [1759846681.0806] device (tap922633f6-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.086 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1e20fa-5c71-490a-82ef-33229b893126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00633|binding|INFO|Releasing lport 922633f6-7416-4658-9b86-daf2a7ecfed9 from this chassis (sb_readonly=0)
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00634|binding|INFO|Setting lport 922633f6-7416-4658-9b86-daf2a7ecfed9 down in Southbound
Oct 07 14:18:01 compute-0 ovn_controller[151684]: 2025-10-07T14:18:01Z|00635|binding|INFO|Removing iface tap922633f6-74 ovn-installed in OVS
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.105 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:8a:43 10.100.0.5'], port_security=['fa:16:3e:c8:8a:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28bfd63ece134e70ac7fe3739775042b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56e6356f-f5af-42e7-a71b-c36b572dfc87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d812f007-77e0-49c1-a3a6-f5b9db65bb47, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=922633f6-7416-4658-9b86-daf2a7ecfed9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.121 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fed8db-c084-4a72-baa2-1061f94957a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.126 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[535480f6-2a42-489d-adec-fca13873221f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5717 keys, 9446102 bytes, temperature: kUnknown
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681127859, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9446102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9405977, "index_size": 24719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 144746, "raw_average_key_size": 25, "raw_value_size": 9301407, "raw_average_value_size": 1626, "num_data_blocks": 1006, "num_entries": 5717, "num_filter_entries": 5717, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.128137) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9446102 bytes
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.130590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.3 rd, 150.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.1 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(18.2) write-amplify(9.1) OK, records in: 6243, records dropped: 526 output_compression: NoCompression
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.130616) EVENT_LOG_v1 {"time_micros": 1759846681130606, "job": 40, "event": "compaction_finished", "compaction_time_micros": 62789, "compaction_time_cpu_micros": 26849, "output_level": 6, "num_output_files": 1, "total_output_size": 9446102, "num_input_records": 6243, "num_output_records": 5717, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681131404, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846681132700, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.063126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.132844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.132851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.132855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.132857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:18:01.132859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:18:01 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 07 14:18:01 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000040.scope: Consumed 13.544s CPU time.
Oct 07 14:18:01 compute-0 systemd-machined[214580]: Machine qemu-74-instance-00000040 terminated.
Oct 07 14:18:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320156030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.160 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb67d87-577b-425c-92ea-d2543a0cd02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 NetworkManager[44949]: <info>  [1759846681.1724] manager: (tap922633f6-74): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.184 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[216bcc37-e406-4868-a713-0fabb65c66f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69e0bc1b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:cb:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723528, 'reachable_time': 42978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330441, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.190 2 DEBUG oslo_concurrency.processutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.194 2 INFO nova.virt.libvirt.driver [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Instance destroyed successfully.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.195 2 DEBUG nova.objects.instance [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lazy-loading 'resources' on Instance uuid dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.205 2 DEBUG nova.compute.provider_tree [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.206 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36761449-a77e-453c-a983-7e057f031620]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69e0bc1b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723542, 'tstamp': 723542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330453, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69e0bc1b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723545, 'tstamp': 723545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330453, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.208 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69e0bc1b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.217 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69e0bc1b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.217 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.217 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69e0bc1b-60, col_values=(('external_ids', {'iface-id': 'fba4a7b9-3cb1-4ad4-a5e2-52a6e74ddfdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.219 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 922633f6-7416-4658-9b86-daf2a7ecfed9 in datapath 69e0bc1b-64b0-44c5-8a8e-258e554b00ab unbound from our chassis
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.220 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69e0bc1b-64b0-44c5-8a8e-258e554b00ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.221 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03fa971c-881e-43c9-8591-6759fb9c3556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.222 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab namespace which is not needed anymore
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.229 2 DEBUG nova.virt.libvirt.vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name
='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:43Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.229 2 DEBUG nova.network.os_vif_util [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.230 2 DEBUG nova.network.os_vif_util [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.231 2 DEBUG os_vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe29bbe-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.237 2 DEBUG nova.scheduler.client.report [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.246 2 INFO os_vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:c8:d3,bridge_name='br-int',has_traffic_filtering=True,id=2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe29bbe-c4')
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.246 2 DEBUG nova.virt.libvirt.vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-957016237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-957016237',id=64,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28bfd63ece134e70ac7fe3739775042b',ramdisk_id='',reservation_id='r-nwbsytri',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name
='tempest-AttachInterfacesV270Test-1141062112',owner_user_name='tempest-AttachInterfacesV270Test-1141062112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:43Z,user_data=None,user_id='63af1dd3b8c54df9a2d8488d7cfc1590',uuid=dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.247 2 DEBUG nova.network.os_vif_util [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converting VIF {"id": "922633f6-7416-4658-9b86-daf2a7ecfed9", "address": "fa:16:3e:c8:8a:43", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922633f6-74", "ovs_interfaceid": "922633f6-7416-4658-9b86-daf2a7ecfed9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.247 2 DEBUG nova.network.os_vif_util [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.247 2 DEBUG os_vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.249 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap922633f6-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.256 2 INFO os_vif [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:8a:43,bridge_name='br-int',has_traffic_filtering=True,id=922633f6-7416-4658-9b86-daf2a7ecfed9,network=Network(69e0bc1b-64b0-44c5-8a8e-258e554b00ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922633f6-74')
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.272 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.302 2 INFO nova.scheduler.client.report [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Deleted allocations for instance 96c85d5b-5885-4549-8dfd-75a4611179b5
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.363 2 DEBUG oslo_concurrency.lockutils [None req-09cb8cab-8124-4b34-a200-13e2bb6b6f3d 4526fd07b25c4e6c9fd61f99f0451dba 9c7fa9a5146949b5a1222248ae125eff - - default default] Lock "96c85d5b-5885-4549-8dfd-75a4611179b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [NOTICE]   (327972) : haproxy version is 2.8.14-c23fe91
Oct 07 14:18:01 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [NOTICE]   (327972) : path to executable is /usr/sbin/haproxy
Oct 07 14:18:01 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [WARNING]  (327972) : Exiting Master process...
Oct 07 14:18:01 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [ALERT]    (327972) : Current worker (327974) exited with code 143 (Terminated)
Oct 07 14:18:01 compute-0 neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab[327967]: [WARNING]  (327972) : All workers exited. Exiting... (0)
Oct 07 14:18:01 compute-0 systemd[1]: libpod-2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14.scope: Deactivated successfully.
Oct 07 14:18:01 compute-0 conmon[327967]: conmon 2198d18df16bdc30b641 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14.scope/container/memory.events
Oct 07 14:18:01 compute-0 podman[330491]: 2025-10-07 14:18:01.384138499 +0000 UTC m=+0.060092878 container died 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14-userdata-shm.mount: Deactivated successfully.
Oct 07 14:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ac8babd68cbecc61f117f5dbdb724a7fe81656e3e6bb12c40820e4b694fa319-merged.mount: Deactivated successfully.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.503 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.504 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.504 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.504 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.504 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Processing event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.504 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 WARNING nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state building and task_state spawning.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.505 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-unplugged-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.506 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.506 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.506 2 DEBUG oslo_concurrency.lockutils [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.506 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-unplugged-922633f6-7416-4658-9b86-daf2a7ecfed9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.506 2 DEBUG nova.compute.manager [req-288ac9d8-01b8-4895-9f98-0050061c2d4f req-dc0c21db-37ac-4f92-9beb-04057bfa3e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-unplugged-922633f6-7416-4658-9b86-daf2a7ecfed9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.507 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.510 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846681.5101988, 9c5c9653-6de6-4975-86c6-887803a35913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.510 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Resumed (Lifecycle Event)
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.512 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.518 2 INFO nova.virt.libvirt.driver [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance spawned successfully.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.518 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:18:01 compute-0 podman[330491]: 2025-10-07 14:18:01.523915532 +0000 UTC m=+0.199869881 container cleanup 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.537 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:01 compute-0 systemd[1]: libpod-conmon-2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14.scope: Deactivated successfully.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.543 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.545 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.545 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.546 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.546 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.547 2 DEBUG nova.virt.libvirt.driver [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.553 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.588 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.608 2 INFO nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Took 14.75 seconds to spawn the instance on the hypervisor.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.609 2 DEBUG nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:01 compute-0 podman[330520]: 2025-10-07 14:18:01.643606444 +0000 UTC m=+0.089372242 container remove 2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36756d82-9b1f-4ccb-aa34-c7fbf853aa02]: (4, ('Tue Oct  7 02:18:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab (2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14)\n2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14\nTue Oct  7 02:18:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab (2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14)\n2198d18df16bdc30b641e53eb9cf8b1cff56a4518712b1eed9500ac55e5acc14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c303c8a7-82a5-416a-9d47-59e4a68731de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.658 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69e0bc1b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 kernel: tap69e0bc1b-60: left promiscuous mode
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.667 2 INFO nova.compute.manager [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Took 16.81 seconds to build instance.
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3f19e2-a6b8-44e1-8090-34fe53fbba1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 nova_compute[259550]: 2025-10-07 14:18:01.687 2 DEBUG oslo_concurrency.lockutils [None req-34f5fcce-6e7e-49b4-b24d-7e88c4bf897c 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcba461-8e21-45ea-81b8-8cdd61fcc19e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.708 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[87a4b564-26e9-4fb7-ac91-753935fbb46a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.732 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c57ba1b-661a-4798-affb-e045091f2351]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723519, 'reachable_time': 24331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330551, 'error': None, 'target': 'ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d69e0bc1b\x2d64b0\x2d44c5\x2d8a8e\x2d258e554b00ab.mount: Deactivated successfully.
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.735 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69e0bc1b-64b0-44c5-8a8e-258e554b00ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:18:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:01.735 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5748cf-b1a3-412f-a756-68e51a6e2be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:01 compute-0 podman[330534]: 2025-10-07 14:18:01.76917059 +0000 UTC m=+0.071104159 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:18:01 compute-0 podman[330535]: 2025-10-07 14:18:01.781173088 +0000 UTC m=+0.074749096 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:18:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 335 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 254 op/s
Oct 07 14:18:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3320156030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:02 compute-0 ceph-mon[74295]: pgmap v1603: 305 pgs: 305 active+clean; 335 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 254 op/s
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.471 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Received event network-vif-deleted-682f4b22-7214-4f0d-894b-84354797f177 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.472 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-unplugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.472 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.472 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-unplugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-unplugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.473 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.474 2 DEBUG oslo_concurrency.lockutils [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.474 2 DEBUG nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:02 compute-0 nova_compute[259550]: 2025-10-07 14:18:02.474 2 WARNING nova.compute.manager [req-42d85051-5e8b-4057-a6f9-18eb6d036e97 req-a1927c75-39d1-4289-a78c-d24e3923419d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received unexpected event network-vif-plugged-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 for instance with vm_state active and task_state deleting.
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.006 2 DEBUG nova.network.neutron [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updated VIF entry in instance network info cache for port 5fb8904b-227a-4dac-8c3a-82a23ba9832c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.007 2 DEBUG nova.network.neutron [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.097 2 DEBUG oslo_concurrency.lockutils [req-1b88d95a-18d0-4462-ad36-2bf4324c304f req-081c3f77-af13-415e-9f6e-399ff0e856f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.635 2 DEBUG nova.compute.manager [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.635 2 DEBUG oslo_concurrency.lockutils [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.635 2 DEBUG oslo_concurrency.lockutils [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.635 2 DEBUG oslo_concurrency.lockutils [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.636 2 DEBUG nova.compute.manager [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] No waiting events found dispatching network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.636 2 WARNING nova.compute.manager [req-2591b50e-87fa-4d24-977f-5025a4188bac req-67afad24-193f-4ef3-9ffd-bc94d4a41276 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received unexpected event network-vif-plugged-922633f6-7416-4658-9b86-daf2a7ecfed9 for instance with vm_state active and task_state deleting.
Oct 07 14:18:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 304 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 285 op/s
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.838 2 INFO nova.virt.libvirt.driver [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Deleting instance files /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_del
Oct 07 14:18:03 compute-0 nova_compute[259550]: 2025-10-07 14:18:03.839 2 INFO nova.virt.libvirt.driver [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Deletion of /var/lib/nova/instances/dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e_del complete
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.012 2 INFO nova.compute.manager [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Took 3.07 seconds to destroy the instance on the hypervisor.
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.013 2 DEBUG oslo.service.loopingcall [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.013 2 DEBUG nova.compute.manager [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.013 2 DEBUG nova.network.neutron [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.761 2 INFO nova.compute.manager [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Rescuing
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.762 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.763 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquired lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:04 compute-0 nova_compute[259550]: 2025-10-07 14:18:04.763 2 DEBUG nova.network.neutron [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:04 compute-0 ceph-mon[74295]: pgmap v1604: 305 pgs: 305 active+clean; 304 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 285 op/s
Oct 07 14:18:05 compute-0 ovn_controller[151684]: 2025-10-07T14:18:05Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:18:05 compute-0 ovn_controller[151684]: 2025-10-07T14:18:05Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:18:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 263 op/s
Oct 07 14:18:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.280 2 DEBUG nova.compute.manager [req-3d987ef4-f50a-4794-bc6b-aaabb611d2a5 req-f7f09b55-5ba8-43d3-97dc-07f95f859281 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-deleted-922633f6-7416-4658-9b86-daf2a7ecfed9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.281 2 INFO nova.compute.manager [req-3d987ef4-f50a-4794-bc6b-aaabb611d2a5 req-f7f09b55-5ba8-43d3-97dc-07f95f859281 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Neutron deleted interface 922633f6-7416-4658-9b86-daf2a7ecfed9; detaching it from the instance and deleting it from the info cache
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.281 2 DEBUG nova.network.neutron [req-3d987ef4-f50a-4794-bc6b-aaabb611d2a5 req-f7f09b55-5ba8-43d3-97dc-07f95f859281 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [{"id": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "address": "fa:16:3e:e8:c8:d3", "network": {"id": "69e0bc1b-64b0-44c5-8a8e-258e554b00ab", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-219105653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28bfd63ece134e70ac7fe3739775042b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe29bbe-c4", "ovs_interfaceid": "2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:06 compute-0 ceph-mon[74295]: pgmap v1605: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 263 op/s
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.327 2 DEBUG nova.compute.manager [req-3d987ef4-f50a-4794-bc6b-aaabb611d2a5 req-f7f09b55-5ba8-43d3-97dc-07f95f859281 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Detach interface failed, port_id=922633f6-7416-4658-9b86-daf2a7ecfed9, reason: Instance dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.794 2 DEBUG nova.network.neutron [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.874 2 DEBUG nova.network.neutron [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.944 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Releasing lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:06 compute-0 nova_compute[259550]: 2025-10-07 14:18:06.953 2 INFO nova.compute.manager [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Took 2.94 seconds to deallocate network for instance.
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.032 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.033 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.132 2 DEBUG oslo_concurrency.processutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:07 compute-0 ovn_controller[151684]: 2025-10-07T14:18:07Z|00636|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.207 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.356 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846672.354959, 96c85d5b-5885-4549-8dfd-75a4611179b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.357 2 INFO nova.compute.manager [-] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] VM Stopped (Lifecycle Event)
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.388 2 DEBUG nova.compute.manager [None req-270744b9-249b-4dda-8fd1-9ebbe6338e18 - - - - - -] [instance: 96c85d5b-5885-4549-8dfd-75a4611179b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407848708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.618 2 DEBUG oslo_concurrency.processutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.626 2 DEBUG nova.compute.provider_tree [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2407848708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.775 2 DEBUG nova.scheduler.client.report [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 214 op/s
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.858 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:07 compute-0 nova_compute[259550]: 2025-10-07 14:18:07.953 2 INFO nova.scheduler.client.report [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Deleted allocations for instance dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e
Oct 07 14:18:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:08.207 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:08.208 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:18:08 compute-0 nova_compute[259550]: 2025-10-07 14:18:08.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:08 compute-0 nova_compute[259550]: 2025-10-07 14:18:08.229 2 DEBUG oslo_concurrency.lockutils [None req-06bc5d55-af73-4536-9bfc-b822374e3d2c 63af1dd3b8c54df9a2d8488d7cfc1590 28bfd63ece134e70ac7fe3739775042b - - default default] Lock "dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:08 compute-0 nova_compute[259550]: 2025-10-07 14:18:08.447 2 DEBUG nova.compute.manager [req-fc44b312-2e96-4bc9-85a4-b2f1199f019d req-da215aee-80c9-4037-94e8-525e5767118d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Received event network-vif-deleted-2fe29bbe-c456-49f8-a7fe-93bb5d7ee1c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:08 compute-0 ceph-mon[74295]: pgmap v1606: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 214 op/s
Oct 07 14:18:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 286 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 237 op/s
Oct 07 14:18:11 compute-0 ceph-mon[74295]: pgmap v1607: 305 pgs: 305 active+clean; 286 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 237 op/s
Oct 07 14:18:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:11 compute-0 nova_compute[259550]: 2025-10-07 14:18:11.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:11 compute-0 nova_compute[259550]: 2025-10-07 14:18:11.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 295 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Oct 07 14:18:12 compute-0 ceph-mon[74295]: pgmap v1608: 305 pgs: 305 active+clean; 295 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Oct 07 14:18:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 307 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Oct 07 14:18:14 compute-0 ceph-mon[74295]: pgmap v1609: 305 pgs: 305 active+clean; 307 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Oct 07 14:18:15 compute-0 podman[330597]: 2025-10-07 14:18:15.078584184 +0000 UTC m=+0.063521329 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 07 14:18:15 compute-0 podman[330598]: 2025-10-07 14:18:15.12989439 +0000 UTC m=+0.104642476 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 319 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 07 14:18:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.189 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846681.1875215, dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.190 2 INFO nova.compute.manager [-] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] VM Stopped (Lifecycle Event)
Oct 07 14:18:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:16.209 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.281 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.282 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.282 2 INFO nova.compute.manager [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Rebooting instance
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.801 2 DEBUG nova.compute.manager [None req-1acc7d2a-4b63-4b52-be93-9c22b212e314 - - - - - -] [instance: dbf99021-cfec-45e3-bf0e-c5a2fb1bf90e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.802 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.803 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:16 compute-0 nova_compute[259550]: 2025-10-07 14:18:16.804 2 DEBUG nova.network.neutron [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:16 compute-0 ceph-mon[74295]: pgmap v1610: 305 pgs: 305 active+clean; 319 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 07 14:18:17 compute-0 ovn_controller[151684]: 2025-10-07T14:18:17Z|00637|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:18:17 compute-0 nova_compute[259550]: 2025-10-07 14:18:17.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:17 compute-0 nova_compute[259550]: 2025-10-07 14:18:17.277 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:18:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 319 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Oct 07 14:18:18 compute-0 ceph-mon[74295]: pgmap v1611: 305 pgs: 305 active+clean; 319 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Oct 07 14:18:19 compute-0 kernel: tap0d353c80-d1 (unregistering): left promiscuous mode
Oct 07 14:18:19 compute-0 NetworkManager[44949]: <info>  [1759846699.6261] device (tap0d353c80-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:19 compute-0 ovn_controller[151684]: 2025-10-07T14:18:19Z|00638|binding|INFO|Releasing lport 0d353c80-d149-4939-a0c5-da59491d3f99 from this chassis (sb_readonly=0)
Oct 07 14:18:19 compute-0 ovn_controller[151684]: 2025-10-07T14:18:19Z|00639|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 down in Southbound
Oct 07 14:18:19 compute-0 nova_compute[259550]: 2025-10-07 14:18:19.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:19 compute-0 ovn_controller[151684]: 2025-10-07T14:18:19Z|00640|binding|INFO|Removing iface tap0d353c80-d1 ovn-installed in OVS
Oct 07 14:18:19 compute-0 nova_compute[259550]: 2025-10-07 14:18:19.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:19 compute-0 nova_compute[259550]: 2025-10-07 14:18:19.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:19 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 07 14:18:19 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Consumed 13.838s CPU time.
Oct 07 14:18:19 compute-0 systemd-machined[214580]: Machine qemu-78-instance-00000043 terminated.
Oct 07 14:18:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:19.730 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:19.732 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:18:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:19.732 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:19.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d077b1-c6b5-4c26-9ea6-a39b2cdaa53b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 328 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Oct 07 14:18:19 compute-0 nova_compute[259550]: 2025-10-07 14:18:19.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:19 compute-0 nova_compute[259550]: 2025-10-07 14:18:19.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.274 2 DEBUG nova.network.neutron [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.292 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.295 2 DEBUG nova.compute.manager [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.296 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance shutdown successfully after 13 seconds.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.305 2 INFO nova.virt.libvirt.driver [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance destroyed successfully.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.306 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.328 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Attempting rescue
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.329 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.334 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.335 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Creating image(s)
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.358 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.362 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.407 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.432 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.436 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.475 2 DEBUG nova.compute.manager [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.476 2 DEBUG oslo_concurrency.lockutils [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.477 2 DEBUG oslo_concurrency.lockutils [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.477 2 DEBUG oslo_concurrency.lockutils [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.478 2 DEBUG nova.compute.manager [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.478 2 WARNING nova.compute.manager [req-acae962c-7d95-436d-bfed-d4ff8aeb613d req-20eea11f-d70c-4056-9d5c-2071ad3c2e50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state active and task_state rescuing.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.513 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.514 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.514 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.515 2 DEBUG oslo_concurrency.lockutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:20 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:18:20 compute-0 NetworkManager[44949]: <info>  [1759846700.5373] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.540 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.546 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:20 compute-0 ovn_controller[151684]: 2025-10-07T14:18:20Z|00641|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:18:20 compute-0 ovn_controller[151684]: 2025-10-07T14:18:20Z|00642|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:18:20 compute-0 ovn_controller[151684]: 2025-10-07T14:18:20Z|00643|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.554 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.555 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.556 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.557 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de171658-2ca2-459c-9e22-df4c126e7169]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.557 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:18:20 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Consumed 13.557s CPU time.
Oct 07 14:18:20 compute-0 systemd-machined[214580]: Machine qemu-77-instance-00000042 terminated.
Oct 07 14:18:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [NOTICE]   (329753) : haproxy version is 2.8.14-c23fe91
Oct 07 14:18:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [NOTICE]   (329753) : path to executable is /usr/sbin/haproxy
Oct 07 14:18:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [WARNING]  (329753) : Exiting Master process...
Oct 07 14:18:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [ALERT]    (329753) : Current worker (329755) exited with code 143 (Terminated)
Oct 07 14:18:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[329749]: [WARNING]  (329753) : All workers exited. Exiting... (0)
Oct 07 14:18:20 compute-0 systemd[1]: libpod-4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75.scope: Deactivated successfully.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 podman[330773]: 2025-10-07 14:18:20.707578336 +0000 UTC m=+0.051930843 container died 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.728 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.729 2 DEBUG nova.objects.instance [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.757 2 DEBUG nova.virt.libvirt.vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.757 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.758 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.759 2 DEBUG os_vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb8904b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75-userdata-shm.mount: Deactivated successfully.
Oct 07 14:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e126eb1439b5e971ce0fae5a29f82a5cc1d5044424533229e02ef0bc79948baf-merged.mount: Deactivated successfully.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.775 2 INFO os_vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.781 2 DEBUG nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start _get_guest_xml network_info=[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.790 2 WARNING nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.801 2 DEBUG nova.virt.libvirt.host [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.802 2 DEBUG nova.virt.libvirt.host [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.806 2 DEBUG nova.virt.libvirt.host [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.806 2 DEBUG nova.virt.libvirt.host [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.807 2 DEBUG nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.807 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.807 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.807 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.808 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.808 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.808 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.808 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.808 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.809 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.809 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.809 2 DEBUG nova.virt.hardware [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.809 2 DEBUG nova.objects.instance [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 podman[330773]: 2025-10-07 14:18:20.832073855 +0000 UTC m=+0.176426392 container cleanup 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:18:20 compute-0 systemd[1]: libpod-conmon-4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75.scope: Deactivated successfully.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.859 2 DEBUG oslo_concurrency.processutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:20 compute-0 podman[330820]: 2025-10-07 14:18:20.901703255 +0000 UTC m=+0.045633987 container remove 4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.909 2 DEBUG nova.compute.manager [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.908 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[786dd2d8-50ec-428b-a012-eddc4cf01952]: (4, ('Tue Oct  7 02:18:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75)\n4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75\nTue Oct  7 02:18:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75)\n4cab5be5e70ba83d4d0f07700cb0e0d0c42eb37035ad88a9b5747ecac7d11c75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.910 2 DEBUG oslo_concurrency.lockutils [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.910 2 DEBUG oslo_concurrency.lockutils [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.910 2 DEBUG oslo_concurrency.lockutils [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.911 2 DEBUG nova.compute.manager [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.911 2 WARNING nova.compute.manager [req-27cb5294-234e-4616-ad10-83d7a3e481be req-978241e2-294b-4a04-827a-8123dc7e52cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state reboot_started_hard.
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.911 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63934045-34be-4115-84fe-6f8fdb7ddbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.912 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.912 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.913 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b9753893-1954-4e39-b45f-a77704e8e7cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.964 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.965 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start _get_guest_xml network_info=[{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:82:7a:5f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.966 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'resources' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.970 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8ce1a6-5fc7-4f57-9da1-9ae11ab8596d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:20.972 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[507f4e2c-76c3-451c-888e-e40bad150d94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:20 compute-0 ceph-mon[74295]: pgmap v1612: 305 pgs: 305 active+clean; 328 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.986 2 WARNING nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.992 2 DEBUG nova.virt.libvirt.host [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.992 2 DEBUG nova.virt.libvirt.host [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.997 2 DEBUG nova.virt.libvirt.host [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.997 2 DEBUG nova.virt.libvirt.host [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.998 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.998 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.999 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:18:20 compute-0 nova_compute[259550]: 2025-10-07 14:18:20.999 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.000 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.000 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.000 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.001 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.001 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.001 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.002 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.002 2 DEBUG nova.virt.hardware [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.003 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57874623-e1f1-48c3-9743-c374c10110c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724434, 'reachable_time': 27430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330839, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.006 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.006 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3dfc79-ed5a-4746-9424-802e312abe56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.027 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949707929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.341 2 DEBUG oslo_concurrency.processutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.375 2 DEBUG oslo_concurrency.processutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763646530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.537 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.538 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621368882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.798 2 DEBUG oslo_concurrency.processutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.800 2 DEBUG nova.virt.libvirt.vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.801 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.802 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.803 2 DEBUG nova.objects.instance [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.818 2 DEBUG nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <uuid>1d580bbb-a6fd-442c-8524-409ba5c344d0</uuid>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <name>instance-00000042</name>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-1718645090</nova:name>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:18:20</nova:creationTime>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <nova:port uuid="5fb8904b-227a-4dac-8c3a-82a23ba9832c">
Oct 07 14:18:21 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <system>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="serial">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="uuid">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </system>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <os>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </os>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <features>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </features>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk">
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config">
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:af:c7:50"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <target dev="tap5fb8904b-22"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log" append="off"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <video>
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </video>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:18:21 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:18:21 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:18:21 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:18:21 compute-0 nova_compute[259550]: </domain>
Oct 07 14:18:21 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.824 2 DEBUG nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.824 2 DEBUG nova.virt.libvirt.driver [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 328 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.825 2 DEBUG nova.virt.libvirt.vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.827 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.827 2 DEBUG nova.network.os_vif_util [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.828 2 DEBUG os_vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 NetworkManager[44949]: <info>  [1759846701.8370] manager: (tap5fb8904b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.843 2 INFO os_vif [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:18:21 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:18:21 compute-0 systemd-udevd[330646]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:21 compute-0 NetworkManager[44949]: <info>  [1759846701.9263] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 ovn_controller[151684]: 2025-10-07T14:18:21Z|00644|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:18:21 compute-0 ovn_controller[151684]: 2025-10-07T14:18:21Z|00645|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:18:21 compute-0 NetworkManager[44949]: <info>  [1759846701.9446] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:21 compute-0 NetworkManager[44949]: <info>  [1759846701.9475] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:21 compute-0 ovn_controller[151684]: 2025-10-07T14:18:21Z|00646|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:18:21 compute-0 ovn_controller[151684]: 2025-10-07T14:18:21Z|00647|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.948 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.950 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.952 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.966 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e20a172c-182a-4c08-ac4e-7e8b09848489]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.966 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.969 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.969 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23cea40b-907a-4960-a4bd-86cf9cef375d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.971 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bfd7a1-6920-48d4-9062-c9aa3e38a900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:21 compute-0 nova_compute[259550]: 2025-10-07 14:18:21.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:21 compute-0 systemd-machined[214580]: New machine qemu-79-instance-00000042.
Oct 07 14:18:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3949707929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3763646530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1621368882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:21.984 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7681558f-71a0-4d81-ae5d-191908b5b09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000042.
Oct 07 14:18:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037205427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.015 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3709b059-451b-4f82-a1e8-42f379d89b67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.029 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.030 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.048 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb95afc8-a290-4aa8-a96b-e3098cd83cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.053 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6bf349-3848-4bd6-90cb-e10e1370d71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 NetworkManager[44949]: <info>  [1759846702.0544] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/288)
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.086 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a2451c92-ed86-4f1c-9eb9-67098f6c90f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.091 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8c25844b-9b6f-4a7c-886a-f02dc81ddca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 NetworkManager[44949]: <info>  [1759846702.1186] device (tapc21c541a-00): carrier: link connected
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.125 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[91fa8cb7-dc5d-485f-864b-414e55142d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25460214-f170-4c01-a7d4-bfd959178cba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728568, 'reachable_time': 28850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330999, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.165 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd4d665-0911-40ef-bee5-d68fa7cd5e6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728568, 'tstamp': 728568}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331000, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2f0d68-948f-4bee-b9d2-cf51cfd9b268]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728568, 'reachable_time': 28850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331005, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4332afe-4f23-4ef6-982d-1e962927e8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.283 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c155169-5249-4614-8798-d36b5841cf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:22 compute-0 NetworkManager[44949]: <info>  [1759846702.2883] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct 07 14:18:22 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:22 compute-0 ovn_controller[151684]: 2025-10-07T14:18:22Z|00648|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.318 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.319 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76ac2899-3602-4b4b-a498-655113e7cff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.320 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:18:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:22.321 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:18:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3056492333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.496 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.497 2 DEBUG nova.virt.libvirt.vif [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2135478368',display_name='tempest-ServerRescueTestJSON-server-2135478368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2135478368',id=67,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:18:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-hhot1xln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:18:01Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=9c5c9653-6de6-4975-86c6-887803a35913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:82:7a:5f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.498 2 DEBUG nova.network.os_vif_util [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1885672423-network", "vif_mac": "fa:16:3e:82:7a:5f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.499 2 DEBUG nova.network.os_vif_util [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.500 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.514 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <uuid>9c5c9653-6de6-4975-86c6-887803a35913</uuid>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <name>instance-00000043</name>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSON-server-2135478368</nova:name>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:18:20</nova:creationTime>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:user uuid="7166dede3fa7455fb8ac0840d97d0be8">tempest-ServerRescueTestJSON-1604630325-project-member</nova:user>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:project uuid="d8dca3ec607447dd8f2e6dd1c0714628">tempest-ServerRescueTestJSON-1604630325</nova:project>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <nova:port uuid="0d353c80-d149-4939-a0c5-da59491d3f99">
Oct 07 14:18:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <system>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="serial">9c5c9653-6de6-4975-86c6-887803a35913</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="uuid">9c5c9653-6de6-4975-86c6-887803a35913</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </system>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <os>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </os>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <features>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </features>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9c5c9653-6de6-4975-86c6-887803a35913_disk.rescue">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9c5c9653-6de6-4975-86c6-887803a35913_disk">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <target dev="vdb" bus="virtio"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9c5c9653-6de6-4975-86c6-887803a35913_disk.config.rescue">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:82:7a:5f"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <target dev="tap0d353c80-d1"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/console.log" append="off"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <video>
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </video>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:18:22 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:18:22 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:18:22 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:18:22 compute-0 nova_compute[259550]: </domain>
Oct 07 14:18:22 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.526 2 INFO nova.virt.libvirt.driver [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance destroyed successfully.
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.575 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.576 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.576 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.576 2 DEBUG nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] No VIF found with MAC fa:16:3e:82:7a:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.577 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Using config drive
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.598 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.609 2 DEBUG nova.compute.manager [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.609 2 DEBUG oslo_concurrency.lockutils [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.610 2 DEBUG oslo_concurrency.lockutils [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.610 2 DEBUG oslo_concurrency.lockutils [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.610 2 DEBUG nova.compute.manager [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.611 2 WARNING nova.compute.manager [req-321a8e9c-209f-4d2e-9fd8-9ad54b9f9c0c req-ae11878c-88bb-4669-9a3b-f9b1996725fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state active and task_state rescuing.
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.619 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.647 2 DEBUG nova.objects.instance [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'keypairs' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:18:22
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:22 compute-0 podman[331119]: 2025-10-07 14:18:22.705069134 +0000 UTC m=+0.058614820 container create 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:18:22 compute-0 systemd[1]: Started libpod-conmon-4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e.scope.
Oct 07 14:18:22 compute-0 podman[331119]: 2025-10-07 14:18:22.676802097 +0000 UTC m=+0.030347803 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ad967c0191c3c6a677b3297589d2973c5e25304c9cea9842cd1971eb0249f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:22 compute-0 podman[331119]: 2025-10-07 14:18:22.811188907 +0000 UTC m=+0.164734613 container init 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.814 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1d580bbb-a6fd-442c-8524-409ba5c344d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.814 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846702.813474, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.814 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.816 2 DEBUG nova.compute.manager [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:18:22 compute-0 podman[331119]: 2025-10-07 14:18:22.818123181 +0000 UTC m=+0.171668867 container start 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.820 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance rebooted successfully.
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.820 2 DEBUG nova.compute.manager [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.837 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.841 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:22 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [NOTICE]   (331139) : New worker (331141) forked
Oct 07 14:18:22 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [NOTICE]   (331139) : Loading success.
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.866 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.867 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846702.8165054, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.867 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.871 2 DEBUG oslo_concurrency.lockutils [None req-bf41828b-637b-4835-89dd-e937c7a9eaee 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.888 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:22 compute-0 nova_compute[259550]: 2025-10-07 14:18:22.892 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:18:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:18:22 compute-0 ceph-mon[74295]: pgmap v1613: 305 pgs: 305 active+clean; 328 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Oct 07 14:18:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3037205427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3056492333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.029 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Creating config drive at /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.035 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmrmtcckz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.104 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.105 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.105 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.105 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 WARNING nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.106 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.107 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.107 2 WARNING nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.107 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.107 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.107 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.108 2 DEBUG oslo_concurrency.lockutils [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.108 2 DEBUG nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.108 2 WARNING nova.compute.manager [req-f6097a4d-1dd1-4329-ba5b-8c22b09c01f0 req-97889dfc-1e75-4697-bb44-a14df31dba8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.184 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmrmtcckz" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.212 2 DEBUG nova.storage.rbd_utils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] rbd image 9c5c9653-6de6-4975-86c6-887803a35913_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.218 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue 9c5c9653-6de6-4975-86c6-887803a35913_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.398 2 DEBUG oslo_concurrency.processutils [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue 9c5c9653-6de6-4975-86c6-887803a35913_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.399 2 INFO nova.virt.libvirt.driver [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Deleting local config drive /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913/disk.config.rescue because it was imported into RBD.
Oct 07 14:18:23 compute-0 kernel: tap0d353c80-d1: entered promiscuous mode
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:23 compute-0 ovn_controller[151684]: 2025-10-07T14:18:23Z|00649|binding|INFO|Claiming lport 0d353c80-d149-4939-a0c5-da59491d3f99 for this chassis.
Oct 07 14:18:23 compute-0 ovn_controller[151684]: 2025-10-07T14:18:23Z|00650|binding|INFO|0d353c80-d149-4939-a0c5-da59491d3f99: Claiming fa:16:3e:82:7a:5f 10.100.0.8
Oct 07 14:18:23 compute-0 NetworkManager[44949]: <info>  [1759846703.4503] manager: (tap0d353c80-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Oct 07 14:18:23 compute-0 ovn_controller[151684]: 2025-10-07T14:18:23Z|00651|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 ovn-installed in OVS
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:23 compute-0 ovn_controller[151684]: 2025-10-07T14:18:23Z|00652|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 up in Southbound
Oct 07 14:18:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:23.485 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:23.487 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:18:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:23.488 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:23.490 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cde735ea-bc66-4896-97c8-8e7a14dd4ddb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:23 compute-0 systemd-machined[214580]: New machine qemu-80-instance-00000043.
Oct 07 14:18:23 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000043.
Oct 07 14:18:23 compute-0 systemd-udevd[331203]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:23 compute-0 NetworkManager[44949]: <info>  [1759846703.5402] device (tap0d353c80-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:23 compute-0 NetworkManager[44949]: <info>  [1759846703.5413] device (tap0d353c80-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.628 2 INFO nova.compute.manager [None req-be36edcf-7a3b-4020-959f-c183d1124fe5 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Get console output
Oct 07 14:18:23 compute-0 nova_compute[259550]: 2025-10-07 14:18:23.637 2 INFO oslo.privsep.daemon [None req-be36edcf-7a3b-4020-959f-c183d1124fe5 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp06ftghko/privsep.sock']
Oct 07 14:18:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 358 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 664 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.373 2 DEBUG nova.compute.manager [None req-c0dfa55d-92a7-44b0-bab8-c718d823ed6d 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.374 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 9c5c9653-6de6-4975-86c6-887803a35913 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.374 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846704.3694518, 9c5c9653-6de6-4975-86c6-887803a35913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.374 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Resumed (Lifecycle Event)
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.431 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.435 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.488 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.488 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846704.369571, 9c5c9653-6de6-4975-86c6-887803a35913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.488 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Started (Lifecycle Event)
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.501 2 INFO oslo.privsep.daemon [None req-be36edcf-7a3b-4020-959f-c183d1124fe5 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Spawned new privsep daemon via rootwrap
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.364 29474 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.369 29474 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.371 29474 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.371 29474 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29474
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.551 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.554 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:24 compute-0 nova_compute[259550]: 2025-10-07 14:18:24.599 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:18:25 compute-0 ceph-mon[74295]: pgmap v1614: 305 pgs: 305 active+clean; 358 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 664 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Oct 07 14:18:25 compute-0 nova_compute[259550]: 2025-10-07 14:18:25.747 2 INFO nova.compute.manager [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Unrescuing
Oct 07 14:18:25 compute-0 nova_compute[259550]: 2025-10-07 14:18:25.748 2 DEBUG oslo_concurrency.lockutils [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:25 compute-0 nova_compute[259550]: 2025-10-07 14:18:25.749 2 DEBUG oslo_concurrency.lockutils [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquired lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:25 compute-0 nova_compute[259550]: 2025-10-07 14:18:25.749 2 DEBUG nova.network.neutron [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 374 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Oct 07 14:18:26 compute-0 ceph-mon[74295]: pgmap v1615: 305 pgs: 305 active+clean; 374 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Oct 07 14:18:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.541 2 DEBUG nova.compute.manager [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.541 2 DEBUG oslo_concurrency.lockutils [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.542 2 DEBUG oslo_concurrency.lockutils [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.542 2 DEBUG oslo_concurrency.lockutils [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.542 2 DEBUG nova.compute.manager [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.542 2 WARNING nova.compute.manager [req-e03ef748-5043-4c6c-bf6f-4661a32f2f3e req-25318ab2-4caf-44cd-a0f8-58054b1bac08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state rescued and task_state unrescuing.
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.707 2 DEBUG nova.network.neutron [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [{"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.727 2 DEBUG oslo_concurrency.lockutils [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Releasing lock "refresh_cache-9c5c9653-6de6-4975-86c6-887803a35913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.728 2 DEBUG nova.objects.instance [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'flavor' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:26 compute-0 kernel: tap0d353c80-d1 (unregistering): left promiscuous mode
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:26 compute-0 NetworkManager[44949]: <info>  [1759846706.8486] device (tap0d353c80-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:26 compute-0 ovn_controller[151684]: 2025-10-07T14:18:26Z|00653|binding|INFO|Releasing lport 0d353c80-d149-4939-a0c5-da59491d3f99 from this chassis (sb_readonly=0)
Oct 07 14:18:26 compute-0 ovn_controller[151684]: 2025-10-07T14:18:26Z|00654|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 down in Southbound
Oct 07 14:18:26 compute-0 ovn_controller[151684]: 2025-10-07T14:18:26Z|00655|binding|INFO|Removing iface tap0d353c80-d1 ovn-installed in OVS
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:26.871 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:26.872 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:18:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:26.873 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:26.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c02afc9d-29aa-4d5d-b628-39afe17ae7cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:26 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 07 14:18:26 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000043.scope: Consumed 3.185s CPU time.
Oct 07 14:18:26 compute-0 systemd-machined[214580]: Machine qemu-80-instance-00000043 terminated.
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.986 2 INFO nova.virt.libvirt.driver [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance destroyed successfully.
Oct 07 14:18:26 compute-0 nova_compute[259550]: 2025-10-07 14:18:26.987 2 DEBUG nova.objects.instance [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:27 compute-0 kernel: tap0d353c80-d1: entered promiscuous mode
Oct 07 14:18:27 compute-0 NetworkManager[44949]: <info>  [1759846707.0622] manager: (tap0d353c80-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Oct 07 14:18:27 compute-0 ovn_controller[151684]: 2025-10-07T14:18:27Z|00656|binding|INFO|Claiming lport 0d353c80-d149-4939-a0c5-da59491d3f99 for this chassis.
Oct 07 14:18:27 compute-0 ovn_controller[151684]: 2025-10-07T14:18:27Z|00657|binding|INFO|0d353c80-d149-4939-a0c5-da59491d3f99: Claiming fa:16:3e:82:7a:5f 10.100.0.8
Oct 07 14:18:27 compute-0 systemd-udevd[331281]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:27 compute-0 nova_compute[259550]: 2025-10-07 14:18:27.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:27 compute-0 NetworkManager[44949]: <info>  [1759846707.0755] device (tap0d353c80-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:27 compute-0 NetworkManager[44949]: <info>  [1759846707.0771] device (tap0d353c80-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:27.082 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:27.083 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:18:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:27.084 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:27.084 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d59b211-993f-41c9-b2d2-567d0454c313]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:27 compute-0 ovn_controller[151684]: 2025-10-07T14:18:27Z|00658|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 ovn-installed in OVS
Oct 07 14:18:27 compute-0 ovn_controller[151684]: 2025-10-07T14:18:27Z|00659|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 up in Southbound
Oct 07 14:18:27 compute-0 nova_compute[259550]: 2025-10-07 14:18:27.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:27 compute-0 nova_compute[259550]: 2025-10-07 14:18:27.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:27 compute-0 systemd-machined[214580]: New machine qemu-81-instance-00000043.
Oct 07 14:18:27 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000043.
Oct 07 14:18:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 374 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 105 op/s
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.062 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 9c5c9653-6de6-4975-86c6-887803a35913 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.063 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846708.061798, 9c5c9653-6de6-4975-86c6-887803a35913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.064 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Resumed (Lifecycle Event)
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.103 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.112 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.155 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.156 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846708.0630717, 9c5c9653-6de6-4975-86c6-887803a35913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Started (Lifecycle Event)
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.190 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.194 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.230 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.465 2 DEBUG nova.compute.manager [None req-dfe8dd79-311a-4655-affa-d32d86fd62dd 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.833 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.833 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.834 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.834 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.834 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.837 2 WARNING nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state rescued and task_state unrescuing.
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.837 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.838 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.842 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.842 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.842 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.843 2 WARNING nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-unplugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state rescued and task_state unrescuing.
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.844 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.845 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.847 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.847 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.847 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.847 2 WARNING nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state rescued and task_state unrescuing.
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.847 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.848 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.848 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.848 2 DEBUG oslo_concurrency.lockutils [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.848 2 DEBUG nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:28 compute-0 nova_compute[259550]: 2025-10-07 14:18:28.848 2 WARNING nova.compute.manager [req-d9cd32c2-57dd-4de4-b8e3-2a2ea3bb4c50 req-a8f82599-a388-4600-b400-db9a96de4d2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state rescued and task_state unrescuing.
Oct 07 14:18:28 compute-0 ceph-mon[74295]: pgmap v1616: 305 pgs: 305 active+clean; 374 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 105 op/s
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.078 2 DEBUG oslo_concurrency.lockutils [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.078 2 DEBUG oslo_concurrency.lockutils [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.078 2 DEBUG nova.compute.manager [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.086 2 DEBUG nova.compute.manager [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.087 2 DEBUG nova.objects.instance [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.152 2 DEBUG nova.virt.libvirt.driver [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:18:29 compute-0 nova_compute[259550]: 2025-10-07 14:18:29.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 343 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:30 compute-0 ceph-mon[74295]: pgmap v1617: 305 pgs: 305 active+clean; 343 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.957 2 DEBUG nova.compute.manager [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.960 2 DEBUG oslo_concurrency.lockutils [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.961 2 DEBUG oslo_concurrency.lockutils [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.961 2 DEBUG oslo_concurrency.lockutils [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.961 2 DEBUG nova.compute.manager [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] No waiting events found dispatching network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.962 2 WARNING nova.compute.manager [req-29a5cf20-cc2f-488d-846e-56bc60cf2b11 req-ff068b07-02da-4a1c-8c81-9c0668634b7c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received unexpected event network-vif-plugged-0d353c80-d149-4939-a0c5-da59491d3f99 for instance with vm_state active and task_state deleting.
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.987 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.987 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.989 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.990 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "9c5c9653-6de6-4975-86c6-887803a35913-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.992 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.993 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.995 2 INFO nova.compute.manager [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Terminating instance
Oct 07 14:18:30 compute-0 nova_compute[259550]: 2025-10-07 14:18:30.996 2 DEBUG nova.compute.manager [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:18:31 compute-0 kernel: tap0d353c80-d1 (unregistering): left promiscuous mode
Oct 07 14:18:31 compute-0 NetworkManager[44949]: <info>  [1759846711.0501] device (tap0d353c80-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:31 compute-0 ovn_controller[151684]: 2025-10-07T14:18:31Z|00660|binding|INFO|Releasing lport 0d353c80-d149-4939-a0c5-da59491d3f99 from this chassis (sb_readonly=0)
Oct 07 14:18:31 compute-0 ovn_controller[151684]: 2025-10-07T14:18:31Z|00661|binding|INFO|Setting lport 0d353c80-d149-4939-a0c5-da59491d3f99 down in Southbound
Oct 07 14:18:31 compute-0 ovn_controller[151684]: 2025-10-07T14:18:31Z|00662|binding|INFO|Removing iface tap0d353c80-d1 ovn-installed in OVS
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:31.083 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:7a:5f 10.100.0.8'], port_security=['fa:16:3e:82:7a:5f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c5c9653-6de6-4975-86c6-887803a35913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0d353c80-d149-4939-a0c5-da59491d3f99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:31.084 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0d353c80-d149-4939-a0c5-da59491d3f99 in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:18:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:31.084 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:31.088 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a7ca45-0e22-447d-82a7-ca5408ff2a89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 07 14:18:31 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000043.scope: Consumed 3.881s CPU time.
Oct 07 14:18:31 compute-0 systemd-machined[214580]: Machine qemu-81-instance-00000043 terminated.
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.255 2 INFO nova.virt.libvirt.driver [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Instance destroyed successfully.
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.256 2 DEBUG nova.objects.instance [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'resources' on Instance uuid 9c5c9653-6de6-4975-86c6-887803a35913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.309 2 DEBUG nova.virt.libvirt.vif [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2135478368',display_name='tempest-ServerRescueTestJSON-server-2135478368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2135478368',id=67,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-hhot1xln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:28Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=9c5c9653-6de6-4975-86c6-887803a35913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.309 2 DEBUG nova.network.os_vif_util [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "0d353c80-d149-4939-a0c5-da59491d3f99", "address": "fa:16:3e:82:7a:5f", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d353c80-d1", "ovs_interfaceid": "0d353c80-d149-4939-a0c5-da59491d3f99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.310 2 DEBUG nova.network.os_vif_util [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.310 2 DEBUG os_vif [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d353c80-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.322 2 INFO os_vif [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:7a:5f,bridge_name='br-int',has_traffic_filtering=True,id=0d353c80-d149-4939-a0c5-da59491d3f99,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d353c80-d1')
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.730 2 INFO nova.virt.libvirt.driver [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Deleting instance files /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913_del
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.732 2 INFO nova.virt.libvirt.driver [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Deletion of /var/lib/nova/instances/9c5c9653-6de6-4975-86c6-887803a35913_del complete
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.825 2 INFO nova.compute.manager [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.826 2 DEBUG oslo.service.loopingcall [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.826 2 DEBUG nova.compute.manager [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.826 2 DEBUG nova.network.neutron [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:18:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 328 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 233 op/s
Oct 07 14:18:31 compute-0 nova_compute[259550]: 2025-10-07 14:18:31.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:32 compute-0 podman[331412]: 2025-10-07 14:18:32.101265892 +0000 UTC m=+0.079796839 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:18:32 compute-0 podman[331411]: 2025-10-07 14:18:32.129005375 +0000 UTC m=+0.107707646 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026255967352021878 of space, bias 1.0, pg target 0.7876790205606563 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:18:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:18:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:18:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104523592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:18:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:18:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104523592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:18:32 compute-0 nova_compute[259550]: 2025-10-07 14:18:32.924 2 DEBUG nova.network.neutron [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:32 compute-0 ceph-mon[74295]: pgmap v1618: 305 pgs: 305 active+clean; 328 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 233 op/s
Oct 07 14:18:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/104523592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:18:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/104523592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:18:32 compute-0 nova_compute[259550]: 2025-10-07 14:18:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:32 compute-0 nova_compute[259550]: 2025-10-07 14:18:32.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:18:32 compute-0 nova_compute[259550]: 2025-10-07 14:18:32.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.071 2 DEBUG nova.compute.manager [req-6fb1f191-b663-4a75-ba29-f715ac8a5f13 req-27267c71-fe65-4215-b272-c35c73918707 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Received event network-vif-deleted-0d353c80-d149-4939-a0c5-da59491d3f99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.071 2 INFO nova.compute.manager [req-6fb1f191-b663-4a75-ba29-f715ac8a5f13 req-27267c71-fe65-4215-b272-c35c73918707 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Neutron deleted interface 0d353c80-d149-4939-a0c5-da59491d3f99; detaching it from the instance and deleting it from the info cache
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.072 2 DEBUG nova.network.neutron [req-6fb1f191-b663-4a75-ba29-f715ac8a5f13 req-27267c71-fe65-4215-b272-c35c73918707 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.089 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.089 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.090 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.090 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.090 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.146 2 INFO nova.compute.manager [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Took 1.32 seconds to deallocate network for instance.
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.158 2 DEBUG nova.compute.manager [req-6fb1f191-b663-4a75-ba29-f715ac8a5f13 req-27267c71-fe65-4215-b272-c35c73918707 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Detach interface failed, port_id=0d353c80-d149-4939-a0c5-da59491d3f99, reason: Instance 9c5c9653-6de6-4975-86c6-887803a35913 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.213 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.213 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.244 2 DEBUG nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.263 2 DEBUG nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.263 2 DEBUG nova.compute.provider_tree [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.278 2 DEBUG nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.300 2 DEBUG nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.375 2 DEBUG oslo_concurrency.processutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713894381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.624 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.725 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.726 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.726 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.731 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.732 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 278 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 285 op/s
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.896 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.898 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.83077621459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.898 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785342093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.939 2 DEBUG oslo_concurrency.processutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.943 2 DEBUG nova.compute.provider_tree [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1713894381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1785342093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:33 compute-0 nova_compute[259550]: 2025-10-07 14:18:33.987 2 DEBUG nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.023 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.076 2 INFO nova.scheduler.client.report [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Deleted allocations for instance 9c5c9653-6de6-4975-86c6-887803a35913
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.120 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 85cd6b5c-f0f7-49fa-a999-64818baf3648 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.121 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1d580bbb-a6fd-442c-8524-409ba5c344d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.121 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.121 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.155 2 DEBUG oslo_concurrency.lockutils [None req-2243076b-d68e-4c3a-b128-ee7460999416 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "9c5c9653-6de6-4975-86c6-887803a35913" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.177 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290974305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.615 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.621 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.641 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.664 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:18:34 compute-0 nova_compute[259550]: 2025-10-07 14:18:34.665 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:34 compute-0 ceph-mon[74295]: pgmap v1619: 305 pgs: 305 active+clean; 278 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 285 op/s
Oct 07 14:18:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4290974305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.339 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.340 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.340 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.341 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.341 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.343 2 INFO nova.compute.manager [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Terminating instance
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.345 2 DEBUG nova.compute.manager [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 kernel: tapb2d2caee-17 (unregistering): left promiscuous mode
Oct 07 14:18:35 compute-0 NetworkManager[44949]: <info>  [1759846715.4202] device (tapb2d2caee-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00663|binding|INFO|Releasing lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b from this chassis (sb_readonly=0)
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00664|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b down in Southbound
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00665|binding|INFO|Removing iface tapb2d2caee-17 ovn-installed in OVS
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.439 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.440 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.441 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.442 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb04a51-e67c-432c-a121-e15d8176b417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 07 14:18:35 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003f.scope: Consumed 15.211s CPU time.
Oct 07 14:18:35 compute-0 systemd-machined[214580]: Machine qemu-76-instance-0000003f terminated.
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:18:35 compute-0 systemd-udevd[331515]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:35 compute-0 kernel: tapb2d2caee-17: entered promiscuous mode
Oct 07 14:18:35 compute-0 NetworkManager[44949]: <info>  [1759846715.5789] manager: (tapb2d2caee-17): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00666|binding|INFO|Claiming lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b for this chassis.
Oct 07 14:18:35 compute-0 kernel: tapb2d2caee-17 (unregistering): left promiscuous mode
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00667|binding|INFO|b2d2caee-177d-4ad4-98ba-7dd1d95e296b: Claiming fa:16:3e:32:3d:56 10.100.0.5
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.591 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.593 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c bound to our chassis
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.593 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.595 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8811272-2037-47cf-87a2-32a9d9c4728b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00668|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b ovn-installed in OVS
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00669|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b up in Southbound
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00670|binding|INFO|Releasing lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b from this chassis (sb_readonly=1)
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00671|if_status|INFO|Dropped 2 log messages in last 325 seconds (most recently, 325 seconds ago) due to excessive rate
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00672|if_status|INFO|Not setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b down as sb is readonly
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00673|binding|INFO|Removing iface tapb2d2caee-17 ovn-installed in OVS
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00674|binding|INFO|Releasing lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b from this chassis (sb_readonly=0)
Oct 07 14:18:35 compute-0 ovn_controller[151684]: 2025-10-07T14:18:35Z|00675|binding|INFO|Setting lport b2d2caee-177d-4ad4-98ba-7dd1d95e296b down in Southbound
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.611 2 INFO nova.virt.libvirt.driver [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Instance destroyed successfully.
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.611 2 DEBUG nova.objects.instance [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lazy-loading 'resources' on Instance uuid 85cd6b5c-f0f7-49fa-a999-64818baf3648 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.614 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:3d:56 10.100.0.5'], port_security=['fa:16:3e:32:3d:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85cd6b5c-f0f7-49fa-a999-64818baf3648', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfecd64-708d-4596-88ae-4b7d716e998c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8dca3ec607447dd8f2e6dd1c0714628', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fe5613ec-7a3f-454f-9d6b-9216d9c4d645', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=952e23a8-726e-49a4-b02e-737dfec98b14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b2d2caee-177d-4ad4-98ba-7dd1d95e296b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.615 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b2d2caee-177d-4ad4-98ba-7dd1d95e296b in datapath 2dfecd64-708d-4596-88ae-4b7d716e998c unbound from our chassis
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.615 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dfecd64-708d-4596-88ae-4b7d716e998c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:18:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:35.616 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd7ddb5-fa5e-4943-99de-b34f735245f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.625 2 DEBUG nova.virt.libvirt.vif [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2023954398',display_name='tempest-ServerRescueTestJSON-server-2023954398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2023954398',id=63,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8dca3ec607447dd8f2e6dd1c0714628',ramdisk_id='',reservation_id='r-lmffjsq5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1604630325',owner_user_name='tempest-ServerRescueTestJSON-1604630325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:17:39Z,user_data=None,user_id='7166dede3fa7455fb8ac0840d97d0be8',uuid=85cd6b5c-f0f7-49fa-a999-64818baf3648,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.625 2 DEBUG nova.network.os_vif_util [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converting VIF {"id": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "address": "fa:16:3e:32:3d:56", "network": {"id": "2dfecd64-708d-4596-88ae-4b7d716e998c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1885672423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8dca3ec607447dd8f2e6dd1c0714628", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2d2caee-17", "ovs_interfaceid": "b2d2caee-177d-4ad4-98ba-7dd1d95e296b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.626 2 DEBUG nova.network.os_vif_util [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.626 2 DEBUG os_vif [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2d2caee-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:35 compute-0 nova_compute[259550]: 2025-10-07 14:18:35.634 2 INFO os_vif [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:3d:56,bridge_name='br-int',has_traffic_filtering=True,id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b,network=Network(2dfecd64-708d-4596-88ae-4b7d716e998c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2d2caee-17')
Oct 07 14:18:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 678 KiB/s wr, 287 op/s
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.037 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.038 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.056 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.072 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.073 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.094 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.134 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.135 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.143 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.143 2 INFO nova.compute.claims [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.316 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.430 2 INFO nova.virt.libvirt.driver [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Deleting instance files /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648_del
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.431 2 INFO nova.virt.libvirt.driver [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Deletion of /var/lib/nova/instances/85cd6b5c-f0f7-49fa-a999-64818baf3648_del complete
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.496 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.548 2 INFO nova.compute.manager [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Took 1.20 seconds to destroy the instance on the hypervisor.
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.549 2 DEBUG oslo.service.loopingcall [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.549 2 DEBUG nova.compute.manager [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.550 2 DEBUG nova.network.neutron [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:18:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1353130008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.916 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.922 2 DEBUG nova.compute.provider_tree [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:36 compute-0 nova_compute[259550]: 2025-10-07 14:18:36.953 2 DEBUG nova.scheduler.client.report [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:36 compute-0 ceph-mon[74295]: pgmap v1620: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 678 KiB/s wr, 287 op/s
Oct 07 14:18:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1353130008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.002 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.003 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.006 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.012 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.013 2 INFO nova.compute.claims [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.094 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.094 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.121 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.146 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.246 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.293 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.296 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.297 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Creating image(s)
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.323 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.351 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.380 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.384 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.424 2 DEBUG nova.policy [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c51e9a60f0f4b28b9d5cfaa7a0180eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b9950f52692469d9b44d8201fd3b990', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.471 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.471 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.472 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.472 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.495 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.498 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.668 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959626367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.748 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.755 2 DEBUG nova.compute.provider_tree [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.771 2 DEBUG nova.scheduler.client.report [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.786 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.820 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.822 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:18:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 14 KiB/s wr, 235 op/s
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.866 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] resizing rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.897 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.897 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.918 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.965 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:18:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/959626367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.974 2 DEBUG nova.objects.instance [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'migration_context' on Instance uuid d8f33644-b6b8-492e-b9e6-32cd3ad4a28a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.995 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.995 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Ensure instance console log exists: /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.996 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.996 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:37 compute-0 nova_compute[259550]: 2025-10-07 14:18:37.996 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.067 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.070 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.071 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Creating image(s)
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.091 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.112 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.134 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.136 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.180 2 DEBUG nova.policy [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c51e9a60f0f4b28b9d5cfaa7a0180eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b9950f52692469d9b44d8201fd3b990', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.234 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.235 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.235 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.236 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.258 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.262 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.519 2 DEBUG nova.network.neutron [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.589 2 DEBUG nova.compute.manager [req-39c81f4f-fcb6-4917-bbcd-b63d30163f85 req-7aea04d4-d9e6-4e72-8809-24e77d7e80bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Received event network-vif-deleted-b2d2caee-177d-4ad4-98ba-7dd1d95e296b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.590 2 INFO nova.compute.manager [req-39c81f4f-fcb6-4917-bbcd-b63d30163f85 req-7aea04d4-d9e6-4e72-8809-24e77d7e80bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Neutron deleted interface b2d2caee-177d-4ad4-98ba-7dd1d95e296b; detaching it from the instance and deleting it from the info cache
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.590 2 DEBUG nova.network.neutron [req-39c81f4f-fcb6-4917-bbcd-b63d30163f85 req-7aea04d4-d9e6-4e72-8809-24e77d7e80bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.600 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.668 2 INFO nova.compute.manager [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Took 2.12 seconds to deallocate network for instance.
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.678 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] resizing rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.711 2 DEBUG nova.compute.manager [req-39c81f4f-fcb6-4917-bbcd-b63d30163f85 req-7aea04d4-d9e6-4e72-8809-24e77d7e80bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Detach interface failed, port_id=b2d2caee-177d-4ad4-98ba-7dd1d95e296b, reason: Instance 85cd6b5c-f0f7-49fa-a999-64818baf3648 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.787 2 DEBUG nova.objects.instance [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'migration_context' on Instance uuid 197479db-0f5f-4dc1-a59d-efca0e6e4dec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.893 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.893 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Ensure instance console log exists: /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.894 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.895 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.895 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.939 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:38 compute-0 nova_compute[259550]: 2025-10-07 14:18:38.940 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:38 compute-0 ceph-mon[74295]: pgmap v1621: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 14 KiB/s wr, 235 op/s
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.052 2 DEBUG oslo_concurrency.processutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.237 2 DEBUG nova.virt.libvirt.driver [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:18:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622848307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.509 2 DEBUG oslo_concurrency.processutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.515 2 DEBUG nova.compute.provider_tree [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.538 2 DEBUG nova.scheduler.client.report [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.577 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.617 2 INFO nova.scheduler.client.report [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Deleted allocations for instance 85cd6b5c-f0f7-49fa-a999-64818baf3648
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.650 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Successfully created port: 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.686 2 DEBUG oslo_concurrency.lockutils [None req-79e61d54-5237-443c-ba2e-292bd1d7a35a 7166dede3fa7455fb8ac0840d97d0be8 d8dca3ec607447dd8f2e6dd1c0714628 - - default default] Lock "85cd6b5c-f0f7-49fa-a999-64818baf3648" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 218 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 302 op/s
Oct 07 14:18:39 compute-0 nova_compute[259550]: 2025-10-07 14:18:39.871 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Successfully created port: c4e55cf1-8c8c-4a64-a62d-127fc8d27806 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:18:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1622848307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:40 compute-0 nova_compute[259550]: 2025-10-07 14:18:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:40 compute-0 ovn_controller[151684]: 2025-10-07T14:18:40Z|00676|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:18:40 compute-0 nova_compute[259550]: 2025-10-07 14:18:40.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:40 compute-0 ceph-mon[74295]: pgmap v1622: 305 pgs: 305 active+clean; 218 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 302 op/s
Oct 07 14:18:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.187 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Successfully updated port: 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.204 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.205 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquired lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.205 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.231 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Successfully updated port: c4e55cf1-8c8c-4a64-a62d-127fc8d27806 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.247 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.248 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquired lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.248 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.299 2 DEBUG nova.compute.manager [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-changed-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.300 2 DEBUG nova.compute.manager [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Refreshing instance network info cache due to event network-changed-32a0e3a4-24b8-40fb-acc0-917c83ab05f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.300 2 DEBUG oslo_concurrency.lockutils [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:18:41 compute-0 NetworkManager[44949]: <info>  [1759846721.4888] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 ovn_controller[151684]: 2025-10-07T14:18:41Z|00677|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:18:41 compute-0 ovn_controller[151684]: 2025-10-07T14:18:41Z|00678|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:18:41 compute-0 ovn_controller[151684]: 2025-10-07T14:18:41Z|00679|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.509 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.511 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.513 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[da1a29b6-9186-440a-acad-d38918f3621d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.515 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:18:41 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Consumed 12.848s CPU time.
Oct 07 14:18:41 compute-0 systemd-machined[214580]: Machine qemu-79-instance-00000042 terminated.
Oct 07 14:18:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [NOTICE]   (331139) : haproxy version is 2.8.14-c23fe91
Oct 07 14:18:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [NOTICE]   (331139) : path to executable is /usr/sbin/haproxy
Oct 07 14:18:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [WARNING]  (331139) : Exiting Master process...
Oct 07 14:18:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [ALERT]    (331139) : Current worker (331141) exited with code 143 (Terminated)
Oct 07 14:18:41 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[331135]: [WARNING]  (331139) : All workers exited. Exiting... (0)
Oct 07 14:18:41 compute-0 systemd[1]: libpod-4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e.scope: Deactivated successfully.
Oct 07 14:18:41 compute-0 podman[331967]: 2025-10-07 14:18:41.678872131 +0000 UTC m=+0.053034272 container died 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 07 14:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ad967c0191c3c6a677b3297589d2973c5e25304c9cea9842cd1971eb0249f5-merged.mount: Deactivated successfully.
Oct 07 14:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:18:41 compute-0 podman[331967]: 2025-10-07 14:18:41.723472839 +0000 UTC m=+0.097634970 container cleanup 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 systemd[1]: libpod-conmon-4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e.scope: Deactivated successfully.
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 podman[332003]: 2025-10-07 14:18:41.796734334 +0000 UTC m=+0.046364976 container remove 4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d967fbb0-de44-4e6a-bb1d-b55b06732425]: (4, ('Tue Oct  7 02:18:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e)\n4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e\nTue Oct  7 02:18:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e)\n4a9d412ffb667fb6c3b63301cd1426b888219e8d0934d21e1612b44c1ba0150e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.807 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b28d9623-d3ae-4366-81df-bf14df21755a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.808 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:18:41 compute-0 nova_compute[259550]: 2025-10-07 14:18:41.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.833 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68643087-7941-4815-a84a-ea948edc0a6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 214 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 258 op/s
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d65968c7-2a5f-4316-ae66-63744c60447e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.869 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3c4d4f-473f-4b2c-89ca-611336a680b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.887 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10644695-7011-49e1-9aeb-edcdbc40afa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728560, 'reachable_time': 35602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332026, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.892 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:18:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:41.892 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[80ce5e9b-21ec-44f9-a8fe-0e3d045bef47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.066 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.072 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.254 2 INFO nova.virt.libvirt.driver [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance shutdown successfully after 13 seconds.
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.259 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.260 2 DEBUG nova.objects.instance [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.279 2 DEBUG nova.compute.manager [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.341 2 DEBUG oslo_concurrency.lockutils [None req-be155c09-3320-4592-874b-70b5323f4a38 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:42 compute-0 sudo[332027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:42 compute-0 sudo[332027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:42 compute-0 sudo[332027]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:42 compute-0 sudo[332052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:18:42 compute-0 sudo[332052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:42 compute-0 sudo[332052]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:42 compute-0 sudo[332077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:42 compute-0 sudo[332077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:42 compute-0 sudo[332077]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:42 compute-0 sudo[332102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:18:42 compute-0 sudo[332102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:18:42 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:18:42 compute-0 ceph-mon[74295]: pgmap v1623: 305 pgs: 305 active+clean; 214 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 258 op/s
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:42.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.269 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Updating instance_info_cache with network_info: [{"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:43 compute-0 sudo[332102]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.278 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.279 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.279 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.279 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.297 2 DEBUG nova.network.neutron [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Updating instance_info_cache with network_info: [{"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.302 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Releasing lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.303 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Instance network_info: |[{"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.311 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Start _get_guest_xml network_info=[{"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.317 2 WARNING nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.322 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Releasing lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.323 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Instance network_info: |[{"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.323 2 DEBUG oslo_concurrency.lockutils [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.323 2 DEBUG nova.network.neutron [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Refreshing network info cache for port 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.326 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Start _get_guest_xml network_info=[{"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.326 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.327 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev fcb6536c-1a68-48a6-abc4-69ed376a7519 does not exist
Oct 07 14:18:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 41734f49-de76-4130-8f09-fd09ce0e0107 does not exist
Oct 07 14:18:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 62d23253-1b98-404d-9639-15d3d6d07d0f does not exist
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.334 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.335 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.335 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.335 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.336 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.336 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.336 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.336 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.336 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.337 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.337 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.337 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.337 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.337 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.340 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.385 2 WARNING nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.390 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.391 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.394 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.394 2 DEBUG nova.virt.libvirt.host [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.394 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.395 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.395 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.395 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.395 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.396 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.397 2 DEBUG nova.virt.hardware [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.400 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:43 compute-0 sudo[332160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:43 compute-0 sudo[332160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:43 compute-0 sudo[332160]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:43 compute-0 sudo[332187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:18:43 compute-0 sudo[332187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:43 compute-0 sudo[332187]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:43 compute-0 sudo[332231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:43 compute-0 sudo[332231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:43 compute-0 sudo[332231]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:43 compute-0 sudo[332275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:18:43 compute-0 sudo[332275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061128633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.799 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.824 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.836 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556199740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.870 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-changed-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.871 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Refreshing instance network info cache due to event network-changed-c4e55cf1-8c8c-4a64-a62d-127fc8d27806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.871 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.871 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.871 2 DEBUG nova.network.neutron [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Refreshing network info cache for port c4e55cf1-8c8c-4a64-a62d-127fc8d27806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.876 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.899 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.903 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:43 compute-0 podman[332379]: 2025-10-07 14:18:43.968495546 +0000 UTC m=+0.044607079 container create 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:18:43 compute-0 nova_compute[259550]: 2025-10-07 14:18:43.984 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4061128633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/556199740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:44 compute-0 systemd[1]: Started libpod-conmon-0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711.scope.
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.012 2 DEBUG oslo_concurrency.lockutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:18:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:43.947427899 +0000 UTC m=+0.023539462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:44.05080652 +0000 UTC m=+0.126918053 container init 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:44.05874577 +0000 UTC m=+0.134857303 container start 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:44.06179571 +0000 UTC m=+0.137907243 container attach 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:18:44 compute-0 stoic_bouman[332415]: 167 167
Oct 07 14:18:44 compute-0 systemd[1]: libpod-0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711.scope: Deactivated successfully.
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:44.064767679 +0000 UTC m=+0.140879212 container died 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 07 14:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f73e13744dd4be5cb430913012fee996dbfaf5d721a9b5700f0d1da83927ed-merged.mount: Deactivated successfully.
Oct 07 14:18:44 compute-0 podman[332379]: 2025-10-07 14:18:44.103524823 +0000 UTC m=+0.179636356 container remove 0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:18:44 compute-0 systemd[1]: libpod-conmon-0d1a2093c592fb4d14851b9eadc63e07ba28c347017f2180ccf9b84304b75711.scope: Deactivated successfully.
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 podman[332458]: 2025-10-07 14:18:44.279272825 +0000 UTC m=+0.051877831 container create 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:18:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136483619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.327 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.328 2 DEBUG nova.virt.libvirt.vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-2',id=69,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:18:38Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=197479db-0f5f-4dc1-a59d-efca0e6e4dec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.328 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.329 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.331 2 DEBUG nova.objects.instance [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'pci_devices' on Instance uuid 197479db-0f5f-4dc1-a59d-efca0e6e4dec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:44 compute-0 systemd[1]: Started libpod-conmon-085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05.scope.
Oct 07 14:18:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639995245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:44 compute-0 podman[332458]: 2025-10-07 14:18:44.259861063 +0000 UTC m=+0.032466099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.369 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.371 2 DEBUG nova.virt.libvirt.vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-1',id=68,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:18:37Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=d8f33644-b6b8-492e-b9e6-32cd3ad4a28a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.371 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.372 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.373 2 DEBUG nova.objects.instance [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8f33644-b6b8-492e-b9e6-32cd3ad4a28a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.376 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <uuid>197479db-0f5f-4dc1-a59d-efca0e6e4dec</uuid>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <name>instance-00000045</name>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-1597908979-2</nova:name>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:18:43</nova:creationTime>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:user uuid="4c51e9a60f0f4b28b9d5cfaa7a0180eb">tempest-MultipleCreateTestJSON-1647657804-project-member</nova:user>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:project uuid="1b9950f52692469d9b44d8201fd3b990">tempest-MultipleCreateTestJSON-1647657804</nova:project>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:port uuid="c4e55cf1-8c8c-4a64-a62d-127fc8d27806">
Oct 07 14:18:44 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <system>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="serial">197479db-0f5f-4dc1-a59d-efca0e6e4dec</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="uuid">197479db-0f5f-4dc1-a59d-efca0e6e4dec</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </system>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <os>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </os>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <features>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </features>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ac:11:67"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="tapc4e55cf1-8c"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/console.log" append="off"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <video>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </video>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:18:44 compute-0 nova_compute[259550]: </domain>
Oct 07 14:18:44 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.377 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Preparing to wait for external event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.377 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.377 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.377 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.378 2 DEBUG nova.virt.libvirt.vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-2',id=69,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:18:38Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=197479db-0f5f-4dc1-a59d-efca0e6e4dec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.380 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:44 compute-0 podman[332458]: 2025-10-07 14:18:44.382115022 +0000 UTC m=+0.154720028 container init 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.381 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.383 2 DEBUG os_vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 podman[332458]: 2025-10-07 14:18:44.38995215 +0000 UTC m=+0.162557156 container start 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:44 compute-0 podman[332458]: 2025-10-07 14:18:44.393419021 +0000 UTC m=+0.166024047 container attach 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4e55cf1-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4e55cf1-8c, col_values=(('external_ids', {'iface-id': 'c4e55cf1-8c8c-4a64-a62d-127fc8d27806', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:11:67', 'vm-uuid': '197479db-0f5f-4dc1-a59d-efca0e6e4dec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 NetworkManager[44949]: <info>  [1759846724.3975] manager: (tapc4e55cf1-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.406 2 INFO os_vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c')
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.464 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <uuid>d8f33644-b6b8-492e-b9e6-32cd3ad4a28a</uuid>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <name>instance-00000044</name>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-1597908979-1</nova:name>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:18:43</nova:creationTime>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:user uuid="4c51e9a60f0f4b28b9d5cfaa7a0180eb">tempest-MultipleCreateTestJSON-1647657804-project-member</nova:user>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:project uuid="1b9950f52692469d9b44d8201fd3b990">tempest-MultipleCreateTestJSON-1647657804</nova:project>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <nova:port uuid="32a0e3a4-24b8-40fb-acc0-917c83ab05f2">
Oct 07 14:18:44 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <system>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="serial">d8f33644-b6b8-492e-b9e6-32cd3ad4a28a</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="uuid">d8f33644-b6b8-492e-b9e6-32cd3ad4a28a</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </system>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <os>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </os>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <features>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </features>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:0c:eb:4d"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <target dev="tap32a0e3a4-24"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/console.log" append="off"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <video>
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </video>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:18:44 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:18:44 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:18:44 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:18:44 compute-0 nova_compute[259550]: </domain>
Oct 07 14:18:44 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.465 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Preparing to wait for external event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.466 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.466 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.466 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.467 2 DEBUG nova.virt.libvirt.vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-1',id=68,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:18:37Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=d8f33644-b6b8-492e-b9e6-32cd3ad4a28a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.467 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.468 2 DEBUG nova.network.os_vif_util [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.468 2 DEBUG os_vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32a0e3a4-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32a0e3a4-24, col_values=(('external_ids', {'iface-id': '32a0e3a4-24b8-40fb-acc0-917c83ab05f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:eb:4d', 'vm-uuid': 'd8f33644-b6b8-492e-b9e6-32cd3ad4a28a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 NetworkManager[44949]: <info>  [1759846724.4768] manager: (tap32a0e3a4-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.488 2 INFO os_vif [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24')
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.730 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.730 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.731 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No VIF found with MAC fa:16:3e:ac:11:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.731 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Using config drive
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.752 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.899 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.900 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.900 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No VIF found with MAC fa:16:3e:0c:eb:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.901 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Using config drive
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.922 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.981 2 DEBUG nova.network.neutron [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Updated VIF entry in instance network info cache for port 32a0e3a4-24b8-40fb-acc0-917c83ab05f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:18:44 compute-0 nova_compute[259550]: 2025-10-07 14:18:44.982 2 DEBUG nova.network.neutron [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Updating instance_info_cache with network_info: [{"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:45 compute-0 ceph-mon[74295]: pgmap v1624: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 07 14:18:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2136483619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2639995245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.111 2 DEBUG oslo_concurrency.lockutils [req-51c2add8-b0ba-46fa-a46b-3b517ab13b3d req-9e74f92b-7af4-44de-9c13-0a2a0fd0d488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.117 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.276 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.277 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.277 2 DEBUG oslo_concurrency.lockutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.278 2 DEBUG nova.network.neutron [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.278 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'info_cache' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.279 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.304 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Creating config drive at /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.308 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5d2dc6fw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.347 2 DEBUG nova.network.neutron [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Updated VIF entry in instance network info cache for port c4e55cf1-8c8c-4a64-a62d-127fc8d27806. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.348 2 DEBUG nova.network.neutron [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Updating instance_info_cache with network_info: [{"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:45 compute-0 reverent_hopper[332476]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:18:45 compute-0 reverent_hopper[332476]: --> relative data size: 1.0
Oct 07 14:18:45 compute-0 reverent_hopper[332476]: --> All data devices are unavailable
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.435 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-197479db-0f5f-4dc1-a59d-efca0e6e4dec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.435 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.436 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.436 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.436 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.437 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.437 2 WARNING nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state stopped and task_state None.
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.437 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.437 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.438 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.438 2 DEBUG oslo_concurrency.lockutils [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.438 2 DEBUG nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.438 2 WARNING nova.compute.manager [req-90945d7f-9a91-4a58-ae31-bf11388f669c req-68d329d4-ee60-4bd7-8d20-6541f10a65fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state stopped and task_state None.
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.445 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Creating config drive at /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.449 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio72jsr3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:45 compute-0 systemd[1]: libpod-085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05.scope: Deactivated successfully.
Oct 07 14:18:45 compute-0 conmon[332476]: conmon 085da5278ca27d280f69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05.scope/container/memory.events
Oct 07 14:18:45 compute-0 podman[332458]: 2025-10-07 14:18:45.456254918 +0000 UTC m=+1.228859924 container died 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.480 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5d2dc6fw" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6634f41e2de17a9e39e150934de8e917e1051f37c4cc124be43834d06e5a6b3-merged.mount: Deactivated successfully.
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.513 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:45 compute-0 podman[332458]: 2025-10-07 14:18:45.523812172 +0000 UTC m=+1.296417188 container remove 085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.524 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:45 compute-0 systemd[1]: libpod-conmon-085da5278ca27d280f69ad113485e567a6661d5fe34bd644595986dbd4b5bb05.scope: Deactivated successfully.
Oct 07 14:18:45 compute-0 sudo[332275]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.586 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio72jsr3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:45 compute-0 podman[332554]: 2025-10-07 14:18:45.607168324 +0000 UTC m=+0.105318102 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.617 2 DEBUG nova.storage.rbd_utils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:18:45 compute-0 podman[332564]: 2025-10-07 14:18:45.625042767 +0000 UTC m=+0.118776109 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.628 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:45 compute-0 sudo[332619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:45 compute-0 sudo[332619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:45 compute-0 sudo[332619]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:45 compute-0 sudo[332689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:18:45 compute-0 sudo[332689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:45 compute-0 sudo[332689]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.768 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config 197479db-0f5f-4dc1-a59d-efca0e6e4dec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.769 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Deleting local config drive /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec/disk.config because it was imported into RBD.
Oct 07 14:18:45 compute-0 sudo[332732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:45 compute-0 sudo[332732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:45 compute-0 sudo[332732]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.821 2 DEBUG oslo_concurrency.processutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.822 2 INFO nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Deleting local config drive /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a/disk.config because it was imported into RBD.
Oct 07 14:18:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 3.6 MiB/s wr, 158 op/s
Oct 07 14:18:45 compute-0 sudo[332760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:18:45 compute-0 kernel: tapc4e55cf1-8c: entered promiscuous mode
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.8559] manager: (tapc4e55cf1-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct 07 14:18:45 compute-0 sudo[332760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00680|binding|INFO|Claiming lport c4e55cf1-8c8c-4a64-a62d-127fc8d27806 for this chassis.
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00681|binding|INFO|c4e55cf1-8c8c-4a64-a62d-127fc8d27806: Claiming fa:16:3e:ac:11:67 10.100.0.14
Oct 07 14:18:45 compute-0 systemd-udevd[332806]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.8911] manager: (tap32a0e3a4-24): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Oct 07 14:18:45 compute-0 systemd-udevd[332811]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.9062] device (tapc4e55cf1-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.9068] device (tapc4e55cf1-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:45 compute-0 systemd-machined[214580]: New machine qemu-82-instance-00000045.
Oct 07 14:18:45 compute-0 kernel: tap32a0e3a4-24: entered promiscuous mode
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.9461] device (tap32a0e3a4-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:45 compute-0 NetworkManager[44949]: <info>  [1759846725.9473] device (tap32a0e3a4-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.950 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:11:67 10.100.0.14'], port_security=['fa:16:3e:ac:11:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '197479db-0f5f-4dc1-a59d-efca0e6e4dec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c4e55cf1-8c8c-4a64-a62d-127fc8d27806) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00682|binding|INFO|Claiming lport 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 for this chassis.
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00683|binding|INFO|32a0e3a4-24b8-40fb-acc0-917c83ab05f2: Claiming fa:16:3e:0c:eb:4d 10.100.0.7
Oct 07 14:18:45 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000045.
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.952 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c4e55cf1-8c8c-4a64-a62d-127fc8d27806 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 bound to our chassis
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.954 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00684|binding|INFO|Setting lport c4e55cf1-8c8c-4a64-a62d-127fc8d27806 ovn-installed in OVS
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.969 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2661e09-be89-4a1e-8c9f-59d3e4b96a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.970 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6692b777-81 in ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.973 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6692b777-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6f50fc-ab3a-42f2-9569-479931bb9c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adb732fd-c178-4a78-b7d5-aa73f0175e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:45 compute-0 ovn_controller[151684]: 2025-10-07T14:18:45Z|00685|binding|INFO|Setting lport 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 ovn-installed in OVS
Oct 07 14:18:45 compute-0 systemd-machined[214580]: New machine qemu-83-instance-00000044.
Oct 07 14:18:45 compute-0 nova_compute[259550]: 2025-10-07 14:18:45.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:45.990 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[67853beb-56a0-438b-baa9-f288717a467c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:45 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000044.
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1f9945-30ac-48c4-b0b9-d67730db45a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_controller[151684]: 2025-10-07T14:18:46Z|00686|binding|INFO|Setting lport 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 up in Southbound
Oct 07 14:18:46 compute-0 ovn_controller[151684]: 2025-10-07T14:18:46Z|00687|binding|INFO|Setting lport c4e55cf1-8c8c-4a64-a62d-127fc8d27806 up in Southbound
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.033 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:eb:4d 10.100.0.7'], port_security=['fa:16:3e:0c:eb:4d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd8f33644-b6b8-492e-b9e6-32cd3ad4a28a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=32a0e3a4-24b8-40fb-acc0-917c83ab05f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.046 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[da050bf5-5749-40cc-af88-c85e4bd96818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 NetworkManager[44949]: <info>  [1759846726.0536] manager: (tap6692b777-80): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.053 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab543c08-de4b-4f19-880e-dc287ebd97eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 systemd-udevd[332815]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:46 compute-0 ceph-mon[74295]: pgmap v1625: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 3.6 MiB/s wr, 158 op/s
Oct 07 14:18:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.109 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc3a178-d278-4c4c-a3ab-48ccc38b865c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.113 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc08f3d-5f18-420d-b32f-284609df20a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 NetworkManager[44949]: <info>  [1759846726.1375] device (tap6692b777-80): carrier: link connected
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.144 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[03cf0c65-78c2-4a46-ae1e-7dd78ae0b56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.161 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5250bce4-164c-4ea3-92dd-6d4be70718b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730970, 'reachable_time': 37981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332893, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.180 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45fe9ee3-99a5-4932-8e8b-52663eabed29]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:254c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730970, 'tstamp': 730970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332894, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.198 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[842076ad-d09a-4071-9d9e-fb6c6976cd5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730970, 'reachable_time': 37981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332897, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.231 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40adc68b-a8ef-40cb-bb2e-330be2324b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 podman[332895]: 2025-10-07 14:18:46.240665849 +0000 UTC m=+0.046529080 container create c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.254 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846711.2532728, 9c5c9653-6de6-4975-86c6-887803a35913 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.256 2 INFO nova.compute.manager [-] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] VM Stopped (Lifecycle Event)
Oct 07 14:18:46 compute-0 systemd[1]: Started libpod-conmon-c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763.scope.
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.297 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb8ebb3-9dd3-45c9-a6a7-08858e0d955d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.300 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:46 compute-0 NetworkManager[44949]: <info>  [1759846726.3030] manager: (tap6692b777-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 07 14:18:46 compute-0 kernel: tap6692b777-80: entered promiscuous mode
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:46 compute-0 ovn_controller[151684]: 2025-10-07T14:18:46Z|00688|binding|INFO|Releasing lport c2ea13ef-38e7-4acc-a428-c933f929d020 from this chassis (sb_readonly=0)
Oct 07 14:18:46 compute-0 podman[332895]: 2025-10-07 14:18:46.217897968 +0000 UTC m=+0.023761219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.314 2 DEBUG nova.compute.manager [None req-46f7d350-21da-45be-adf1-393dc54cf605 - - - - - -] [instance: 9c5c9653-6de6-4975-86c6-887803a35913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:46 compute-0 podman[332895]: 2025-10-07 14:18:46.337635971 +0000 UTC m=+0.143499222 container init c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.337 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.338 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30716a19-ebd2-4033-9618-97716ffb9a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.339 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.340 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'env', 'PROCESS_TAG=haproxy-6692b777-8c3f-47b2-9a67-3efff279d953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6692b777-8c3f-47b2-9a67-3efff279d953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:18:46 compute-0 podman[332895]: 2025-10-07 14:18:46.346346681 +0000 UTC m=+0.152209912 container start c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:18:46 compute-0 podman[332895]: 2025-10-07 14:18:46.350913242 +0000 UTC m=+0.156776493 container attach c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:18:46 compute-0 blissful_lamarr[332916]: 167 167
Oct 07 14:18:46 compute-0 systemd[1]: libpod-c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763.scope: Deactivated successfully.
Oct 07 14:18:46 compute-0 podman[332926]: 2025-10-07 14:18:46.404795306 +0000 UTC m=+0.029517142 container died c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-326bdeb527b9b5cc4d54819ea8d86c347138d1f5c0a0fce160daa83c18e54516-merged.mount: Deactivated successfully.
Oct 07 14:18:46 compute-0 podman[332926]: 2025-10-07 14:18:46.448475259 +0000 UTC m=+0.073197075 container remove c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 07 14:18:46 compute-0 systemd[1]: libpod-conmon-c3845f2f2ba242c2720532ad5111ae6ce5bdf6df47df1a4af0f467d26e160763.scope: Deactivated successfully.
Oct 07 14:18:46 compute-0 nova_compute[259550]: 2025-10-07 14:18:46.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:46 compute-0 podman[333026]: 2025-10-07 14:18:46.645210626 +0000 UTC m=+0.050162126 container create 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:18:46 compute-0 systemd[1]: Started libpod-conmon-4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f.scope.
Oct 07 14:18:46 compute-0 podman[333026]: 2025-10-07 14:18:46.626740258 +0000 UTC m=+0.031691778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2230ae44251fc29aca6043325f9189ad7e91d52b982329f6ffc03dc3ef70c0a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2230ae44251fc29aca6043325f9189ad7e91d52b982329f6ffc03dc3ef70c0a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2230ae44251fc29aca6043325f9189ad7e91d52b982329f6ffc03dc3ef70c0a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2230ae44251fc29aca6043325f9189ad7e91d52b982329f6ffc03dc3ef70c0a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:46 compute-0 podman[333068]: 2025-10-07 14:18:46.759097685 +0000 UTC m=+0.061071975 container create e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:18:46 compute-0 podman[333026]: 2025-10-07 14:18:46.766060849 +0000 UTC m=+0.171012369 container init 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:18:46 compute-0 podman[333026]: 2025-10-07 14:18:46.777162782 +0000 UTC m=+0.182114292 container start 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:18:46 compute-0 podman[333026]: 2025-10-07 14:18:46.781127816 +0000 UTC m=+0.186079316 container attach 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:18:46 compute-0 systemd[1]: Started libpod-conmon-e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856.scope.
Oct 07 14:18:46 compute-0 podman[333068]: 2025-10-07 14:18:46.726999657 +0000 UTC m=+0.028973977 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:18:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24dd34d63725a6bd51f91f81b36c0439b677762aa5f118e7e1279232eae5eb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:46 compute-0 podman[333068]: 2025-10-07 14:18:46.862146027 +0000 UTC m=+0.164120337 container init e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:18:46 compute-0 podman[333068]: 2025-10-07 14:18:46.868247179 +0000 UTC m=+0.170221469 container start e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:18:46 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [NOTICE]   (333096) : New worker (333098) forked
Oct 07 14:18:46 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [NOTICE]   (333096) : Loading success.
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.942 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.943 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.961 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71618de5-467b-408c-a140-78db5ebda4b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.990 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e6aa19ed-2c53-4ec9-aa9c-1536f25392f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:46.994 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7269c919-241d-42c5-b2e7-c5ca0f982532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.021 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[07010533-8ef7-4b86-b0bc-fd7678ef8dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.039 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846727.0391462, 197479db-0f5f-4dc1-a59d-efca0e6e4dec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.040 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] VM Started (Lifecycle Event)
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.046 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53851fd8-f611-4f8d-9707-2442aa673897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 196, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 196, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730970, 'reachable_time': 37981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333112, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.063 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe330c4-ed66-47e2-aea0-6087d92552e0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730982, 'tstamp': 730982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333113, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730985, 'tstamp': 730985}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333113, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.064 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.071 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:47.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.119 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.122 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846727.041558, 197479db-0f5f-4dc1-a59d-efca0e6e4dec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.122 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] VM Paused (Lifecycle Event)
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.282 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.287 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.432 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.433 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846727.1700613, d8f33644-b6b8-492e-b9e6-32cd3ad4a28a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.433 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] VM Started (Lifecycle Event)
Oct 07 14:18:47 compute-0 loving_pike[333078]: {
Oct 07 14:18:47 compute-0 loving_pike[333078]:     "0": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:         {
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "devices": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "/dev/loop3"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             ],
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_name": "ceph_lv0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_size": "21470642176",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "name": "ceph_lv0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "tags": {
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_name": "ceph",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.crush_device_class": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.encrypted": "0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_id": "0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.vdo": "0"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             },
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "vg_name": "ceph_vg0"
Oct 07 14:18:47 compute-0 loving_pike[333078]:         }
Oct 07 14:18:47 compute-0 loving_pike[333078]:     ],
Oct 07 14:18:47 compute-0 loving_pike[333078]:     "1": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:         {
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "devices": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "/dev/loop4"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             ],
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_name": "ceph_lv1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_size": "21470642176",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "name": "ceph_lv1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "tags": {
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_name": "ceph",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.crush_device_class": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.encrypted": "0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_id": "1",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.vdo": "0"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             },
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "vg_name": "ceph_vg1"
Oct 07 14:18:47 compute-0 loving_pike[333078]:         }
Oct 07 14:18:47 compute-0 loving_pike[333078]:     ],
Oct 07 14:18:47 compute-0 loving_pike[333078]:     "2": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:         {
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "devices": [
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "/dev/loop5"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             ],
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_name": "ceph_lv2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_size": "21470642176",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "name": "ceph_lv2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "tags": {
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.cluster_name": "ceph",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.crush_device_class": "",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.encrypted": "0",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osd_id": "2",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:                 "ceph.vdo": "0"
Oct 07 14:18:47 compute-0 loving_pike[333078]:             },
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "type": "block",
Oct 07 14:18:47 compute-0 loving_pike[333078]:             "vg_name": "ceph_vg2"
Oct 07 14:18:47 compute-0 loving_pike[333078]:         }
Oct 07 14:18:47 compute-0 loving_pike[333078]:     ]
Oct 07 14:18:47 compute-0 loving_pike[333078]: }
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.574 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:47 compute-0 systemd[1]: libpod-4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f.scope: Deactivated successfully.
Oct 07 14:18:47 compute-0 podman[333026]: 2025-10-07 14:18:47.577264818 +0000 UTC m=+0.982216398 container died 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.580 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846727.1701648, d8f33644-b6b8-492e-b9e6-32cd3ad4a28a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.580 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] VM Paused (Lifecycle Event)
Oct 07 14:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2230ae44251fc29aca6043325f9189ad7e91d52b982329f6ffc03dc3ef70c0a0-merged.mount: Deactivated successfully.
Oct 07 14:18:47 compute-0 podman[333026]: 2025-10-07 14:18:47.644031442 +0000 UTC m=+1.048982952 container remove 4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:18:47 compute-0 systemd[1]: libpod-conmon-4662483aa833f882e7a608ec916ac03bcc8b70f151896ccb4f01744ebfaa9d6f.scope: Deactivated successfully.
Oct 07 14:18:47 compute-0 sudo[332760]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.687 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.693 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:47 compute-0 sudo[333130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:47 compute-0 sudo[333130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:47 compute-0 sudo[333130]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:47 compute-0 nova_compute[259550]: 2025-10-07 14:18:47.788 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:18:47 compute-0 sudo[333155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:18:47 compute-0 sudo[333155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:47 compute-0 sudo[333155]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 07 14:18:47 compute-0 sudo[333180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:47 compute-0 sudo[333180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:47 compute-0 sudo[333180]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:47 compute-0 sudo[333205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:18:47 compute-0 sudo[333205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.243406756 +0000 UTC m=+0.035350226 container create 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:18:48 compute-0 systemd[1]: Started libpod-conmon-9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e.scope.
Oct 07 14:18:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.228071931 +0000 UTC m=+0.020015421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.323881491 +0000 UTC m=+0.115825041 container init 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.339426132 +0000 UTC m=+0.131369602 container start 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.343230733 +0000 UTC m=+0.135174253 container attach 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:18:48 compute-0 boring_chebyshev[333287]: 167 167
Oct 07 14:18:48 compute-0 systemd[1]: libpod-9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e.scope: Deactivated successfully.
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.346101698 +0000 UTC m=+0.138045208 container died 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3dc6e63767449dd84f9291977ee0dad6addefbeda479761ad63765153fcad6-merged.mount: Deactivated successfully.
Oct 07 14:18:48 compute-0 podman[333271]: 2025-10-07 14:18:48.385299854 +0000 UTC m=+0.177243324 container remove 9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:18:48 compute-0 systemd[1]: libpod-conmon-9a2255bd2b5596c155aa7d9a262d52db1bfb3b27e8f44ad6496b02a6c4d2488e.scope: Deactivated successfully.
Oct 07 14:18:48 compute-0 podman[333311]: 2025-10-07 14:18:48.585278257 +0000 UTC m=+0.043555222 container create 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:18:48 compute-0 systemd[1]: Started libpod-conmon-6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa.scope.
Oct 07 14:18:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951f7fc94b187bcd52e6810f601c352b8a99ee0bf0eb0915485c3e4822c35792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951f7fc94b187bcd52e6810f601c352b8a99ee0bf0eb0915485c3e4822c35792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951f7fc94b187bcd52e6810f601c352b8a99ee0bf0eb0915485c3e4822c35792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951f7fc94b187bcd52e6810f601c352b8a99ee0bf0eb0915485c3e4822c35792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:48 compute-0 podman[333311]: 2025-10-07 14:18:48.569188951 +0000 UTC m=+0.027465916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:18:48 compute-0 podman[333311]: 2025-10-07 14:18:48.67016005 +0000 UTC m=+0.128436995 container init 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:48 compute-0 podman[333311]: 2025-10-07 14:18:48.677969635 +0000 UTC m=+0.136246590 container start 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:18:48 compute-0 podman[333311]: 2025-10-07 14:18:48.682344951 +0000 UTC m=+0.140621916 container attach 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:18:48 compute-0 ceph-mon[74295]: pgmap v1626: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.373 2 DEBUG nova.compute.manager [req-a14331a1-94bc-4de8-9d2f-ac4647fecf2b req-a6d89738-71a1-48a2-b8a2-cc12002f5cd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.374 2 DEBUG oslo_concurrency.lockutils [req-a14331a1-94bc-4de8-9d2f-ac4647fecf2b req-a6d89738-71a1-48a2-b8a2-cc12002f5cd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.375 2 DEBUG oslo_concurrency.lockutils [req-a14331a1-94bc-4de8-9d2f-ac4647fecf2b req-a6d89738-71a1-48a2-b8a2-cc12002f5cd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.375 2 DEBUG oslo_concurrency.lockutils [req-a14331a1-94bc-4de8-9d2f-ac4647fecf2b req-a6d89738-71a1-48a2-b8a2-cc12002f5cd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.375 2 DEBUG nova.compute.manager [req-a14331a1-94bc-4de8-9d2f-ac4647fecf2b req-a6d89738-71a1-48a2-b8a2-cc12002f5cd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Processing event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.376 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.381 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846729.3810694, d8f33644-b6b8-492e-b9e6-32cd3ad4a28a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.381 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] VM Resumed (Lifecycle Event)
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.383 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.386 2 INFO nova.virt.libvirt.driver [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Instance spawned successfully.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.386 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.406 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.411 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.414 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.415 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.415 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.416 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.416 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.417 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.450 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.460 2 DEBUG nova.compute.manager [req-771a85ca-823b-4d10-a0a3-afc2c71941ca req-df063552-d6a6-4271-bc5b-c611db36dbce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.461 2 DEBUG oslo_concurrency.lockutils [req-771a85ca-823b-4d10-a0a3-afc2c71941ca req-df063552-d6a6-4271-bc5b-c611db36dbce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.461 2 DEBUG oslo_concurrency.lockutils [req-771a85ca-823b-4d10-a0a3-afc2c71941ca req-df063552-d6a6-4271-bc5b-c611db36dbce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.461 2 DEBUG oslo_concurrency.lockutils [req-771a85ca-823b-4d10-a0a3-afc2c71941ca req-df063552-d6a6-4271-bc5b-c611db36dbce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.461 2 DEBUG nova.compute.manager [req-771a85ca-823b-4d10-a0a3-afc2c71941ca req-df063552-d6a6-4271-bc5b-c611db36dbce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Processing event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.462 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.465 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846729.4650283, 197479db-0f5f-4dc1-a59d-efca0e6e4dec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.465 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] VM Resumed (Lifecycle Event)
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.467 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.471 2 INFO nova.virt.libvirt.driver [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Instance spawned successfully.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.471 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.496 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.497 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.498 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.499 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.499 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.500 2 DEBUG nova.virt.libvirt.driver [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.511 2 INFO nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Took 12.22 seconds to spawn the instance on the hypervisor.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.512 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.514 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.524 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.575 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.610 2 INFO nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Took 11.54 seconds to spawn the instance on the hypervisor.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.610 2 DEBUG nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.614 2 INFO nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Took 13.52 seconds to build instance.
Oct 07 14:18:49 compute-0 elastic_golick[333327]: {
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_id": 2,
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "type": "bluestore"
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     },
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_id": 1,
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "type": "bluestore"
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     },
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_id": 0,
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:18:49 compute-0 elastic_golick[333327]:         "type": "bluestore"
Oct 07 14:18:49 compute-0 elastic_golick[333327]:     }
Oct 07 14:18:49 compute-0 elastic_golick[333327]: }
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.646 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:49 compute-0 systemd[1]: libpod-6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa.scope: Deactivated successfully.
Oct 07 14:18:49 compute-0 podman[333311]: 2025-10-07 14:18:49.671194683 +0000 UTC m=+1.129471768 container died 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.684 2 INFO nova.compute.manager [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Took 13.41 seconds to build instance.
Oct 07 14:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-951f7fc94b187bcd52e6810f601c352b8a99ee0bf0eb0915485c3e4822c35792-merged.mount: Deactivated successfully.
Oct 07 14:18:49 compute-0 nova_compute[259550]: 2025-10-07 14:18:49.723 2 DEBUG oslo_concurrency.lockutils [None req-6f8bf1a3-289d-45f7-bf88-327b6803726b 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:49 compute-0 podman[333311]: 2025-10-07 14:18:49.754652518 +0000 UTC m=+1.212929473 container remove 6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:18:49 compute-0 systemd[1]: libpod-conmon-6a21b9213cdb71dc0a1a3072cb31a4fcd0280ba15ca70938bbbb9ab2a89d76aa.scope: Deactivated successfully.
Oct 07 14:18:49 compute-0 sudo[333205]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:18:49 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:18:49 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:49 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev de987442-0230-4102-bbe6-3d0d5d52322a does not exist
Oct 07 14:18:49 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0c7865cb-2c4b-46e2-a4f9-fb2c394b0795 does not exist
Oct 07 14:18:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 3.6 MiB/s wr, 133 op/s
Oct 07 14:18:49 compute-0 sudo[333374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:18:49 compute-0 sudo[333374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:49 compute-0 sudo[333374]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:49 compute-0 sudo[333399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:18:49 compute-0 sudo[333399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:18:49 compute-0 sudo[333399]: pam_unix(sudo:session): session closed for user root
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.356 2 DEBUG nova.network.neutron [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.376 2 DEBUG oslo_concurrency.lockutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.412 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.413 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.429 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.448 2 DEBUG nova.virt.libvirt.vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.450 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.451 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.452 2 DEBUG os_vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb8904b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.465 2 INFO os_vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.476 2 DEBUG nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start _get_guest_xml network_info=[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.483 2 WARNING nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.490 2 DEBUG nova.virt.libvirt.host [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.492 2 DEBUG nova.virt.libvirt.host [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.498 2 DEBUG nova.virt.libvirt.host [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.499 2 DEBUG nova.virt.libvirt.host [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.499 2 DEBUG nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.500 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.500 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.501 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.501 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.501 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.501 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.502 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.502 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.503 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.503 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.503 2 DEBUG nova.virt.hardware [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.504 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.535 2 DEBUG oslo_concurrency.processutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.605 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846715.6038387, 85cd6b5c-f0f7-49fa-a999-64818baf3648 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.607 2 INFO nova.compute.manager [-] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] VM Stopped (Lifecycle Event)
Oct 07 14:18:50 compute-0 nova_compute[259550]: 2025-10-07 14:18:50.658 2 DEBUG nova.compute.manager [None req-b1771d67-1ef9-429e-bb75-32570ccf117e - - - - - -] [instance: 85cd6b5c-f0f7-49fa-a999-64818baf3648] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:18:50 compute-0 ceph-mon[74295]: pgmap v1627: 305 pgs: 305 active+clean; 215 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 3.6 MiB/s wr, 133 op/s
Oct 07 14:18:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680984955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.049 2 DEBUG oslo_concurrency.processutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.079 2 DEBUG oslo_concurrency.processutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.496 2 DEBUG nova.compute.manager [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.497 2 DEBUG oslo_concurrency.lockutils [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.497 2 DEBUG oslo_concurrency.lockutils [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.497 2 DEBUG oslo_concurrency.lockutils [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.497 2 DEBUG nova.compute.manager [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] No waiting events found dispatching network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.498 2 WARNING nova.compute.manager [req-833da704-c636-4c19-b255-913e5787aef6 req-63337958-f6be-475e-9b88-8c3822456c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received unexpected event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 for instance with vm_state active and task_state None.
Oct 07 14:18:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:18:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751869945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.518 2 DEBUG oslo_concurrency.processutils [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.520 2 DEBUG nova.virt.libvirt.vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.520 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.521 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.523 2 DEBUG nova.objects.instance [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.550 2 DEBUG nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <uuid>1d580bbb-a6fd-442c-8524-409ba5c344d0</uuid>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <name>instance-00000042</name>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-1718645090</nova:name>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:18:50</nova:creationTime>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <nova:port uuid="5fb8904b-227a-4dac-8c3a-82a23ba9832c">
Oct 07 14:18:51 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <system>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="serial">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="uuid">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </system>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <os>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </os>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <features>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </features>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk">
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config">
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:18:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:af:c7:50"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <target dev="tap5fb8904b-22"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log" append="off"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <video>
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </video>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:18:51 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:18:51 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:18:51 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:18:51 compute-0 nova_compute[259550]: </domain>
Oct 07 14:18:51 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.551 2 DEBUG nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.551 2 DEBUG nova.virt.libvirt.driver [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.552 2 DEBUG nova.virt.libvirt.vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.552 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.553 2 DEBUG nova.network.os_vif_util [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.553 2 DEBUG os_vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.5594] manager: (tap5fb8904b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.565 2 INFO os_vif [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.591 2 DEBUG nova.compute.manager [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.592 2 DEBUG oslo_concurrency.lockutils [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.592 2 DEBUG oslo_concurrency.lockutils [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.593 2 DEBUG oslo_concurrency.lockutils [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.594 2 DEBUG nova.compute.manager [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] No waiting events found dispatching network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.594 2 WARNING nova.compute.manager [req-c6d6632b-f450-4a33-9cbb-37b4fb9dfc09 req-5fd005a5-08b1-4cfd-9e78-9193097079f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received unexpected event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 for instance with vm_state active and task_state None.
Oct 07 14:18:51 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.6445] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Oct 07 14:18:51 compute-0 ovn_controller[151684]: 2025-10-07T14:18:51Z|00689|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:18:51 compute-0 ovn_controller[151684]: 2025-10-07T14:18:51Z|00690|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.6556] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.6564] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 systemd-machined[214580]: New machine qemu-84-instance-00000042.
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.678 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.680 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.683 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8763f8e5-3147-4a85-9432-e8644c2cefc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.695 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.697 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a825bfd-6fae-400a-88a7-358eea882e47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.698 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[308a0257-7d62-4d35-b06d-fa47e46b3e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000042.
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.709 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4350326a-266e-4a6b-9e63-64f52381be8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5daa9fb9-92b1-4f79-b343-e00141b46136]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 systemd-udevd[333507]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.7524] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.7536] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 ovn_controller[151684]: 2025-10-07T14:18:51Z|00691|binding|INFO|Releasing lport c2ea13ef-38e7-4acc-a428-c933f929d020 from this chassis (sb_readonly=0)
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.767 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec65199-a223-4ea4-8627-ad468f478770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.7750] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.773 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[388ef058-ee94-4ae4-8a11-a8793deb7eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_controller[151684]: 2025-10-07T14:18:51Z|00692|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:18:51 compute-0 ovn_controller[151684]: 2025-10-07T14:18:51Z|00693|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:18:51 compute-0 nova_compute[259550]: 2025-10-07 14:18:51.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.813 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a20ec55-1383-4c19-9fba-880b7b0d9085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.816 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3c379f-bc3f-4f02-8e08-31b9b62d4ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 216 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 874 KiB/s rd, 1.4 MiB/s wr, 103 op/s
Oct 07 14:18:51 compute-0 NetworkManager[44949]: <info>  [1759846731.8439] device (tapc21c541a-00): carrier: link connected
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85612f0c-fd75-4d98-ada4-5e886df4d621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd04da0-143d-4c7a-be9f-79b8c64e3d9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731540, 'reachable_time': 22076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333534, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.887 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d502db-0122-420e-afe6-ae02e28cfc4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731540, 'tstamp': 731540}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333542, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6bea6e19-89eb-4d82-ada5-caddc7aee2d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731540, 'reachable_time': 22076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333552, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:51.933 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73f61a28-d63a-43a4-9a0c-b487876b160f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2df294-5dc1-4b00-b35b-d9573cf718f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.003 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.003 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.004 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:18:52 compute-0 NetworkManager[44949]: <info>  [1759846732.0061] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.009 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 ovn_controller[151684]: 2025-10-07T14:18:52Z|00694|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.027 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.028 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f679bb33-5c4f-4fe1-874e-6463cefcec44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.029 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.029 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:18:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1680984955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1751869945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:18:52 compute-0 ceph-mon[74295]: pgmap v1628: 305 pgs: 305 active+clean; 216 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 874 KiB/s rd, 1.4 MiB/s wr, 103 op/s
Oct 07 14:18:52 compute-0 podman[333606]: 2025-10-07 14:18:52.413510366 +0000 UTC m=+0.069422086 container create 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:18:52 compute-0 systemd[1]: Started libpod-conmon-410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7.scope.
Oct 07 14:18:52 compute-0 podman[333606]: 2025-10-07 14:18:52.365323673 +0000 UTC m=+0.021235413 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.473 2 DEBUG nova.compute.manager [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.475 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1d580bbb-a6fd-442c-8524-409ba5c344d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.475 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846732.4731522, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.475 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.482 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance rebooted successfully.
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.483 2 DEBUG nova.compute.manager [None req-67e43910-ac47-4049-a5b0-0cf54b2a5f58 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.494 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.495 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.495 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.495 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.496 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d12bd0f3e263f087f558ed997876aeac2f3e4d9469628bf34192ae23ab472ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.497 2 INFO nova.compute.manager [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Terminating instance
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.498 2 DEBUG nova.compute.manager [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:18:52 compute-0 podman[333606]: 2025-10-07 14:18:52.527202648 +0000 UTC m=+0.183114388 container init 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:18:52 compute-0 podman[333606]: 2025-10-07 14:18:52.535132138 +0000 UTC m=+0.191043858 container start 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.542 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:52 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [NOTICE]   (333622) : New worker (333624) forked
Oct 07 14:18:52 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [NOTICE]   (333622) : Loading success.
Oct 07 14:18:52 compute-0 kernel: tap32a0e3a4-24 (unregistering): left promiscuous mode
Oct 07 14:18:52 compute-0 NetworkManager[44949]: <info>  [1759846732.5724] device (tap32a0e3a4-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:52 compute-0 ovn_controller[151684]: 2025-10-07T14:18:52Z|00695|binding|INFO|Releasing lport 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 from this chassis (sb_readonly=0)
Oct 07 14:18:52 compute-0 ovn_controller[151684]: 2025-10-07T14:18:52Z|00696|binding|INFO|Setting lport 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 down in Southbound
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 ovn_controller[151684]: 2025-10-07T14:18:52Z|00697|binding|INFO|Removing iface tap32a0e3a4-24 ovn-installed in OVS
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Deactivated successfully.
Oct 07 14:18:52 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Consumed 4.171s CPU time.
Oct 07 14:18:52 compute-0 systemd-machined[214580]: Machine qemu-83-instance-00000044 terminated.
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.742 2 INFO nova.virt.libvirt.driver [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Instance destroyed successfully.
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.743 2 DEBUG nova.objects.instance [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'resources' on Instance uuid d8f33644-b6b8-492e-b9e6-32cd3ad4a28a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.746 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:eb:4d 10.100.0.7'], port_security=['fa:16:3e:0c:eb:4d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd8f33644-b6b8-492e-b9e6-32cd3ad4a28a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=32a0e3a4-24b8-40fb-acc0-917c83ab05f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.748 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 32a0e3a4-24b8-40fb-acc0-917c83ab05f2 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.749 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.753 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846732.4742112, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.753 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.769 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3100d5a4-a39b-44ae-a143-f6c66a484256]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.810 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c7651fa5-c8ce-4094-85ef-730c25fa3e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.815 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1acb8ef2-7f96-4c3c-a255-60072ffbd822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3847ce1f-ec9c-4b3b-80c2-4f30474e32b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.873 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[684bc2d9-e3b1-48b1-9c69-714770eecbf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730970, 'reachable_time': 37981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333651, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.894 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c39a897c-d937-4538-be18-37c105c5afbd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730982, 'tstamp': 730982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333652, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730985, 'tstamp': 730985}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333652, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.897 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.900 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.902 2 DEBUG nova.virt.libvirt.vif [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-1',id=68,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:49Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=d8f33644-b6b8-492e-b9e6-32cd3ad4a28a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.902 2 DEBUG nova.network.os_vif_util [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "address": "fa:16:3e:0c:eb:4d", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a0e3a4-24", "ovs_interfaceid": "32a0e3a4-24b8-40fb-acc0-917c83ab05f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.903 2 DEBUG nova.network.os_vif_util [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.903 2 DEBUG os_vif [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.905 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.905 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.906 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:52.906 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32a0e3a4-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.911 2 INFO os_vif [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:eb:4d,bridge_name='br-int',has_traffic_filtering=True,id=32a0e3a4-24b8-40fb-acc0-917c83ab05f2,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a0e3a4-24')
Oct 07 14:18:52 compute-0 nova_compute[259550]: 2025-10-07 14:18:52.934 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.087 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.088 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.088 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.088 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.088 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.090 2 INFO nova.compute.manager [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Terminating instance
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.091 2 DEBUG nova.compute.manager [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:18:53 compute-0 kernel: tapc4e55cf1-8c (unregistering): left promiscuous mode
Oct 07 14:18:53 compute-0 NetworkManager[44949]: <info>  [1759846733.1325] device (tapc4e55cf1-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:18:53 compute-0 ovn_controller[151684]: 2025-10-07T14:18:53Z|00698|binding|INFO|Releasing lport c4e55cf1-8c8c-4a64-a62d-127fc8d27806 from this chassis (sb_readonly=0)
Oct 07 14:18:53 compute-0 ovn_controller[151684]: 2025-10-07T14:18:53Z|00699|binding|INFO|Setting lport c4e55cf1-8c8c-4a64-a62d-127fc8d27806 down in Southbound
Oct 07 14:18:53 compute-0 ovn_controller[151684]: 2025-10-07T14:18:53Z|00700|binding|INFO|Removing iface tapc4e55cf1-8c ovn-installed in OVS
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 07 14:18:53 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Consumed 4.663s CPU time.
Oct 07 14:18:53 compute-0 systemd-machined[214580]: Machine qemu-82-instance-00000045 terminated.
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.207 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:11:67 10.100.0.14'], port_security=['fa:16:3e:ac:11:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '197479db-0f5f-4dc1-a59d-efca0e6e4dec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c4e55cf1-8c8c-4a64-a62d-127fc8d27806) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.208 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c4e55cf1-8c8c-4a64-a62d-127fc8d27806 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.209 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6692b777-8c3f-47b2-9a67-3efff279d953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.210 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12bae086-8f98-403e-8b17-606842db9593]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.211 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 namespace which is not needed anymore
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.324 2 INFO nova.virt.libvirt.driver [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Instance destroyed successfully.
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.325 2 DEBUG nova.objects.instance [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'resources' on Instance uuid 197479db-0f5f-4dc1-a59d-efca0e6e4dec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:53 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [NOTICE]   (333096) : haproxy version is 2.8.14-c23fe91
Oct 07 14:18:53 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [NOTICE]   (333096) : path to executable is /usr/sbin/haproxy
Oct 07 14:18:53 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [WARNING]  (333096) : Exiting Master process...
Oct 07 14:18:53 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [ALERT]    (333096) : Current worker (333098) exited with code 143 (Terminated)
Oct 07 14:18:53 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[333092]: [WARNING]  (333096) : All workers exited. Exiting... (0)
Oct 07 14:18:53 compute-0 systemd[1]: libpod-e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856.scope: Deactivated successfully.
Oct 07 14:18:53 compute-0 podman[333693]: 2025-10-07 14:18:53.411532791 +0000 UTC m=+0.110366287 container died e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.440 2 DEBUG nova.virt.libvirt.vif [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1597908979',display_name='tempest-tempest.common.compute-instance-1597908979-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1597908979-2',id=69,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-07T14:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-wp2d13bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:18:49Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=197479db-0f5f-4dc1-a59d-efca0e6e4dec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.441 2 DEBUG nova.network.os_vif_util [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "address": "fa:16:3e:ac:11:67", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e55cf1-8c", "ovs_interfaceid": "c4e55cf1-8c8c-4a64-a62d-127fc8d27806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.442 2 DEBUG nova.network.os_vif_util [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.442 2 DEBUG os_vif [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4e55cf1-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.447 2 INFO os_vif [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:11:67,bridge_name='br-int',has_traffic_filtering=True,id=c4e55cf1-8c8c-4a64-a62d-127fc8d27806,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e55cf1-8c')
Oct 07 14:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856-userdata-shm.mount: Deactivated successfully.
Oct 07 14:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24dd34d63725a6bd51f91f81b36c0439b677762aa5f118e7e1279232eae5eb9-merged.mount: Deactivated successfully.
Oct 07 14:18:53 compute-0 podman[333693]: 2025-10-07 14:18:53.648429949 +0000 UTC m=+0.347263455 container cleanup e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:18:53 compute-0 systemd[1]: libpod-conmon-e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856.scope: Deactivated successfully.
Oct 07 14:18:53 compute-0 podman[333750]: 2025-10-07 14:18:53.72911478 +0000 UTC m=+0.055847666 container remove e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c460000a-a608-491c-b334-4a280b8df7dc]: (4, ('Tue Oct  7 02:18:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 (e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856)\ne7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856\nTue Oct  7 02:18:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 (e7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856)\ne7de7b90d2d39b1a1ea3654a41a8116baaefb68e7b8a4047d1b0cfded0593856\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.739 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76d2ba67-e0c4-4506-901e-d09351d83c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.740 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 kernel: tap6692b777-80: left promiscuous mode
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad7eaae-4e7d-4362-8e83-351be358e638]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.784 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8abce1-0215-40b2-9700-c1e3140dff15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.786 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fe2647-8220-42df-9ca4-c99335ccaa9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[96aef75c-a15a-42bf-8053-9025a647adf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730959, 'reachable_time': 23113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333767, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.805 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:18:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:18:53.805 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f37641eb-a31d-42fa-b3b1-5ca88db776f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:18:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d6692b777\x2d8c3f\x2d47b2\x2d9a67\x2d3efff279d953.mount: Deactivated successfully.
Oct 07 14:18:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 190 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 171 KiB/s wr, 173 op/s
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.853 2 DEBUG nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.853 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.853 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.853 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.853 2 DEBUG nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 WARNING nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 DEBUG nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 DEBUG oslo_concurrency.lockutils [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 DEBUG nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.854 2 WARNING nova.compute.manager [req-4ce313c1-4d50-4c28-a152-1e85a4bd5219 req-fd666343-e67e-4338-b1db-5f9dfdd64ac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.935 2 INFO nova.virt.libvirt.driver [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Deleting instance files /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_del
Oct 07 14:18:53 compute-0 nova_compute[259550]: 2025-10-07 14:18:53.936 2 INFO nova.virt.libvirt.driver [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Deletion of /var/lib/nova/instances/d8f33644-b6b8-492e-b9e6-32cd3ad4a28a_del complete
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.085 2 INFO nova.compute.manager [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Took 1.59 seconds to destroy the instance on the hypervisor.
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.085 2 DEBUG oslo.service.loopingcall [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.086 2 DEBUG nova.compute.manager [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.086 2 DEBUG nova.network.neutron [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.098 2 INFO nova.virt.libvirt.driver [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Deleting instance files /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec_del
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.099 2 INFO nova.virt.libvirt.driver [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Deletion of /var/lib/nova/instances/197479db-0f5f-4dc1-a59d-efca0e6e4dec_del complete
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.661 2 DEBUG nova.compute.manager [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-unplugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.662 2 DEBUG oslo_concurrency.lockutils [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.662 2 DEBUG oslo_concurrency.lockutils [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.662 2 DEBUG oslo_concurrency.lockutils [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.662 2 DEBUG nova.compute.manager [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] No waiting events found dispatching network-vif-unplugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.663 2 DEBUG nova.compute.manager [req-ce637cc2-0984-4e67-a434-4a25453ea47d req-f3319c35-237e-4add-802d-26cf66a83f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-unplugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.666 2 INFO nova.compute.manager [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Took 1.57 seconds to destroy the instance on the hypervisor.
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.666 2 DEBUG oslo.service.loopingcall [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.667 2 DEBUG nova.compute.manager [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:18:54 compute-0 nova_compute[259550]: 2025-10-07 14:18:54.667 2 DEBUG nova.network.neutron [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:18:54 compute-0 ceph-mon[74295]: pgmap v1629: 305 pgs: 305 active+clean; 190 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 171 KiB/s wr, 173 op/s
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 155 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 26 KiB/s wr, 226 op/s
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.952 2 DEBUG nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-unplugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.954 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.955 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.955 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.956 2 DEBUG nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] No waiting events found dispatching network-vif-unplugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.956 2 DEBUG nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-unplugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.956 2 DEBUG nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.957 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.957 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.957 2 DEBUG oslo_concurrency.lockutils [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.957 2 DEBUG nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] No waiting events found dispatching network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:55 compute-0 nova_compute[259550]: 2025-10-07 14:18:55.958 2 WARNING nova.compute.manager [req-4478e51d-2c8f-496a-a4c6-655dab810b05 req-345ba69e-e497-42eb-b69c-71cf3337ddd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received unexpected event network-vif-plugged-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 for instance with vm_state active and task_state deleting.
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.053 2 DEBUG nova.network.neutron [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.078 2 INFO nova.compute.manager [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Took 1.99 seconds to deallocate network for instance.
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.132 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.132 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.133 2 DEBUG nova.network.neutron [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.174 2 INFO nova.compute.manager [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Took 1.51 seconds to deallocate network for instance.
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.246 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.265 2 DEBUG oslo_concurrency.processutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778013523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.744 2 DEBUG oslo_concurrency.processutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.750 2 DEBUG nova.compute.provider_tree [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.770 2 DEBUG nova.compute.manager [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.770 2 DEBUG oslo_concurrency.lockutils [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.771 2 DEBUG oslo_concurrency.lockutils [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.771 2 DEBUG oslo_concurrency.lockutils [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.772 2 DEBUG nova.compute.manager [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] No waiting events found dispatching network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.772 2 WARNING nova.compute.manager [req-bd3e0ab4-0b1c-4312-a0cd-cea730198baf req-23ad27ef-4e86-4a89-b302-95b1fe741758 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received unexpected event network-vif-plugged-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 for instance with vm_state deleted and task_state None.
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.774 2 DEBUG nova.scheduler.client.report [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.804 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.808 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.836 2 INFO nova.scheduler.client.report [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Deleted allocations for instance d8f33644-b6b8-492e-b9e6-32cd3ad4a28a
Oct 07 14:18:56 compute-0 ceph-mon[74295]: pgmap v1630: 305 pgs: 305 active+clean; 155 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 26 KiB/s wr, 226 op/s
Oct 07 14:18:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3778013523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.923 2 DEBUG oslo_concurrency.processutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:18:56 compute-0 nova_compute[259550]: 2025-10-07 14:18:56.962 2 DEBUG oslo_concurrency.lockutils [None req-40427bc1-93e8-4846-ba15-a4233e5bac5f 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "d8f33644-b6b8-492e-b9e6-32cd3ad4a28a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:18:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028318388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.374 2 DEBUG oslo_concurrency.processutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.382 2 DEBUG nova.compute.provider_tree [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.410 2 DEBUG nova.scheduler.client.report [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.433 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.482 2 INFO nova.scheduler.client.report [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Deleted allocations for instance 197479db-0f5f-4dc1-a59d-efca0e6e4dec
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.570 2 DEBUG oslo_concurrency.lockutils [None req-bb68a257-1f05-4b14-aa11-dd3495c276fb 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "197479db-0f5f-4dc1-a59d-efca0e6e4dec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:18:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 155 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 26 KiB/s wr, 226 op/s
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.846 2 INFO nova.compute.manager [None req-10529bfc-5a19-4316-abef-0d55c6216c01 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Pausing
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.846 2 DEBUG nova.objects.instance [None req-10529bfc-5a19-4316-abef-0d55c6216c01 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.885 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846737.8850808, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.885 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Paused (Lifecycle Event)
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.887 2 DEBUG nova.compute.manager [None req-10529bfc-5a19-4316-abef-0d55c6216c01 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4028318388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.931 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:18:57 compute-0 nova_compute[259550]: 2025-10-07 14:18:57.965 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 07 14:18:58 compute-0 nova_compute[259550]: 2025-10-07 14:18:58.064 2 DEBUG nova.compute.manager [req-aa1a7fcf-864b-4b52-b5fa-c87227e97f34 req-b5cbec53-f4aa-467c-bf28-9b09f1ca43a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Received event network-vif-deleted-32a0e3a4-24b8-40fb-acc0-917c83ab05f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:58 compute-0 nova_compute[259550]: 2025-10-07 14:18:58.064 2 DEBUG nova.compute.manager [req-aa1a7fcf-864b-4b52-b5fa-c87227e97f34 req-b5cbec53-f4aa-467c-bf28-9b09f1ca43a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Received event network-vif-deleted-c4e55cf1-8c8c-4a64-a62d-127fc8d27806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:18:58 compute-0 nova_compute[259550]: 2025-10-07 14:18:58.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:18:58 compute-0 ceph-mon[74295]: pgmap v1631: 305 pgs: 305 active+clean; 155 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 26 KiB/s wr, 226 op/s
Oct 07 14:18:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 271 op/s
Oct 07 14:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:00.051 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:00 compute-0 ceph-mon[74295]: pgmap v1632: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 271 op/s
Oct 07 14:19:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.189 2 INFO nova.compute.manager [None req-99883a5d-4809-4412-8283-787404af9407 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Unpausing
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.190 2 DEBUG nova.objects.instance [None req-99883a5d-4809-4412-8283-787404af9407 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.260 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846741.260559, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.261 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:19:01 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.265 2 DEBUG nova.virt.libvirt.guest [None req-99883a5d-4809-4412-8283-787404af9407 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.265 2 DEBUG nova.compute.manager [None req-99883a5d-4809-4412-8283-787404af9407 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.310 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.315 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.360 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 07 14:19:01 compute-0 nova_compute[259550]: 2025-10-07 14:19:01.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 260 op/s
Oct 07 14:19:02 compute-0 nova_compute[259550]: 2025-10-07 14:19:02.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:02 compute-0 nova_compute[259550]: 2025-10-07 14:19:02.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:02 compute-0 ceph-mon[74295]: pgmap v1633: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 260 op/s
Oct 07 14:19:03 compute-0 podman[333814]: 2025-10-07 14:19:03.079037736 +0000 UTC m=+0.060751527 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:03 compute-0 podman[333813]: 2025-10-07 14:19:03.079829746 +0000 UTC m=+0.062440331 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.552 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.553 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.568 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.596 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.597 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.629 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.668 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.669 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.676 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.676 2 INFO nova.compute.claims [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.707 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:03 compute-0 nova_compute[259550]: 2025-10-07 14:19:03.825 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.4 KiB/s wr, 223 op/s
Oct 07 14:19:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817457324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.296 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.302 2 DEBUG nova.compute.provider_tree [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.316 2 DEBUG nova.scheduler.client.report [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.338 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.339 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.342 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.354 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.355 2 INFO nova.compute.claims [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.397 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.398 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.431 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.446 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.532 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.567 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.569 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.569 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Creating image(s)
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.591 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.616 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.637 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.640 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.679 2 DEBUG nova.policy [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c51e9a60f0f4b28b9d5cfaa7a0180eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b9950f52692469d9b44d8201fd3b990', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.744 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.745 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.745 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.746 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.765 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:04 compute-0 nova_compute[259550]: 2025-10-07 14:19:04.768 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:04 compute-0 ceph-mon[74295]: pgmap v1634: 305 pgs: 305 active+clean; 123 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.4 KiB/s wr, 223 op/s
Oct 07 14:19:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3817457324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779112352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.023 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.029 2 DEBUG nova.compute.provider_tree [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.062 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.116 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] resizing rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.170 2 DEBUG nova.scheduler.client.report [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.211 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.212 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.218 2 DEBUG nova.objects.instance [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'migration_context' on Instance uuid f0fba6a7-6467-4ac2-99a6-a2dee485e570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.245 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.246 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Ensure instance console log exists: /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.246 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.246 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.246 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.271 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.271 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.306 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.329 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.429 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.430 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.431 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Creating image(s)
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.458 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.481 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.506 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.511 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.554 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.555 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.599 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.600 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.601 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.601 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.629 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.643 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6f874afd-fefc-434c-a46f-cf611ba65494_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.680 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.700 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Successfully created port: ab477d11-03cd-475a-b8af-5317e974190d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.768 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.768 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.774 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.774 2 INFO nova.compute.claims [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.777 2 DEBUG nova.policy [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c51e9a60f0f4b28b9d5cfaa7a0180eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b9950f52692469d9b44d8201fd3b990', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 146 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 07 14:19:05 compute-0 nova_compute[259550]: 2025-10-07 14:19:05.981 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6f874afd-fefc-434c-a46f-cf611ba65494_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2779112352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.036 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.075 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.076 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.080 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.080 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.085 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] resizing rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.112 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.114 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.134 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.135 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.176 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.187 2 DEBUG nova.objects.instance [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f874afd-fefc-434c-a46f-cf611ba65494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.249 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.262 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.262 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Ensure instance console log exists: /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.263 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.263 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.263 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.268 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.291 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626930480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.478 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.483 2 DEBUG nova.compute.provider_tree [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.497 2 DEBUG nova.scheduler.client.report [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.522 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.522 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.524 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.530 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.530 2 INFO nova.compute.claims [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.589 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.590 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.614 2 INFO nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.631 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.669 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Successfully created port: ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.708 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.710 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.710 2 INFO nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Creating image(s)
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.733 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.754 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.777 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.781 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.822 2 DEBUG nova.policy [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b627cc9a6a884d6cb236df7e0154b97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '743fea8acfcc4e73b1981dc0dcf95f63', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.826 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Successfully updated port: ab477d11-03cd-475a-b8af-5317e974190d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.843 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.844 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquired lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.844 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.863 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.894 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.895 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.895 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.895 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.915 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:06 compute-0 nova_compute[259550]: 2025-10-07 14:19:06.919 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:06 compute-0 ceph-mon[74295]: pgmap v1635: 305 pgs: 305 active+clean; 146 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 07 14:19:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3626930480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.006 2 DEBUG nova.compute.manager [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-changed-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.006 2 DEBUG nova.compute.manager [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Refreshing instance network info cache due to event network-changed-ab477d11-03cd-475a-b8af-5317e974190d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.006 2 DEBUG oslo_concurrency.lockutils [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.010 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.274 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092637446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.340 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.346 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] resizing rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.380 2 DEBUG nova.compute.provider_tree [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.398 2 DEBUG nova.scheduler.client.report [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.432 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.432 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.434 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.439 2 DEBUG nova.objects.instance [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'migration_context' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.442 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.442 2 INFO nova.compute.claims [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.475 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.475 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Ensure instance console log exists: /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.476 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.476 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.476 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.523 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.524 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.547 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.567 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:07 compute-0 ovn_controller[151684]: 2025-10-07T14:19:07Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.675 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.677 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.678 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Creating image(s)
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.706 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.732 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.759 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.767 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.815 2 DEBUG nova.policy [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '270ab376b3a74a75a6bab5c13d3100c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.820 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Successfully created port: 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.827 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Updating instance_info_cache with network_info: [{"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.843 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.845 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 146 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 1.0 MiB/s wr, 47 op/s
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.845 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.846 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.871 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.875 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 92c7d888-ced0-4649-9ad6-317350eb7225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.911 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Releasing lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.911 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Instance network_info: |[{"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.912 2 DEBUG oslo_concurrency.lockutils [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.913 2 DEBUG nova.network.neutron [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Refreshing network info cache for port ab477d11-03cd-475a-b8af-5317e974190d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.916 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Start _get_guest_xml network_info=[{"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.921 2 WARNING nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.925 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.926 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.929 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.930 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.930 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.930 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.931 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.931 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.931 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.932 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.932 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.932 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.932 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.933 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.933 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.933 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.936 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.968 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Successfully updated port: ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.995 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.995 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquired lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.995 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:07 compute-0 nova_compute[259550]: 2025-10-07 14:19:07.997 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2092637446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.093 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846732.7396817, d8f33644-b6b8-492e-b9e6-32cd3ad4a28a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.094 2 INFO nova.compute.manager [-] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] VM Stopped (Lifecycle Event)
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.114 2 DEBUG nova.compute.manager [None req-c87fce6b-0b8f-499c-952d-544a52db17a8 - - - - - -] [instance: d8f33644-b6b8-492e-b9e6-32cd3ad4a28a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.221 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.322 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846733.3223326, 197479db-0f5f-4dc1-a59d-efca0e6e4dec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.323 2 INFO nova.compute.manager [-] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] VM Stopped (Lifecycle Event)
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.349 2 DEBUG nova.compute.manager [None req-9abc04e1-5dc5-431d-a15c-26aa0e3fa397 - - - - - -] [instance: 197479db-0f5f-4dc1-a59d-efca0e6e4dec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.375 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 92c7d888-ced0-4649-9ad6-317350eb7225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672953339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.436 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] resizing rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3212311545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.464 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.497 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.501 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:08.518 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:08.520 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.532 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.538 2 DEBUG nova.compute.provider_tree [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.573 2 DEBUG nova.scheduler.client.report [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.583 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 92c7d888-ced0-4649-9ad6-317350eb7225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.604 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.607 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.610 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.611 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.611 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Ensure instance console log exists: /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.612 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.612 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.612 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.617 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.617 2 INFO nova.compute.claims [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.689 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.689 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.706 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.710 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Successfully created port: 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.728 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.814 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Successfully updated port: 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.818 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.819 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.820 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Creating image(s)
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.840 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.859 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.880 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.883 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.923 2 DEBUG nova.compute.manager [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.923 2 DEBUG nova.compute.manager [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing instance network info cache due to event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.923 2 DEBUG oslo_concurrency.lockutils [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.924 2 DEBUG oslo_concurrency.lockutils [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.924 2 DEBUG nova.network.neutron [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.925 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.929 2 DEBUG nova.policy [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '270ab376b3a74a75a6bab5c13d3100c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475389620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.951 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.953 2 DEBUG nova.virt.libvirt.vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-1',id=70,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:04Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=f0fba6a7-6467-4ac2-99a6-a2dee485e570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.953 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.954 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.954 2 DEBUG nova.objects.instance [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0fba6a7-6467-4ac2-99a6-a2dee485e570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.958 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.958 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.959 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.959 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.981 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:08 compute-0 nova_compute[259550]: 2025-10-07 14:19:08.985 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 bb276692-f47b-4c86-864a-f3654cf63f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:09 compute-0 ceph-mon[74295]: pgmap v1636: 305 pgs: 305 active+clean; 146 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 1.0 MiB/s wr, 47 op/s
Oct 07 14:19:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/672953339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3212311545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2475389620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.023 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <uuid>f0fba6a7-6467-4ac2-99a6-a2dee485e570</uuid>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <name>instance-00000046</name>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:name>tempest-MultipleCreateTestJSON-server-1631138651-1</nova:name>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:07</nova:creationTime>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:user uuid="4c51e9a60f0f4b28b9d5cfaa7a0180eb">tempest-MultipleCreateTestJSON-1647657804-project-member</nova:user>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:project uuid="1b9950f52692469d9b44d8201fd3b990">tempest-MultipleCreateTestJSON-1647657804</nova:project>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <nova:port uuid="ab477d11-03cd-475a-b8af-5317e974190d">
Oct 07 14:19:09 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="serial">f0fba6a7-6467-4ac2-99a6-a2dee485e570</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="uuid">f0fba6a7-6467-4ac2-99a6-a2dee485e570</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk">
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config">
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:7c:90:1f"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <target dev="tapab477d11-03"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/console.log" append="off"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:09 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:09 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:09 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:09 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:09 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.024 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Preparing to wait for external event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.024 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.024 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.024 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.025 2 DEBUG nova.virt.libvirt.vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-1',id=70,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:04Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=f0fba6a7-6467-4ac2-99a6-a2dee485e570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.025 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.026 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.026 2 DEBUG os_vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab477d11-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab477d11-03, col_values=(('external_ids', {'iface-id': 'ab477d11-03cd-475a-b8af-5317e974190d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:90:1f', 'vm-uuid': 'f0fba6a7-6467-4ac2-99a6-a2dee485e570'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:09 compute-0 NetworkManager[44949]: <info>  [1759846749.0427] manager: (tapab477d11-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.047 2 INFO os_vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03')
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.049 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.097 2 DEBUG nova.compute.manager [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-changed-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.098 2 DEBUG nova.compute.manager [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Refreshing instance network info cache due to event network-changed-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.098 2 DEBUG oslo_concurrency.lockutils [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.148 2 DEBUG nova.network.neutron [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.192 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.193 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.193 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No VIF found with MAC fa:16:3e:7c:90:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.194 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Using config drive
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.230 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.296 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 bb276692-f47b-4c86-864a-f3654cf63f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.357 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] resizing rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.446 2 DEBUG nova.network.neutron [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Updating instance_info_cache with network_info: [{"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.458 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'migration_context' on Instance uuid bb276692-f47b-4c86-864a-f3654cf63f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994017013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.503 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.504 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Ensure instance console log exists: /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.505 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.505 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.505 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.511 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.515 2 DEBUG nova.compute.provider_tree [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.523 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Releasing lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.524 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Instance network_info: |[{"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.524 2 DEBUG oslo_concurrency.lockutils [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.524 2 DEBUG nova.network.neutron [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Refreshing network info cache for port ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.526 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Start _get_guest_xml network_info=[{"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.530 2 WARNING nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.535 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.536 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.541 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.541 2 DEBUG nova.virt.libvirt.host [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.541 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.542 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.542 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.542 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.543 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.543 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.543 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.543 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.544 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.544 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.544 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.544 2 DEBUG nova.virt.hardware [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.547 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.595 2 DEBUG nova.scheduler.client.report [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.618 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.618 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.670 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.671 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.691 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.753 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.761 2 DEBUG nova.network.neutron [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Updated VIF entry in instance network info cache for port ab477d11-03cd-475a-b8af-5317e974190d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.762 2 DEBUG nova.network.neutron [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Updating instance_info_cache with network_info: [{"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 267 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.847 2 DEBUG oslo_concurrency.lockutils [req-b4112a5c-3287-4ec8-8286-748756f6b604 req-fbd52c47-b930-4d70-a3de-7d80e1f90ad5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f0fba6a7-6467-4ac2-99a6-a2dee485e570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945512001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.958 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.960 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.960 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Creating image(s)
Oct 07 14:19:09 compute-0 nova_compute[259550]: 2025-10-07 14:19:09.982 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.009 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3994017013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1945512001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.039 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.044 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.080 2 DEBUG nova.network.neutron [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.084 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Creating config drive at /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.088 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy6y452o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.124 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.150 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.155 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.189 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.191 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Successfully created port: 6b980feb-2acb-4ed6-b48f-045cc3b7caff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.197 2 DEBUG nova.policy [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '270ab376b3a74a75a6bab5c13d3100c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.198 2 DEBUG oslo_concurrency.lockutils [req-6fd9daa6-10d0-4454-a331-b146041ad483 req-aacc087e-5141-4ed1-9f32-2621ff51eca1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.200 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.200 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.200 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.225 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.229 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.258 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.259 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.260 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy6y452o" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.283 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.287 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967399732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.625 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.626 2 DEBUG nova.virt.libvirt.vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-2',id=71,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJ
SON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:05Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=6f874afd-fefc-434c-a46f-cf611ba65494,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.627 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.628 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.629 2 DEBUG nova.objects.instance [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f874afd-fefc-434c-a46f-cf611ba65494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.646 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <uuid>6f874afd-fefc-434c-a46f-cf611ba65494</uuid>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <name>instance-00000047</name>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:name>tempest-MultipleCreateTestJSON-server-1631138651-2</nova:name>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:09</nova:creationTime>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:user uuid="4c51e9a60f0f4b28b9d5cfaa7a0180eb">tempest-MultipleCreateTestJSON-1647657804-project-member</nova:user>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:project uuid="1b9950f52692469d9b44d8201fd3b990">tempest-MultipleCreateTestJSON-1647657804</nova:project>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <nova:port uuid="ec6afd34-2f7e-47f1-a3b1-c8c920a624f5">
Oct 07 14:19:10 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="serial">6f874afd-fefc-434c-a46f-cf611ba65494</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="uuid">6f874afd-fefc-434c-a46f-cf611ba65494</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6f874afd-fefc-434c-a46f-cf611ba65494_disk">
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6f874afd-fefc-434c-a46f-cf611ba65494_disk.config">
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6e:86:d7"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <target dev="tapec6afd34-2f"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/console.log" append="off"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:10 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:10 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:10 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:10 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:10 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.646 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Preparing to wait for external event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.646 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.647 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.647 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.648 2 DEBUG nova.virt.libvirt.vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-2',id=71,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:05Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=6f874afd-fefc-434c-a46f-cf611ba65494,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.648 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.649 2 DEBUG nova.network.os_vif_util [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.649 2 DEBUG os_vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec6afd34-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec6afd34-2f, col_values=(('external_ids', {'iface-id': 'ec6afd34-2f7e-47f1-a3b1-c8c920a624f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:86:d7', 'vm-uuid': '6f874afd-fefc-434c-a46f-cf611ba65494'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:10 compute-0 NetworkManager[44949]: <info>  [1759846750.6566] manager: (tapec6afd34-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.664 2 INFO os_vif [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f')
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.779 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] No VIF found with MAC fa:16:3e:6e:86:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.780 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Using config drive
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.811 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.817 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.914 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Successfully updated port: 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.930 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.930 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquired lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.930 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.981 2 DEBUG nova.compute.manager [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-changed-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.982 2 DEBUG nova.compute.manager [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Refreshing instance network info cache due to event network-changed-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:10 compute-0 nova_compute[259550]: 2025-10-07 14:19:10.982 2 DEBUG oslo_concurrency.lockutils [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:11 compute-0 ceph-mon[74295]: pgmap v1637: 305 pgs: 305 active+clean; 267 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct 07 14:19:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/967399732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.144 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.223 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Creating config drive at /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.230 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9b906d_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.265 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config f0fba6a7-6467-4ac2-99a6-a2dee485e570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.978s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.266 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Deleting local config drive /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570/disk.config because it was imported into RBD.
Oct 07 14:19:11 compute-0 kernel: tapab477d11-03: entered promiscuous mode
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.3247] manager: (tapab477d11-03): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.333 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:11 compute-0 ovn_controller[151684]: 2025-10-07T14:19:11Z|00701|binding|INFO|Claiming lport ab477d11-03cd-475a-b8af-5317e974190d for this chassis.
Oct 07 14:19:11 compute-0 ovn_controller[151684]: 2025-10-07T14:19:11Z|00702|binding|INFO|ab477d11-03cd-475a-b8af-5317e974190d: Claiming fa:16:3e:7c:90:1f 10.100.0.13
Oct 07 14:19:11 compute-0 systemd-udevd[335140]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.370 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:90:1f 10.100.0.13'], port_security=['fa:16:3e:7c:90:1f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0fba6a7-6467-4ac2-99a6-a2dee485e570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ab477d11-03cd-475a-b8af-5317e974190d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.371 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ab477d11-03cd-475a-b8af-5317e974190d in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 bound to our chassis
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.373 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.3781] device (tapab477d11-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.3792] device (tapab477d11-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:11 compute-0 ovn_controller[151684]: 2025-10-07T14:19:11Z|00703|binding|INFO|Setting lport ab477d11-03cd-475a-b8af-5317e974190d ovn-installed in OVS
Oct 07 14:19:11 compute-0 ovn_controller[151684]: 2025-10-07T14:19:11Z|00704|binding|INFO|Setting lport ab477d11-03cd-475a-b8af-5317e974190d up in Southbound
Oct 07 14:19:11 compute-0 systemd-machined[214580]: New machine qemu-85-instance-00000046.
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000046.
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.392 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9df06af-8c34-4908-86f9-8f164021f0c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.394 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6692b777-81 in ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.395 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9b906d_" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.398 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6692b777-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.398 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb09f74-45f3-4b72-b0a8-3095855a9e7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3537afd6-88b6-438d-8eb0-fba66f22662b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.412 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[592ff880-69e8-4cbb-ae0a-d97b3f7a3a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.426 2 DEBUG nova.storage.rbd_utils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] rbd image 6f874afd-fefc-434c-a46f-cf611ba65494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.431 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config 6f874afd-fefc-434c-a46f-cf611ba65494_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.441 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf9a152-6faa-4aa3-bb0e-5206a26cb8e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.477 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[20a560aa-821d-4348-8992-9a02436559d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.483 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a6404c-9f76-4a2d-9879-31d8d54a052b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.4851] manager: (tap6692b777-80): new Veth device (/org/freedesktop/NetworkManager/Devices/308)
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.529 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[086616b7-4eb8-48e9-b586-497851046f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.532 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] resizing rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.532 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b94e6552-3565-46ce-be05-75a89f216266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.5579] device (tap6692b777-80): carrier: link connected
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.564 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[820fef8c-6bd7-4bbc-9b39-4423f62425a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a98b3-d9ab-4f99-b6d5-a0ee59461432]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733512, 'reachable_time': 18417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335252, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23039189-7d08-46df-8024-68fae51a1df5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:254c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733512, 'tstamp': 733512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335253, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.620 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d281649-49c8-4690-a5c1-6fbbc1908093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733512, 'reachable_time': 18417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335254, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.650 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[297bfde6-bc0e-48fe-8f01-f178ae9bc8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.728 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8080dfec-f232-42f3-aef2-9ba463940ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.729 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.729 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:11 compute-0 kernel: tap6692b777-80: entered promiscuous mode
Oct 07 14:19:11 compute-0 NetworkManager[44949]: <info>  [1759846751.7328] manager: (tap6692b777-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.738 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:11 compute-0 ovn_controller[151684]: 2025-10-07T14:19:11Z|00705|binding|INFO|Releasing lport c2ea13ef-38e7-4acc-a428-c933f929d020 from this chassis (sb_readonly=0)
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.763 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.766 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fee01e1-b85d-467d-bf37-4aad72820cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.769 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/6692b777-8c3f-47b2-9a67-3efff279d953.pid.haproxy
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:19:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:11.771 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'env', 'PROCESS_TAG=haproxy-6692b777-8c3f-47b2-9a67-3efff279d953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6692b777-8c3f-47b2-9a67-3efff279d953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:19:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 323 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 7.4 MiB/s wr, 163 op/s
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.867 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f82c687-5361-4922-85c2-ea9d7e48a39a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.914 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.915 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Ensure instance console log exists: /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.915 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.916 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.916 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.957 2 DEBUG oslo_concurrency.processutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config 6f874afd-fefc-434c-a46f-cf611ba65494_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:11 compute-0 nova_compute[259550]: 2025-10-07 14:19:11.958 2 INFO nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Deleting local config drive /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494/disk.config because it was imported into RBD.
Oct 07 14:19:12 compute-0 NetworkManager[44949]: <info>  [1759846752.0145] manager: (tapec6afd34-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Oct 07 14:19:12 compute-0 kernel: tapec6afd34-2f: entered promiscuous mode
Oct 07 14:19:12 compute-0 systemd-udevd[335208]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:12 compute-0 ovn_controller[151684]: 2025-10-07T14:19:12Z|00706|binding|INFO|Claiming lport ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 for this chassis.
Oct 07 14:19:12 compute-0 ovn_controller[151684]: 2025-10-07T14:19:12Z|00707|binding|INFO|ec6afd34-2f7e-47f1-a3b1-c8c920a624f5: Claiming fa:16:3e:6e:86:d7 10.100.0.7
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:12 compute-0 NetworkManager[44949]: <info>  [1759846752.0275] device (tapec6afd34-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:12 compute-0 NetworkManager[44949]: <info>  [1759846752.0287] device (tapec6afd34-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.033 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:86:d7 10.100.0.7'], port_security=['fa:16:3e:6e:86:d7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f874afd-fefc-434c-a46f-cf611ba65494', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:12 compute-0 ovn_controller[151684]: 2025-10-07T14:19:12Z|00708|binding|INFO|Setting lport ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 ovn-installed in OVS
Oct 07 14:19:12 compute-0 ovn_controller[151684]: 2025-10-07T14:19:12Z|00709|binding|INFO|Setting lport ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 up in Southbound
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:12 compute-0 systemd-machined[214580]: New machine qemu-86-instance-00000047.
Oct 07 14:19:12 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000047.
Oct 07 14:19:12 compute-0 ceph-mon[74295]: pgmap v1638: 305 pgs: 305 active+clean; 323 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 7.4 MiB/s wr, 163 op/s
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.158 2 DEBUG nova.network.neutron [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Updated VIF entry in instance network info cache for port ec6afd34-2f7e-47f1-a3b1-c8c920a624f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.159 2 DEBUG nova.network.neutron [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Updating instance_info_cache with network_info: [{"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:12 compute-0 podman[335372]: 2025-10-07 14:19:12.199501939 +0000 UTC m=+0.083989550 container create 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:19:12 compute-0 podman[335372]: 2025-10-07 14:19:12.141251581 +0000 UTC m=+0.025739192 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.256 2 DEBUG oslo_concurrency.lockutils [req-74aec647-1c96-467c-af1f-91cbe20c461f req-d507d5bc-3e6e-4930-bc9f-383d7134f288 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6f874afd-fefc-434c-a46f-cf611ba65494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:12 compute-0 systemd[1]: Started libpod-conmon-45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383.scope.
Oct 07 14:19:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a21a875943054032e28bfd0fed4b66bc15c250c60e3bc527416cbad3807ab00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:12 compute-0 podman[335372]: 2025-10-07 14:19:12.376814483 +0000 UTC m=+0.261302104 container init 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:12 compute-0 podman[335372]: 2025-10-07 14:19:12.387783493 +0000 UTC m=+0.272271104 container start 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.403 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Successfully created port: b7e52cb7-a446-4784-b8e5-3819b3120f8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:12 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [NOTICE]   (335430) : New worker (335436) forked
Oct 07 14:19:12 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [NOTICE]   (335430) : Loading success.
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.470 2 DEBUG nova.network.neutron [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.477 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.479 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.485 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846752.4852185, f0fba6a7-6467-4ac2-99a6-a2dee485e570 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.486 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] VM Started (Lifecycle Event)
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1acf337-b266-42e9-a879-0a46b0a0e806]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.500 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.500 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance network_info: |[{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.503 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start _get_guest_xml network_info=[{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.505 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.509 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846752.485325, f0fba6a7-6467-4ac2-99a6-a2dee485e570 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.509 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] VM Paused (Lifecycle Event)
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.511 2 WARNING nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.515 2 DEBUG nova.virt.libvirt.host [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.515 2 DEBUG nova.virt.libvirt.host [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.518 2 DEBUG nova.virt.libvirt.host [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.519 2 DEBUG nova.virt.libvirt.host [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.519 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.520 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.520 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.520 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.520 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.521 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.521 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.521 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.521 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.521 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.521 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.522 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.522 2 DEBUG nova.virt.hardware [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.524 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.524 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[40c8b3c6-0aca-4c52-bca6-46edb8699a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.527 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[10d6d98c-4975-4811-a4ce-84470cccb303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.554 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aac4901b-c32f-48bd-85ae-763eba3b28b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.555 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.561 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.573 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[855dbae6-f57c-44c3-b263-273e676a2d8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733512, 'reachable_time': 18417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335452, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.581 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36273873-659d-44a6-8721-4bf184239c6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733525, 'tstamp': 733525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335453, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733528, 'tstamp': 733528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335453, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.595 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.634 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.635 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.635 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:12.636 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.932 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846752.9315834, 6f874afd-fefc-434c-a46f-cf611ba65494 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.932 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] VM Started (Lifecycle Event)
Oct 07 14:19:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135396451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.967 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.986 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:12 compute-0 nova_compute[259550]: 2025-10-07 14:19:12.990 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.020 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.025 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846752.931901, 6f874afd-fefc-434c-a46f-cf611ba65494 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.025 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] VM Paused (Lifecycle Event)
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.043 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.047 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.063 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3135396451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.175 2 DEBUG nova.compute.manager [req-f4fd6e72-4517-49fc-9c68-5b5da5ba1538 req-3975172e-c550-4182-9e84-0fd329189b9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.176 2 DEBUG oslo_concurrency.lockutils [req-f4fd6e72-4517-49fc-9c68-5b5da5ba1538 req-3975172e-c550-4182-9e84-0fd329189b9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.176 2 DEBUG oslo_concurrency.lockutils [req-f4fd6e72-4517-49fc-9c68-5b5da5ba1538 req-3975172e-c550-4182-9e84-0fd329189b9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.177 2 DEBUG oslo_concurrency.lockutils [req-f4fd6e72-4517-49fc-9c68-5b5da5ba1538 req-3975172e-c550-4182-9e84-0fd329189b9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.177 2 DEBUG nova.compute.manager [req-f4fd6e72-4517-49fc-9c68-5b5da5ba1538 req-3975172e-c550-4182-9e84-0fd329189b9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Processing event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.178 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.182 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846753.1821997, 6f874afd-fefc-434c-a46f-cf611ba65494 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.182 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] VM Resumed (Lifecycle Event)
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.203 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.207 2 INFO nova.virt.libvirt.driver [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Instance spawned successfully.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.209 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.211 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.223 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.247 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.252 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.252 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.253 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.254 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.254 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.255 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.285 2 DEBUG nova.compute.manager [req-388b3112-d9cc-4951-91a0-572a4d2cae1c req-d07d06e4-4c05-4865-9c0d-bd3a61d44016 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.285 2 DEBUG oslo_concurrency.lockutils [req-388b3112-d9cc-4951-91a0-572a4d2cae1c req-d07d06e4-4c05-4865-9c0d-bd3a61d44016 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.285 2 DEBUG oslo_concurrency.lockutils [req-388b3112-d9cc-4951-91a0-572a4d2cae1c req-d07d06e4-4c05-4865-9c0d-bd3a61d44016 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.286 2 DEBUG oslo_concurrency.lockutils [req-388b3112-d9cc-4951-91a0-572a4d2cae1c req-d07d06e4-4c05-4865-9c0d-bd3a61d44016 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.286 2 DEBUG nova.compute.manager [req-388b3112-d9cc-4951-91a0-572a4d2cae1c req-d07d06e4-4c05-4865-9c0d-bd3a61d44016 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Processing event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.287 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.291 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846753.291018, f0fba6a7-6467-4ac2-99a6-a2dee485e570 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.291 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] VM Resumed (Lifecycle Event)
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.293 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.303 2 INFO nova.virt.libvirt.driver [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Instance spawned successfully.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.304 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.309 2 INFO nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Took 7.88 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.310 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.318 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.325 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.329 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.329 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.330 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.331 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.333 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.333 2 DEBUG nova.virt.libvirt.driver [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.366 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.407 2 INFO nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Took 9.72 seconds to build instance.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.413 2 INFO nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.414 2 DEBUG nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.426 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.448 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Updating instance_info_cache with network_info: [{"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573987673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.477 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.478 2 DEBUG nova.virt.libvirt.vif [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-2005089717',display_name='tempest-ServerRescueTestJSONUnderV235-server-2005089717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-2005089717',id=72,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='743fea8acfcc4e73b1981dc0dcf95f63',ramdisk_id='',reservation_id='r-g9plrghc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-510433047',owner_user_name='tempest-ServerRescueTestJSONUnderV235-510433047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:06Z,user_data=None,user_id='b627cc9a6a884d6cb236df7e0154b97a',uuid=1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.478 2 DEBUG nova.network.os_vif_util [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converting VIF {"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.479 2 DEBUG nova.network.os_vif_util [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.480 2 DEBUG nova.objects.instance [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.503 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Successfully updated port: 6b980feb-2acb-4ed6-b48f-045cc3b7caff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.532 2 INFO nova.compute.manager [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Took 9.89 seconds to build instance.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.598 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <uuid>1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</uuid>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <name>instance-00000048</name>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-2005089717</nova:name>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:12</nova:creationTime>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:user uuid="b627cc9a6a884d6cb236df7e0154b97a">tempest-ServerRescueTestJSONUnderV235-510433047-project-member</nova:user>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:project uuid="743fea8acfcc4e73b1981dc0dcf95f63">tempest-ServerRescueTestJSONUnderV235-510433047</nova:project>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <nova:port uuid="8017da49-bbc8-4eae-8b3e-79bb4e587e62">
Oct 07 14:19:13 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="serial">1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="uuid">1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk">
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config">
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:06:ba:6c"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <target dev="tap8017da49-bb"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/console.log" append="off"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:13 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:13 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:13 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:13 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:13 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.598 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Preparing to wait for external event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.603 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.603 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.603 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.604 2 DEBUG nova.virt.libvirt.vif [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-2005089717',display_name='tempest-ServerRescueTestJSONUnderV235-server-2005089717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-2005089717',id=72,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='743fea8acfcc4e73b1981dc0dcf95f63',ramdisk_id='',reservation_id='r-g9plrghc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-510433047',owner_user_
name='tempest-ServerRescueTestJSONUnderV235-510433047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:06Z,user_data=None,user_id='b627cc9a6a884d6cb236df7e0154b97a',uuid=1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.604 2 DEBUG nova.network.os_vif_util [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converting VIF {"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.605 2 DEBUG nova.network.os_vif_util [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.606 2 DEBUG os_vif [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.609 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Releasing lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.609 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Instance network_info: |[{"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.610 2 DEBUG oslo_concurrency.lockutils [None req-b9d09302-f2a7-438a-b048-a3ed04950c25 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.613 2 DEBUG oslo_concurrency.lockutils [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.613 2 DEBUG nova.network.neutron [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Refreshing network info cache for port 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.616 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Start _get_guest_xml network_info=[{"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.617 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.617 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquired lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.617 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.619 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8017da49-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8017da49-bb, col_values=(('external_ids', {'iface-id': '8017da49-bbc8-4eae-8b3e-79bb4e587e62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:ba:6c', 'vm-uuid': '1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:13 compute-0 NetworkManager[44949]: <info>  [1759846753.6226] manager: (tap8017da49-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.631 2 INFO os_vif [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb')
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.632 2 WARNING nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.641 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.642 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.655 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.656 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.656 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.657 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.657 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.658 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.658 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.658 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.659 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.659 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.660 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.660 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.660 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.660 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.663 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.731 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.732 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.740 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No VIF found with MAC fa:16:3e:06:ba:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.740 2 INFO nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Using config drive
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.765 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.816 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 381 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 9.5 MiB/s wr, 214 op/s
Oct 07 14:19:13 compute-0 nova_compute[259550]: 2025-10-07 14:19:13.992 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Successfully updated port: b7e52cb7-a446-4784-b8e5-3819b3120f8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.010 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.010 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquired lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.010 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541899964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.156 2 INFO nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Creating config drive at /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.162 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrpmbr9n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.201 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.209 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/573987673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:14 compute-0 ceph-mon[74295]: pgmap v1639: 305 pgs: 305 active+clean; 381 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 9.5 MiB/s wr, 214 op/s
Oct 07 14:19:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1541899964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.237 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.242 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.308 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrpmbr9n" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.358 2 DEBUG nova.storage.rbd_utils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.361 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.707 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.707 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.708 2 INFO nova.compute.manager [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Rebooting instance
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.723 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.723 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.723 2 DEBUG nova.network.neutron [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164372765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.807 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.808 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-2',id=74,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:07Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=92c7d888-ced0-4649-9ad6-317350eb7225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.809 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.809 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.810 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92c7d888-ced0-4649-9ad6-317350eb7225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.828 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <uuid>92c7d888-ced0-4649-9ad6-317350eb7225</uuid>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <name>instance-0000004a</name>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1269325506-2</nova:name>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:13</nova:creationTime>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:user uuid="270ab376b3a74a75a6bab5c13d3100c0">tempest-ListServersNegativeTestJSON-1836187401-project-member</nova:user>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:project uuid="cd27f5e3b8cf47649e0c6593a61034a5">tempest-ListServersNegativeTestJSON-1836187401</nova:project>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <nova:port uuid="25f2e3b0-8524-4ad6-aa04-e562aad3ab5e">
Oct 07 14:19:14 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="serial">92c7d888-ced0-4649-9ad6-317350eb7225</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="uuid">92c7d888-ced0-4649-9ad6-317350eb7225</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/92c7d888-ced0-4649-9ad6-317350eb7225_disk">
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/92c7d888-ced0-4649-9ad6-317350eb7225_disk.config">
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:19:84:e7"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <target dev="tap25f2e3b0-85"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/console.log" append="off"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:14 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:14 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:14 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:14 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:14 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.828 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Preparing to wait for external event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.829 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.829 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.829 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.830 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-2',id=74,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:07Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=92c7d888-ced0-4649-9ad6-317350eb7225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.830 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.830 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.831 2 DEBUG os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.832 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.832 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f2e3b0-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f2e3b0-85, col_values=(('external_ids', {'iface-id': '25f2e3b0-8524-4ad6-aa04-e562aad3ab5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:84:e7', 'vm-uuid': '92c7d888-ced0-4649-9ad6-317350eb7225'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:14 compute-0 NetworkManager[44949]: <info>  [1759846754.8398] manager: (tap25f2e3b0-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.846 2 INFO os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85')
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.863 2 DEBUG oslo_concurrency.processutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.864 2 INFO nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Deleting local config drive /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config because it was imported into RBD.
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.905 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.905 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.905 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No VIF found with MAC fa:16:3e:19:84:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.906 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Using config drive
Oct 07 14:19:14 compute-0 kernel: tap8017da49-bb: entered promiscuous mode
Oct 07 14:19:14 compute-0 NetworkManager[44949]: <info>  [1759846754.9185] manager: (tap8017da49-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Oct 07 14:19:14 compute-0 ovn_controller[151684]: 2025-10-07T14:19:14Z|00710|binding|INFO|Claiming lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 for this chassis.
Oct 07 14:19:14 compute-0 ovn_controller[151684]: 2025-10-07T14:19:14Z|00711|binding|INFO|8017da49-bbc8-4eae-8b3e-79bb4e587e62: Claiming fa:16:3e:06:ba:6c 10.100.0.7
Oct 07 14:19:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:14.938 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:ba:6c 10.100.0.7'], port_security=['fa:16:3e:06:ba:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f862c074-27c9-45f5-8f61-3c7cbb05b94a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '743fea8acfcc4e73b1981dc0dcf95f63', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e3566eb-dd32-483b-8573-b62f6b6a147e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a2b256-a7c8-4510-ae1e-632d23b47983, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8017da49-bbc8-4eae-8b3e-79bb4e587e62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:14.939 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 in datapath f862c074-27c9-45f5-8f61-3c7cbb05b94a bound to our chassis
Oct 07 14:19:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:14.940 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f862c074-27c9-45f5-8f61-3c7cbb05b94a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:19:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:14.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe3bc10-08ee-425d-a4fb-97629730f24e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.941 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 ovn_controller[151684]: 2025-10-07T14:19:14Z|00712|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 ovn-installed in OVS
Oct 07 14:19:14 compute-0 ovn_controller[151684]: 2025-10-07T14:19:14Z|00713|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 up in Southbound
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 systemd-udevd[335670]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:14 compute-0 nova_compute[259550]: 2025-10-07 14:19:14.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:14 compute-0 NetworkManager[44949]: <info>  [1759846754.9720] device (tap8017da49-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:14 compute-0 NetworkManager[44949]: <info>  [1759846754.9749] device (tap8017da49-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:14 compute-0 systemd-machined[214580]: New machine qemu-87-instance-00000048.
Oct 07 14:19:15 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-00000048.
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.179 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Updating instance_info_cache with network_info: [{"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.215 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Releasing lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.216 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Instance network_info: |[{"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.218 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Start _get_guest_xml network_info=[{"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.223 2 WARNING nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.228 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.229 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.233 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.233 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.234 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.234 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.235 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.235 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.235 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.236 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.236 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.236 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.237 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.237 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.237 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.237 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.240 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1164372765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.483 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Creating config drive at /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.490 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgu31f2ei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.618 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-changed-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.619 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Refreshing instance network info cache due to event network-changed-6b980feb-2acb-4ed6-b48f-045cc3b7caff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.619 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.620 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.620 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Refreshing network info cache for port 6b980feb-2acb-4ed6-b48f-045cc3b7caff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.630 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgu31f2ei" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.657 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 92c7d888-ced0-4649-9ad6-317350eb7225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.662 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config 92c7d888-ced0-4649-9ad6-317350eb7225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260784764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.709 2 DEBUG nova.compute.manager [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.710 2 DEBUG oslo_concurrency.lockutils [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.711 2 DEBUG oslo_concurrency.lockutils [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.711 2 DEBUG oslo_concurrency.lockutils [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.712 2 DEBUG nova.compute.manager [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] No waiting events found dispatching network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.712 2 WARNING nova.compute.manager [req-f464251b-625f-4e06-a1b3-48f6fde8e843 req-0c6d4e5d-3b61-43d5-8803-6d5064f685df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received unexpected event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d for instance with vm_state active and task_state None.
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.722 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.753 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.762 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 401 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 11 MiB/s wr, 296 op/s
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.861 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config 92c7d888-ced0-4649-9ad6-317350eb7225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.863 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Deleting local config drive /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225/disk.config because it was imported into RBD.
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.913 2 DEBUG nova.network.neutron [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Updated VIF entry in instance network info cache for port 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.914 2 DEBUG nova.network.neutron [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Updating instance_info_cache with network_info: [{"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.934 2 DEBUG oslo_concurrency.lockutils [req-fe2ad607-626b-49aa-bde9-ccb1f9b4ae67 req-a5803b73-be07-4add-a774-54803bcb8ead 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-92c7d888-ced0-4649-9ad6-317350eb7225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:15 compute-0 kernel: tap25f2e3b0-85: entered promiscuous mode
Oct 07 14:19:15 compute-0 NetworkManager[44949]: <info>  [1759846755.9374] manager: (tap25f2e3b0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Oct 07 14:19:15 compute-0 ovn_controller[151684]: 2025-10-07T14:19:15Z|00714|binding|INFO|Claiming lport 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e for this chassis.
Oct 07 14:19:15 compute-0 ovn_controller[151684]: 2025-10-07T14:19:15Z|00715|binding|INFO|25f2e3b0-8524-4ad6-aa04-e562aad3ab5e: Claiming fa:16:3e:19:84:e7 10.100.0.3
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.954 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:84:e7 10.100.0.3'], port_security=['fa:16:3e:19:84:e7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '92c7d888-ced0-4649-9ad6-317350eb7225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.956 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 bound to our chassis
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.958 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:15 compute-0 NetworkManager[44949]: <info>  [1759846755.9598] device (tap25f2e3b0-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:15 compute-0 NetworkManager[44949]: <info>  [1759846755.9609] device (tap25f2e3b0-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:15 compute-0 ovn_controller[151684]: 2025-10-07T14:19:15Z|00716|binding|INFO|Setting lport 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e ovn-installed in OVS
Oct 07 14:19:15 compute-0 ovn_controller[151684]: 2025-10-07T14:19:15Z|00717|binding|INFO|Setting lport 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e up in Southbound
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69ef0f24-4a7e-47a8-a9d4-7498d792b0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.974 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb7fe9a9-81 in ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.976 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb7fe9a9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.976 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d8e4a5-fa53-4693-a34c-14ffd36bf8cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.977 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02f3ab4e-dc1e-42d2-9bf7-22e9945427e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:15.989 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5d855fd1-00e4-4fb8-ad9a-6190f3a5a066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.997 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846755.9947228, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:15 compute-0 nova_compute[259550]: 2025-10-07 14:19:15.998 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Started (Lifecycle Event)
Oct 07 14:19:16 compute-0 systemd-machined[214580]: New machine qemu-88-instance-0000004a.
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.019 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:16 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004a.
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d47f4f1f-e16e-46c1-a915-e03e2cc7fd07]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.028 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846755.994847, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.029 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Paused (Lifecycle Event)
Oct 07 14:19:16 compute-0 podman[335809]: 2025-10-07 14:19:16.031441067 +0000 UTC m=+0.135798639 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 07 14:19:16 compute-0 podman[335810]: 2025-10-07 14:19:16.039322665 +0000 UTC m=+0.141277343 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.064 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[41e293ad-fc25-4ffe-bc93-e5a6a2328bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.065 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:16 compute-0 NetworkManager[44949]: <info>  [1759846756.0728] manager: (tapdb7fe9a9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.074 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c45810c1-6f8e-4e29-9254-82bc37c28b35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.078 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.106 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.114 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[64b50562-9f8d-4956-91f7-498c9b305f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.117 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8513833a-223c-4c74-9a69-e4d3543a281f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:16 compute-0 NetworkManager[44949]: <info>  [1759846756.1559] device (tapdb7fe9a9-80): carrier: link connected
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.162 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[544f3114-1321-4050-ae82-44749f14bc02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.182 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1fd8f9-3f29-4bd6-874f-7cd0050fe86e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335913, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.198 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b72197a-0e12-4242-9561-92d273c94f13]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:c43e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733972, 'tstamp': 733972}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335914, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.217 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6688b5bc-a0bb-4940-b359-c6bcc4061272]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335915, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648749980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b495c93d-1a39-47ce-9a3c-efd76de3944c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.269 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.271 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-1',id=73,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest
-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:08Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=bb276692-f47b-4c86-864a-f3654cf63f5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.272 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.273 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.275 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb276692-f47b-4c86-864a-f3654cf63f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.299 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <uuid>bb276692-f47b-4c86-864a-f3654cf63f5a</uuid>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <name>instance-00000049</name>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1269325506-1</nova:name>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:15</nova:creationTime>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:user uuid="270ab376b3a74a75a6bab5c13d3100c0">tempest-ListServersNegativeTestJSON-1836187401-project-member</nova:user>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:project uuid="cd27f5e3b8cf47649e0c6593a61034a5">tempest-ListServersNegativeTestJSON-1836187401</nova:project>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <nova:port uuid="6b980feb-2acb-4ed6-b48f-045cc3b7caff">
Oct 07 14:19:16 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="serial">bb276692-f47b-4c86-864a-f3654cf63f5a</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="uuid">bb276692-f47b-4c86-864a-f3654cf63f5a</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/bb276692-f47b-4c86-864a-f3654cf63f5a_disk">
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config">
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:16 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:cf:6e:23"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <target dev="tap6b980feb-2a"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/console.log" append="off"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:16 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:16 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:16 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:16 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:16 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.300 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Preparing to wait for external event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.300 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.300 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.301 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.302 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-1',id=73,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:08Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=bb276692-f47b-4c86-864a-f3654cf63f5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.302 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.303 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.304 2 DEBUG os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.315 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b980feb-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b980feb-2a, col_values=(('external_ids', {'iface-id': '6b980feb-2acb-4ed6-b48f-045cc3b7caff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:6e:23', 'vm-uuid': 'bb276692-f47b-4c86-864a-f3654cf63f5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 NetworkManager[44949]: <info>  [1759846756.3243] manager: (tap6b980feb-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.330 2 INFO os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a')
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[955226bf-48f0-46ba-ac0d-cc147e4cb04d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb7fe9a9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 kernel: tapdb7fe9a9-80: entered promiscuous mode
Oct 07 14:19:16 compute-0 NetworkManager[44949]: <info>  [1759846756.3485] manager: (tapdb7fe9a9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb7fe9a9-80, col_values=(('external_ids', {'iface-id': '544e29c8-c520-4608-9347-a7d53124e335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 ovn_controller[151684]: 2025-10-07T14:19:16Z|00718|binding|INFO|Releasing lport 544e29c8-c520-4608-9347-a7d53124e335 from this chassis (sb_readonly=0)
Oct 07 14:19:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2260784764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:16 compute-0 ceph-mon[74295]: pgmap v1640: 305 pgs: 305 active+clean; 401 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 11 MiB/s wr, 296 op/s
Oct 07 14:19:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1648749980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.381 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.381 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.381 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No VIF found with MAC fa:16:3e:cf:6e:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.382 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Using config drive
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.382 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db7fe9a9-837e-4f1a-8110-d3f657219e12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db7fe9a9-837e-4f1a-8110-d3f657219e12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f353a280-c7c0-465e-b9f6-7c1cc40fb544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.386 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/db7fe9a9-837e-4f1a-8110-d3f657219e12.pid.haproxy
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:19:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:16.386 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'env', 'PROCESS_TAG=haproxy-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db7fe9a9-837e-4f1a-8110-d3f657219e12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.405 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.454 2 DEBUG nova.network.neutron [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Updating instance_info_cache with network_info: [{"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.482 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Releasing lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.482 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Instance network_info: |[{"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.484 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Start _get_guest_xml network_info=[{"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.490 2 WARNING nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.495 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.497 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.502 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.502 2 DEBUG nova.virt.libvirt.host [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.504 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.504 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.505 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.505 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.505 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.505 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.506 2 DEBUG nova.virt.hardware [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.508 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:16 compute-0 podman[336032]: 2025-10-07 14:19:16.779995151 +0000 UTC m=+0.076537503 container create 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:19:16 compute-0 systemd[1]: Started libpod-conmon-895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5.scope.
Oct 07 14:19:16 compute-0 podman[336032]: 2025-10-07 14:19:16.731559602 +0000 UTC m=+0.028101974 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:19:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843bcc122bf2585225dbc5eda2ef41a336207dc6b30a2484c3b0ddeea21be137/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:16 compute-0 podman[336032]: 2025-10-07 14:19:16.872774092 +0000 UTC m=+0.169316474 container init 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:16 compute-0 podman[336032]: 2025-10-07 14:19:16.879806248 +0000 UTC m=+0.176348600 container start 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:19:16 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [NOTICE]   (336050) : New worker (336052) forked
Oct 07 14:19:16 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [NOTICE]   (336050) : Loading success.
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.984 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846756.9834008, 92c7d888-ced0-4649-9ad6-317350eb7225 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:16 compute-0 nova_compute[259550]: 2025-10-07 14:19:16.985 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] VM Started (Lifecycle Event)
Oct 07 14:19:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875498266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.013 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.044 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.049 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.103 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Creating config drive at /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.110 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj_5j6d1x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.152 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.156 2 DEBUG nova.network.neutron [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.163 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846756.9835913, 92c7d888-ced0-4649-9ad6-317350eb7225 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.163 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] VM Paused (Lifecycle Event)
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.266 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj_5j6d1x" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.301 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.307 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.346 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.347 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.349 2 DEBUG nova.compute.manager [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.354 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/875498266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.466 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.468 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config bb276692-f47b-4c86-864a-f3654cf63f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.468 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Deleting local config drive /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a/disk.config because it was imported into RBD.
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.5254] manager: (tap6b980feb-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Oct 07 14:19:17 compute-0 kernel: tap6b980feb-2a: entered promiscuous mode
Oct 07 14:19:17 compute-0 systemd-udevd[335901]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00719|binding|INFO|Claiming lport 6b980feb-2acb-4ed6-b48f-045cc3b7caff for this chassis.
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00720|binding|INFO|6b980feb-2acb-4ed6-b48f-045cc3b7caff: Claiming fa:16:3e:cf:6e:23 10.100.0.13
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.5408] device (tap6b980feb-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.5418] device (tap6b980feb-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00721|binding|INFO|Setting lport 6b980feb-2acb-4ed6-b48f-045cc3b7caff ovn-installed in OVS
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 systemd-machined[214580]: New machine qemu-89-instance-00000049.
Oct 07 14:19:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304415721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:17 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-00000049.
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00722|binding|INFO|Setting lport 6b980feb-2acb-4ed6-b48f-045cc3b7caff up in Southbound
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.585 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:6e:23 10.100.0.13'], port_security=['fa:16:3e:cf:6e:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb276692-f47b-4c86-864a-f3654cf63f5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b980feb-2acb-4ed6-b48f-045cc3b7caff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.586 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b980feb-2acb-4ed6-b48f-045cc3b7caff in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 bound to our chassis
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.588 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.598 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.599 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-3',id=75,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:09Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=6f82c687-5361-4922-85c2-ea9d7e48a39a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.600 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.601 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.602 2 DEBUG nova.objects.instance [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f82c687-5361-4922-85c2-ea9d7e48a39a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdf176f-1baa-4f27-bdc2-c48d656678ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.633 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f13b6d9b-42be-44de-a3be-4ad1a24448de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.638 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[73c08133-a00d-407d-89ff-82ef34e57733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.643 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Updated VIF entry in instance network info cache for port 6b980feb-2acb-4ed6-b48f-045cc3b7caff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.644 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Updating instance_info_cache with network_info: [{"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.688 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f204a516-7b6d-4581-a0c0-515b26bd9120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dd336a-a0da-45f8-a9a3-2c9419579af7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336169, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.716 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <uuid>6f82c687-5361-4922-85c2-ea9d7e48a39a</uuid>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <name>instance-0000004b</name>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1269325506-3</nova:name>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:16</nova:creationTime>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:user uuid="270ab376b3a74a75a6bab5c13d3100c0">tempest-ListServersNegativeTestJSON-1836187401-project-member</nova:user>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:project uuid="cd27f5e3b8cf47649e0c6593a61034a5">tempest-ListServersNegativeTestJSON-1836187401</nova:project>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <nova:port uuid="b7e52cb7-a446-4784-b8e5-3819b3120f8a">
Oct 07 14:19:17 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="serial">6f82c687-5361-4922-85c2-ea9d7e48a39a</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="uuid">6f82c687-5361-4922-85c2-ea9d7e48a39a</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6f82c687-5361-4922-85c2-ea9d7e48a39a_disk">
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config">
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:88:aa:95"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <target dev="tapb7e52cb7-a4"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/console.log" append="off"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.722 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Preparing to wait for external event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.722 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.723 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.723 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.724 2 DEBUG nova.virt.libvirt.vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-3',id=75,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:09Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=6f82c687-5361-4922-85c2-ea9d7e48a39a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.724 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.725 2 DEBUG nova.network.os_vif_util [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.725 2 DEBUG os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.726 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-bb276692-f47b-4c86-864a-f3654cf63f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.726 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.726 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.727 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.727 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.727 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] No waiting events found dispatching network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.727 2 WARNING nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received unexpected event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 for instance with vm_state active and task_state None.
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.727 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-changed-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.728 2 DEBUG nova.compute.manager [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Refreshing instance network info cache due to event network-changed-b7e52cb7-a446-4784-b8e5-3819b3120f8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.728 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.728 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.728 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Refreshing network info cache for port b7e52cb7-a446-4784-b8e5-3819b3120f8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[494fe40a-2b60-43cd-a179-275d3df8c4d4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733986, 'tstamp': 733986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336170, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733990, 'tstamp': 733990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336170, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.733 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.733 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.733 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.734 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.734 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.734 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.735 2 INFO nova.compute.manager [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Terminating instance
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb7fe9a9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb7fe9a9-80, col_values=(('external_ids', {'iface-id': '544e29c8-c520-4608-9347-a7d53124e335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.737 2 DEBUG nova.compute.manager [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.737 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7e52cb7-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.741 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7e52cb7-a4, col_values=(('external_ids', {'iface-id': 'b7e52cb7-a446-4784-b8e5-3819b3120f8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:aa:95', 'vm-uuid': '6f82c687-5361-4922-85c2-ea9d7e48a39a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.7429] manager: (tapb7e52cb7-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.749 2 INFO os_vif [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4')
Oct 07 14:19:17 compute-0 kernel: tapab477d11-03 (unregistering): left promiscuous mode
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.7915] device (tapab477d11-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00723|binding|INFO|Releasing lport ab477d11-03cd-475a-b8af-5317e974190d from this chassis (sb_readonly=0)
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00724|binding|INFO|Setting lport ab477d11-03cd-475a-b8af-5317e974190d down in Southbound
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00725|binding|INFO|Removing iface tapab477d11-03 ovn-installed in OVS
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.810 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:90:1f 10.100.0.13'], port_security=['fa:16:3e:7c:90:1f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0fba6a7-6467-4ac2-99a6-a2dee485e570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ab477d11-03cd-475a-b8af-5317e974190d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.812 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ab477d11-03cd-475a-b8af-5317e974190d in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.813 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6692b777-8c3f-47b2-9a67-3efff279d953
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.827 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.828 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.829 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] No VIF found with MAC fa:16:3e:88:aa:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.829 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Using config drive
Oct 07 14:19:17 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.8403] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:17 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.839 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd697e01-ffc7-40ed-98e1-6c096907c902]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000046.scope: Consumed 5.251s CPU time.
Oct 07 14:19:17 compute-0 systemd-machined[214580]: Machine qemu-85-instance-00000046 terminated.
Oct 07 14:19:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 401 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.7 MiB/s wr, 295 op/s
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00726|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00727|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:19:17 compute-0 ovn_controller[151684]: 2025-10-07T14:19:17Z|00728|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2c7a60-e324-490b-91cf-fabb416bdbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.881 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.881 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.882 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e2dc5d-b0f7-4528-8856-af97ac97cf09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:19:17 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000042.scope: Consumed 13.137s CPU time.
Oct 07 14:19:17 compute-0 systemd-machined[214580]: Machine qemu-84-instance-00000042 terminated.
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.914 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5d0f40-d370-4df3-9b1d-7b4759b63286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.931 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01ba84f2-91bc-455d-b52c-0bc30deff272]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6692b777-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:25:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733512, 'reachable_time': 18417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336254, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.948 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6bf483-3fa4-43a8-97d4-483cb3d697c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733525, 'tstamp': 733525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336255, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6692b777-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733528, 'tstamp': 733528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336255, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.966 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6692b777-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.966 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.966 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6692b777-80, col_values=(('external_ids', {'iface-id': 'c2ea13ef-38e7-4acc-a428-c933f929d020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.967 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.974 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.975 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.976 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e931ee9-a252-49eb-b0f2-0385460af99f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:17.977 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:19:17 compute-0 NetworkManager[44949]: <info>  [1759846757.9844] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.987 2 INFO nova.virt.libvirt.driver [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Instance destroyed successfully.
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.988 2 DEBUG nova.objects.instance [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'resources' on Instance uuid f0fba6a7-6467-4ac2-99a6-a2dee485e570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.996 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:19:17 compute-0 nova_compute[259550]: 2025-10-07 14:19:17.997 2 DEBUG nova.objects.instance [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.009 2 DEBUG nova.virt.libvirt.vif [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-1',id=70,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:19:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:13Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=f0fba6a7-6467-4ac2-99a6-a2dee485e570,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.010 2 DEBUG nova.network.os_vif_util [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ab477d11-03cd-475a-b8af-5317e974190d", "address": "fa:16:3e:7c:90:1f", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab477d11-03", "ovs_interfaceid": "ab477d11-03cd-475a-b8af-5317e974190d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.010 2 DEBUG nova.network.os_vif_util [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.011 2 DEBUG os_vif [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.013 2 DEBUG nova.virt.libvirt.vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.014 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.014 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.015 2 DEBUG os_vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab477d11-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb8904b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.033 2 INFO os_vif [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:90:1f,bridge_name='br-int',has_traffic_filtering=True,id=ab477d11-03cd-475a-b8af-5317e974190d,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab477d11-03')
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.050 2 INFO os_vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.059 2 DEBUG nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start _get_guest_xml network_info=[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.064 2 WARNING nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.074 2 DEBUG nova.virt.libvirt.host [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.075 2 DEBUG nova.virt.libvirt.host [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.081 2 DEBUG nova.virt.libvirt.host [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.082 2 DEBUG nova.virt.libvirt.host [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.082 2 DEBUG nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.082 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.083 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.084 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.084 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.084 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.084 2 DEBUG nova.virt.hardware [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.084 2 DEBUG nova.objects.instance [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.102 2 DEBUG oslo_concurrency.processutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [NOTICE]   (333622) : haproxy version is 2.8.14-c23fe91
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [NOTICE]   (333622) : path to executable is /usr/sbin/haproxy
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [WARNING]  (333622) : Exiting Master process...
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [WARNING]  (333622) : Exiting Master process...
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [ALERT]    (333622) : Current worker (333624) exited with code 143 (Terminated)
Oct 07 14:19:18 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[333618]: [WARNING]  (333622) : All workers exited. Exiting... (0)
Oct 07 14:19:18 compute-0 systemd[1]: libpod-410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7.scope: Deactivated successfully.
Oct 07 14:19:18 compute-0 podman[336325]: 2025-10-07 14:19:18.146707966 +0000 UTC m=+0.056987887 container died 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7-userdata-shm.mount: Deactivated successfully.
Oct 07 14:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d12bd0f3e263f087f558ed997876aeac2f3e4d9469628bf34192ae23ab472ea-merged.mount: Deactivated successfully.
Oct 07 14:19:18 compute-0 podman[336325]: 2025-10-07 14:19:18.193727378 +0000 UTC m=+0.104007279 container cleanup 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 14:19:18 compute-0 systemd[1]: libpod-conmon-410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7.scope: Deactivated successfully.
Oct 07 14:19:18 compute-0 podman[336354]: 2025-10-07 14:19:18.264709863 +0000 UTC m=+0.046554941 container remove 410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e4114d-4f6f-4ccf-8785-fcf3fc6e3b21]: (4, ('Tue Oct  7 02:19:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7)\n410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7\nTue Oct  7 02:19:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7)\n410f1934f2b0609b6c236b0c10e44cdf4806eacc9474c5d37826e73f8d1d13c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.272 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf4331e-086f-4ab9-813c-e9d8bcefacc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.273 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:18 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.296 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3203ef45-0fdb-4018-988e-04b6b103f2cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.321 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e544dd2-55eb-4713-9152-631b9095c98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13c9173d-9cbc-4b80-b0ba-872a1e967689]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.338 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Creating config drive at /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.340 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90f83b21-bada-4cf9-ac1f-d6058ea60f86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731532, 'reachable_time': 40165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336389, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.346 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:19:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:18.346 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd2ba06-3f0f-4b4a-ba70-a50873abde38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.346 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpulg2lqtv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2304415721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:18 compute-0 ceph-mon[74295]: pgmap v1641: 305 pgs: 305 active+clean; 401 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.7 MiB/s wr, 295 op/s
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.474 2 INFO nova.virt.libvirt.driver [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Deleting instance files /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570_del
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.475 2 INFO nova.virt.libvirt.driver [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Deletion of /var/lib/nova/instances/f0fba6a7-6467-4ac2-99a6-a2dee485e570_del complete
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.479 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846758.4789789, bb276692-f47b-4c86-864a-f3654cf63f5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.479 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] VM Started (Lifecycle Event)
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.491 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpulg2lqtv" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.510 2 DEBUG nova.storage.rbd_utils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] rbd image 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.513 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051125853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.603 2 DEBUG oslo_concurrency.processutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.634 2 DEBUG oslo_concurrency.processutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.685 2 DEBUG oslo_concurrency.processutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config 6f82c687-5361-4922-85c2-ea9d7e48a39a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.686 2 INFO nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Deleting local config drive /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a/disk.config because it was imported into RBD.
Oct 07 14:19:18 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:19:18 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:19:18 compute-0 kernel: tapb7e52cb7-a4: entered promiscuous mode
Oct 07 14:19:18 compute-0 NetworkManager[44949]: <info>  [1759846758.7464] manager: (tapb7e52cb7-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 systemd-udevd[336298]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:18 compute-0 ovn_controller[151684]: 2025-10-07T14:19:18Z|00729|binding|INFO|Claiming lport b7e52cb7-a446-4784-b8e5-3819b3120f8a for this chassis.
Oct 07 14:19:18 compute-0 ovn_controller[151684]: 2025-10-07T14:19:18Z|00730|binding|INFO|b7e52cb7-a446-4784-b8e5-3819b3120f8a: Claiming fa:16:3e:88:aa:95 10.100.0.14
Oct 07 14:19:18 compute-0 NetworkManager[44949]: <info>  [1759846758.7635] device (tapb7e52cb7-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:18 compute-0 NetworkManager[44949]: <info>  [1759846758.7643] device (tapb7e52cb7-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:18 compute-0 ovn_controller[151684]: 2025-10-07T14:19:18Z|00731|binding|INFO|Setting lport b7e52cb7-a446-4784-b8e5-3819b3120f8a ovn-installed in OVS
Oct 07 14:19:18 compute-0 nova_compute[259550]: 2025-10-07 14:19:18.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:18 compute-0 systemd-machined[214580]: New machine qemu-90-instance-0000004b.
Oct 07 14:19:18 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004b.
Oct 07 14:19:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159027367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.105 2 DEBUG oslo_concurrency.processutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.106 2 DEBUG nova.virt.libvirt.vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.106 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.107 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.108 2 DEBUG nova.objects.instance [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00732|binding|INFO|Setting lport b7e52cb7-a446-4784-b8e5-3819b3120f8a up in Southbound
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.145 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:aa:95 10.100.0.14'], port_security=['fa:16:3e:88:aa:95 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6f82c687-5361-4922-85c2-ea9d7e48a39a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7e52cb7-a446-4784-b8e5-3819b3120f8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.146 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7e52cb7-a446-4784-b8e5-3819b3120f8a in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 bound to our chassis
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.148 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.170 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7d1ab4-fde5-4298-903e-88f999490ef9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.206 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ec45d91c-976b-4dde-8318-3462f9c2f4a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.210 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc19598-a90c-498b-be90-f422b704487e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.250 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[97f868e7-bd36-463e-a4e5-370e643c42b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42c5b427-5a8b-40c5-9c53-d4b29b96fdad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336538, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.289 2 DEBUG nova.compute.manager [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.289 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.289 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.290 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.290 2 DEBUG nova.compute.manager [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Processing event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.290 2 DEBUG nova.compute.manager [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.290 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.291 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.291 2 DEBUG oslo_concurrency.lockutils [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.291 2 DEBUG nova.compute.manager [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] No waiting events found dispatching network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.291 2 WARNING nova.compute.manager [req-c506a9ec-7422-40fb-a360-5370528d010c req-45793faa-591d-448d-bd27-8bfea5ad8f4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received unexpected event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e for instance with vm_state building and task_state spawning.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.294 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.317 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.322 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Instance spawned successfully.
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.321 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05be22f0-a7af-4b96-ab94-d5f3c527624a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733986, 'tstamp': 733986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336539, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733990, 'tstamp': 733990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336539, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.322 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.324 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.328 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb7fe9a9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.329 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.331 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb7fe9a9-80, col_values=(('external_ids', {'iface-id': '544e29c8-c520-4608-9347-a7d53124e335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.333 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.360 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.366 2 DEBUG nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <uuid>1d580bbb-a6fd-442c-8524-409ba5c344d0</uuid>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <name>instance-00000042</name>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-1718645090</nova:name>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:18</nova:creationTime>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <nova:port uuid="5fb8904b-227a-4dac-8c3a-82a23ba9832c">
Oct 07 14:19:19 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="serial">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="uuid">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk">
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config">
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:19 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:af:c7:50"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <target dev="tap5fb8904b-22"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log" append="off"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:19 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:19 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:19 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:19 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:19 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.371 2 DEBUG nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.372 2 DEBUG nova.virt.libvirt.driver [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.373 2 DEBUG nova.virt.libvirt.vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.373 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.374 2 DEBUG nova.network.os_vif_util [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.374 2 DEBUG os_vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.378 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.378 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.379 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.379 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.379 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.384 2 INFO nova.compute.manager [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Terminating instance
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.386 2 DEBUG nova.compute.manager [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1051125853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/159027367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.392 2 DEBUG nova.compute.manager [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.393 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.393 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.393 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.393 2 DEBUG nova.compute.manager [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Processing event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.394 2 DEBUG nova.compute.manager [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.394 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.394 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.395 2 DEBUG oslo_concurrency.lockutils [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.395 2 DEBUG nova.compute.manager [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.395 2 WARNING nova.compute.manager [req-07f60b60-fa1c-44fd-a401-9d64d7f42c81 req-884f7656-f6fe-4ed3-9ee0-e524c828540b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state building and task_state spawning.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.4027] manager: (tap5fb8904b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.412 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.420 2 INFO os_vif [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.423 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846758.480874, bb276692-f47b-4c86-864a-f3654cf63f5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.423 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] VM Paused (Lifecycle Event)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.427 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.434 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.435 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.436 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.436 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.437 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.438 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.442 2 INFO nova.compute.manager [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Took 1.70 seconds to destroy the instance on the hypervisor.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.442 2 DEBUG oslo.service.loopingcall [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.445 2 DEBUG nova.compute.manager [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.446 2 DEBUG nova.network.neutron [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.448 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.463 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.468 2 INFO nova.virt.libvirt.driver [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance spawned successfully.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.469 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:19 compute-0 kernel: tapec6afd34-2f (unregistering): left promiscuous mode
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.4965] device (tapec6afd34-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.508 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.509 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846759.3002818, 92c7d888-ced0-4649-9ad6-317350eb7225 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.509 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] VM Resumed (Lifecycle Event)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.5131] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00733|binding|INFO|Releasing lport ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 from this chassis (sb_readonly=0)
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00734|binding|INFO|Setting lport ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 down in Southbound
Oct 07 14:19:19 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00735|binding|INFO|Removing iface tapec6afd34-2f ovn-installed in OVS
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.523 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Took 11.85 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.524 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.524 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:86:d7 10.100.0.7'], port_security=['fa:16:3e:6e:86:d7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f874afd-fefc-434c-a46f-cf611ba65494', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6692b777-8c3f-47b2-9a67-3efff279d953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b9950f52692469d9b44d8201fd3b990', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0ddc283-993d-4ef2-9ca1-083b7e8e7595', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14eec310-5a63-4764-9a59-d973ac25767c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.525 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 in datapath 6692b777-8c3f-47b2-9a67-3efff279d953 unbound from our chassis
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.526 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6692b777-8c3f-47b2-9a67-3efff279d953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.529 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.530 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.530 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.530 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.531 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.531 2 DEBUG nova.virt.libvirt.driver [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[381f4b9b-e9ba-4f28-84a4-9e8dd9963f23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.535 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 namespace which is not needed anymore
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.5369] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.5410] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00736|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00737|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.557 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.561 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:19 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 07 14:19:19 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Consumed 7.061s CPU time.
Oct 07 14:19:19 compute-0 systemd-machined[214580]: New machine qemu-91-instance-00000042.
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00738|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:19:19 compute-0 ovn_controller[151684]: 2025-10-07T14:19:19Z|00739|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000042.
Oct 07 14:19:19 compute-0 systemd-machined[214580]: Machine qemu-86-instance-00000047 terminated.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.606 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.606 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846759.4184988, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.606 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Resumed (Lifecycle Event)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.643 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.652 2 INFO nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Took 12.94 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.653 2 DEBUG nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:19 compute-0 NetworkManager[44949]: <info>  [1759846759.6560] manager: (tapec6afd34-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.664 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.667 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Took 13.45 seconds to build instance.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.690 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.692 2 INFO nova.virt.libvirt.driver [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Instance destroyed successfully.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.692 2 DEBUG nova.objects.instance [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lazy-loading 'resources' on Instance uuid 6f874afd-fefc-434c-a46f-cf611ba65494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.718 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.724 2 DEBUG nova.virt.libvirt.vif [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1631138651',display_name='tempest-MultipleCreateTestJSON-server-1631138651-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1631138651-2',id=71,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-07T14:19:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b9950f52692469d9b44d8201fd3b990',ramdisk_id='',reservation_id='r-un1h9szj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1647657804',owner_user_name='tempest-MultipleCreateTestJSON-1647657804-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:13Z,user_data=None,user_id='4c51e9a60f0f4b28b9d5cfaa7a0180eb',uuid=6f874afd-fefc-434c-a46f-cf611ba65494,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.724 2 DEBUG nova.network.os_vif_util [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converting VIF {"id": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "address": "fa:16:3e:6e:86:d7", "network": {"id": "6692b777-8c3f-47b2-9a67-3efff279d953", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1093998964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b9950f52692469d9b44d8201fd3b990", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec6afd34-2f", "ovs_interfaceid": "ec6afd34-2f7e-47f1-a3b1-c8c920a624f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.725 2 DEBUG nova.network.os_vif_util [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.725 2 DEBUG os_vif [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec6afd34-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [NOTICE]   (335430) : haproxy version is 2.8.14-c23fe91
Oct 07 14:19:19 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [NOTICE]   (335430) : path to executable is /usr/sbin/haproxy
Oct 07 14:19:19 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [ALERT]    (335430) : Current worker (335436) exited with code 143 (Terminated)
Oct 07 14:19:19 compute-0 neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953[335396]: [WARNING]  (335430) : All workers exited. Exiting... (0)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:19 compute-0 systemd[1]: libpod-45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383.scope: Deactivated successfully.
Oct 07 14:19:19 compute-0 conmon[335396]: conmon 45f60bbb7ccbe1fa7a36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383.scope/container/memory.events
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.733 2 INFO os_vif [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:86:d7,bridge_name='br-int',has_traffic_filtering=True,id=ec6afd34-2f7e-47f1-a3b1-c8c920a624f5,network=Network(6692b777-8c3f-47b2-9a67-3efff279d953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec6afd34-2f')
Oct 07 14:19:19 compute-0 podman[336588]: 2025-10-07 14:19:19.736594166 +0000 UTC m=+0.066884018 container died 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.760 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Updated VIF entry in instance network info cache for port b7e52cb7-a446-4784-b8e5-3819b3120f8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.760 2 DEBUG nova.network.neutron [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Updating instance_info_cache with network_info: [{"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383-userdata-shm.mount: Deactivated successfully.
Oct 07 14:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a21a875943054032e28bfd0fed4b66bc15c250c60e3bc527416cbad3807ab00-merged.mount: Deactivated successfully.
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.775 2 INFO nova.compute.manager [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Took 14.04 seconds to build instance.
Oct 07 14:19:19 compute-0 podman[336588]: 2025-10-07 14:19:19.779510359 +0000 UTC m=+0.109800211 container cleanup 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.781 2 DEBUG oslo_concurrency.lockutils [req-dfeb8217-623f-4302-9205-cac8a4642502 req-dd5998cd-7667-4993-8039-888b4aa1d58b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6f82c687-5361-4922-85c2-ea9d7e48a39a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:19 compute-0 systemd[1]: libpod-conmon-45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383.scope: Deactivated successfully.
Oct 07 14:19:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 378 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.7 MiB/s wr, 403 op/s
Oct 07 14:19:19 compute-0 podman[336640]: 2025-10-07 14:19:19.864447093 +0000 UTC m=+0.059526144 container remove 45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.872 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03e6be0a-15c2-454e-b41d-c00d05095152]: (4, ('Tue Oct  7 02:19:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 (45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383)\n45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383\nTue Oct  7 02:19:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 (45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383)\n45f60bbb7ccbe1fa7a3682fbe6e81ae5894f45314dcada18da302857b0aa2383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.876 2 DEBUG oslo_concurrency.lockutils [None req-1e35adc2-7ba1-4f12-a7dc-3f291b9e0426 b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.875 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d956762-a799-486c-bc48-326f173a2ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.877 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6692b777-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:19 compute-0 kernel: tap6692b777-80: left promiscuous mode
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 nova_compute[259550]: 2025-10-07 14:19:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.902 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[21a95d49-dacf-4b9c-98d2-88c289a4c2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.931 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[48877cfa-0581-44b1-9068-d1639dea0df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.933 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee266bd9-5ac8-42a8-b6a2-81deb2c913d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.955 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de0bfdae-0060-4ba3-ac42-28a0a72e24ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733503, 'reachable_time': 22183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336655, 'error': None, 'target': 'ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d6692b777\x2d8c3f\x2d47b2\x2d9a67\x2d3efff279d953.mount: Deactivated successfully.
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.960 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6692b777-8c3f-47b2-9a67-3efff279d953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.960 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[70cb7e47-2647-4c6b-867d-6744ff10bf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.963 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.966 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52a89176-35a3-43bd-87c8-4205b7fdd0d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.981 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.984 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.985 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[822e5e3d-9e68-4cb4-b251-1c82089d7c62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:19.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[602103fa-7640-4491-ba81-3a38993f0ecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.004 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f27ac9-7e95-4473-bf04-d0c29053a43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4dca707f-4cb7-4edd-9239-0a19ee6a3f5c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.035 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846760.035484, 6f82c687-5361-4922-85c2-ea9d7e48a39a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.036 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] VM Started (Lifecycle Event)
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.062 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.062 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[36dab28d-5b5f-45e8-80c3-1d77d92662e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.065 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846760.0355546, 6f82c687-5361-4922-85c2-ea9d7e48a39a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.066 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] VM Paused (Lifecycle Event)
Oct 07 14:19:20 compute-0 NetworkManager[44949]: <info>  [1759846760.0691] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/325)
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.069 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf77d27-d979-4192-955f-6a2016f56a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.088 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.092 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.119 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[58fb57f9-df6c-40f3-b0c1-c08b37fabba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.122 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a24593ba-d2e3-49da-b99d-673fa4548ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.127 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:20 compute-0 NetworkManager[44949]: <info>  [1759846760.1582] device (tapc21c541a-00): carrier: link connected
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.165 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[899378aa-a0d4-4079-943b-96bc7b088d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84efb142-73fb-4451-8555-b085a0935ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336722, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.214 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41fa014c-54f3-4b8e-b7bf-f97d05c82105]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734372, 'tstamp': 734372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336723, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.232 2 INFO nova.virt.libvirt.driver [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Deleting instance files /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494_del
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.233 2 INFO nova.virt.libvirt.driver [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Deletion of /var/lib/nova/instances/6f874afd-fefc-434c-a46f-cf611ba65494_del complete
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.236 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d620db0-3154-4bad-804f-5fb5e194b715]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336724, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7431283d-4270-4de7-93ae-2ad83019b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.303 2 INFO nova.compute.manager [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.304 2 DEBUG oslo.service.loopingcall [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.304 2 DEBUG nova.compute.manager [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.304 2 DEBUG nova.network.neutron [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[353dd97a-21eb-4c64-a3d1-fca182632bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.352 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.352 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:20 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:20 compute-0 NetworkManager[44949]: <info>  [1759846760.3557] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.357 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:20 compute-0 ovn_controller[151684]: 2025-10-07T14:19:20Z|00740|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.362 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed6ffb5-f257-4f50-9f11-8c2d3b8cb2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.365 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:19:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:20.366 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:20 compute-0 ceph-mon[74295]: pgmap v1642: 305 pgs: 305 active+clean; 378 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.7 MiB/s wr, 403 op/s
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.727 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1d580bbb-a6fd-442c-8524-409ba5c344d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.728 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846760.7271342, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.728 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.743 2 DEBUG nova.compute.manager [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.747 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance rebooted successfully.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.747 2 DEBUG nova.compute.manager [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.753 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.755 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:20 compute-0 podman[336755]: 2025-10-07 14:19:20.757801273 +0000 UTC m=+0.061457575 container create 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.776 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.777 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846760.7428544, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.777 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:19:20 compute-0 systemd[1]: Started libpod-conmon-7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3.scope.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.812 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.815 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:20 compute-0 podman[336755]: 2025-10-07 14:19:20.722973372 +0000 UTC m=+0.026629704 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.819 2 DEBUG oslo_concurrency.lockutils [None req-3fd1f5de-099c-40c4-a7fd-adc5a648a47c 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.835 2 DEBUG nova.network.neutron [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd32b143c8095c3270c6918928ef66c799fe7c0f958ed20ae7ad2da851af997/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:20 compute-0 podman[336755]: 2025-10-07 14:19:20.851061087 +0000 UTC m=+0.154717419 container init 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:20 compute-0 podman[336755]: 2025-10-07 14:19:20.858316848 +0000 UTC m=+0.161973150 container start 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [NOTICE]   (336774) : New worker (336776) forked
Oct 07 14:19:20 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [NOTICE]   (336774) : Loading success.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.888 2 INFO nova.compute.manager [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Took 1.44 seconds to deallocate network for instance.
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.948 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:20 compute-0 nova_compute[259550]: 2025-10-07 14:19:20.948 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.149 2 DEBUG oslo_concurrency.processutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.303 2 DEBUG nova.network.neutron [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.328 2 INFO nova.compute.manager [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Took 1.02 seconds to deallocate network for instance.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.388 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.422 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.423 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.424 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.424 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.425 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Processing event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.425 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.425 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.426 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.426 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.427 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] No waiting events found dispatching network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.427 2 WARNING nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received unexpected event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff for instance with vm_state building and task_state spawning.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.427 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-vif-unplugged-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.428 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.428 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.429 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.429 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] No waiting events found dispatching network-vif-unplugged-ab477d11-03cd-475a-b8af-5317e974190d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.430 2 WARNING nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received unexpected event network-vif-unplugged-ab477d11-03cd-475a-b8af-5317e974190d for instance with vm_state deleted and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.430 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.430 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.431 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.431 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.432 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] No waiting events found dispatching network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.432 2 WARNING nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received unexpected event network-vif-plugged-ab477d11-03cd-475a-b8af-5317e974190d for instance with vm_state deleted and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.432 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.432 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.433 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.433 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.434 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Processing event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.434 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.434 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.435 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.435 2 DEBUG oslo_concurrency.lockutils [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.435 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] No waiting events found dispatching network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.436 2 WARNING nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received unexpected event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a for instance with vm_state building and task_state spawning.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.436 2 DEBUG nova.compute.manager [req-c76e99b4-1315-442f-a8fd-81d1e10679ff req-54879b64-48a8-43f3-a8e8-02b75e701075 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Received event network-vif-deleted-ab477d11-03cd-475a-b8af-5317e974190d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.438 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.439 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.458 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846761.4557865, bb276692-f47b-4c86-864a-f3654cf63f5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.460 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] VM Resumed (Lifecycle Event)
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.463 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.463 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.472 2 INFO nova.virt.libvirt.driver [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Instance spawned successfully.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.473 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.476 2 INFO nova.virt.libvirt.driver [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Instance spawned successfully.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.476 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.602 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.605 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.606 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.606 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.607 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.607 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.607 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.607 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.608 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.608 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.608 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.609 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.609 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.609 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-vif-unplugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.609 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.610 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.610 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.610 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] No waiting events found dispatching network-vif-unplugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.611 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received unexpected event network-vif-unplugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 for instance with vm_state deleted and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.611 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.611 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.612 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.612 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.612 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] No waiting events found dispatching network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.613 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received unexpected event network-vif-plugged-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 for instance with vm_state deleted and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.613 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.613 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.613 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.614 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.614 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.614 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.615 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.615 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.615 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.616 2 DEBUG oslo_concurrency.lockutils [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.616 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.616 2 WARNING nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.616 2 DEBUG nova.compute.manager [req-5c6d9144-6a00-460a-8dde-c12490397d32 req-023e8389-68e7-4e38-9294-23d47e7be6b5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Received event network-vif-deleted-ec6afd34-2f7e-47f1-a3b1-c8c920a624f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561409379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.628 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.629 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.630 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.630 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.631 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.631 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.639 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.639 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.640 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.640 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.641 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.641 2 DEBUG nova.virt.libvirt.driver [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.646 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.652 2 DEBUG oslo_concurrency.processutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.660 2 DEBUG nova.compute.provider_tree [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/561409379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.775 2 DEBUG nova.scheduler.client.report [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.794 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.794 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846761.4679682, 6f82c687-5361-4922-85c2-ea9d7e48a39a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.795 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] VM Resumed (Lifecycle Event)
Oct 07 14:19:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 335 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 346 op/s
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.854 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.856 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.865 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.868 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.876 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Took 13.06 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.877 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.901 2 INFO nova.scheduler.client.report [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Deleted allocations for instance f0fba6a7-6467-4ac2-99a6-a2dee485e570
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.903 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.909 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Took 11.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:21 compute-0 nova_compute[259550]: 2025-10-07 14:19:21.909 2 DEBUG nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.014 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Took 15.77 seconds to build instance.
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.029 2 INFO nova.compute.manager [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Took 15.76 seconds to build instance.
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.032 2 DEBUG oslo_concurrency.lockutils [None req-ab60f9e6-bcb0-4800-a8c8-007713d652ff 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "f0fba6a7-6467-4ac2-99a6-a2dee485e570" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.047 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.059 2 DEBUG oslo_concurrency.lockutils [None req-2344519a-6167-44aa-afdc-bd035b740b90 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.084 2 DEBUG oslo_concurrency.processutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.568 2 INFO nova.compute.manager [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Rescuing
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.569 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.569 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.569 2 DEBUG nova.network.neutron [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:19:22
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images']
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547365161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.757 2 DEBUG oslo_concurrency.processutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.762 2 DEBUG nova.compute.provider_tree [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.784 2 DEBUG nova.scheduler.client.report [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.833 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:22 compute-0 ceph-mon[74295]: pgmap v1643: 305 pgs: 305 active+clean; 335 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 346 op/s
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.873 2 INFO nova.scheduler.client.report [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Deleted allocations for instance 6f874afd-fefc-434c-a46f-cf611ba65494
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:19:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:19:22 compute-0 nova_compute[259550]: 2025-10-07 14:19:22.977 2 DEBUG oslo_concurrency.lockutils [None req-27c70522-b613-44ea-addd-002f8cd58578 4c51e9a60f0f4b28b9d5cfaa7a0180eb 1b9950f52692469d9b44d8201fd3b990 - - default default] Lock "6f874afd-fefc-434c-a46f-cf611ba65494" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.4 MiB/s wr, 488 op/s
Oct 07 14:19:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/547365161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:24 compute-0 nova_compute[259550]: 2025-10-07 14:19:24.075 2 DEBUG nova.network.neutron [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:24 compute-0 nova_compute[259550]: 2025-10-07 14:19:24.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:24 compute-0 ceph-mon[74295]: pgmap v1644: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.4 MiB/s wr, 488 op/s
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.444 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.482 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.482 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.483 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.483 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.483 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.484 2 INFO nova.compute.manager [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Terminating instance
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.485 2 DEBUG nova.compute.manager [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:25 compute-0 kernel: tap6b980feb-2a (unregistering): left promiscuous mode
Oct 07 14:19:25 compute-0 NetworkManager[44949]: <info>  [1759846765.5371] device (tap6b980feb-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 ovn_controller[151684]: 2025-10-07T14:19:25Z|00741|binding|INFO|Releasing lport 6b980feb-2acb-4ed6-b48f-045cc3b7caff from this chassis (sb_readonly=0)
Oct 07 14:19:25 compute-0 ovn_controller[151684]: 2025-10-07T14:19:25Z|00742|binding|INFO|Setting lport 6b980feb-2acb-4ed6-b48f-045cc3b7caff down in Southbound
Oct 07 14:19:25 compute-0 ovn_controller[151684]: 2025-10-07T14:19:25Z|00743|binding|INFO|Removing iface tap6b980feb-2a ovn-installed in OVS
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.558 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:6e:23 10.100.0.13'], port_security=['fa:16:3e:cf:6e:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb276692-f47b-4c86-864a-f3654cf63f5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b980feb-2acb-4ed6-b48f-045cc3b7caff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.559 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b980feb-2acb-4ed6-b48f-045cc3b7caff in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 unbound from our chassis
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.561 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a72592f-e1fa-4166-8453-310ee266472e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 07 14:19:25 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000049.scope: Consumed 4.702s CPU time.
Oct 07 14:19:25 compute-0 systemd-machined[214580]: Machine qemu-89-instance-00000049 terminated.
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.613 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[26d8909f-d7cd-405e-8a2c-520f9ccfdc72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.616 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7e53e99f-5109-43d2-a922-dd1939922144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.649 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2aa705-f118-4297-87f2-903d854b8191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0a42f7-f6a8-4a5a-81f8-d58de979472c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336840, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a06fccf-1648-434a-9af8-81c489af635e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733986, 'tstamp': 733986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336841, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733990, 'tstamp': 733990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336841, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.690 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.697 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb7fe9a9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.697 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.697 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb7fe9a9-80, col_values=(('external_ids', {'iface-id': '544e29c8-c520-4608-9347-a7d53124e335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:25.698 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.720 2 INFO nova.virt.libvirt.driver [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Instance destroyed successfully.
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.722 2 DEBUG nova.objects.instance [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'resources' on Instance uuid bb276692-f47b-4c86-864a-f3654cf63f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.744 2 DEBUG nova.virt.libvirt.vif [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-1',id=73,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:19:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:21Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=bb276692-f47b-4c86-864a-f3654cf63f5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.745 2 DEBUG nova.network.os_vif_util [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "address": "fa:16:3e:cf:6e:23", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b980feb-2a", "ovs_interfaceid": "6b980feb-2acb-4ed6-b48f-045cc3b7caff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.746 2 DEBUG nova.network.os_vif_util [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.747 2 DEBUG os_vif [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b980feb-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.771 2 INFO os_vif [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:6e:23,bridge_name='br-int',has_traffic_filtering=True,id=6b980feb-2acb-4ed6-b48f-045cc3b7caff,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b980feb-2a')
Oct 07 14:19:25 compute-0 nova_compute[259550]: 2025-10-07 14:19:25.804 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:19:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 1.3 MiB/s wr, 560 op/s
Oct 07 14:19:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.298 2 DEBUG nova.compute.manager [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-unplugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.298 2 DEBUG oslo_concurrency.lockutils [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.298 2 DEBUG oslo_concurrency.lockutils [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.298 2 DEBUG oslo_concurrency.lockutils [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.299 2 DEBUG nova.compute.manager [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] No waiting events found dispatching network-vif-unplugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.299 2 DEBUG nova.compute.manager [req-3700d1b1-f030-473c-acd8-cb1a8419b09c req-91874b8a-67b0-4153-8a27-9f0801f22c75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-unplugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:19:26 compute-0 nova_compute[259550]: 2025-10-07 14:19:26.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:27 compute-0 ceph-mon[74295]: pgmap v1645: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 1.3 MiB/s wr, 560 op/s
Oct 07 14:19:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 43 KiB/s wr, 477 op/s
Oct 07 14:19:28 compute-0 ceph-mon[74295]: pgmap v1646: 305 pgs: 305 active+clean; 308 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 43 KiB/s wr, 477 op/s
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.409 2 DEBUG nova.compute.manager [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.411 2 DEBUG oslo_concurrency.lockutils [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.411 2 DEBUG oslo_concurrency.lockutils [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.412 2 DEBUG oslo_concurrency.lockutils [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.413 2 DEBUG nova.compute.manager [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] No waiting events found dispatching network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.413 2 WARNING nova.compute.manager [req-788d728a-837d-4933-98fe-0c634bd667da req-f1789917-ba59-48b7-bdf4-f67d850a0e3d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received unexpected event network-vif-plugged-6b980feb-2acb-4ed6-b48f-045cc3b7caff for instance with vm_state active and task_state deleting.
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.479 2 INFO nova.virt.libvirt.driver [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Deleting instance files /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a_del
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.481 2 INFO nova.virt.libvirt.driver [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Deletion of /var/lib/nova/instances/bb276692-f47b-4c86-864a-f3654cf63f5a_del complete
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.573 2 INFO nova.compute.manager [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Took 3.09 seconds to destroy the instance on the hypervisor.
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.574 2 DEBUG oslo.service.loopingcall [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.574 2 DEBUG nova.compute.manager [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:19:28 compute-0 nova_compute[259550]: 2025-10-07 14:19:28.575 2 DEBUG nova.network.neutron [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:19:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 278 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 43 KiB/s wr, 499 op/s
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.138 2 DEBUG nova.network.neutron [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.178 2 DEBUG nova.compute.manager [req-2dcdaa71-8832-4ada-9e56-14395fb24442 req-ea2e9847-d485-455b-b9b7-e6df9aecd41d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Received event network-vif-deleted-6b980feb-2acb-4ed6-b48f-045cc3b7caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.179 2 INFO nova.compute.manager [req-2dcdaa71-8832-4ada-9e56-14395fb24442 req-ea2e9847-d485-455b-b9b7-e6df9aecd41d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Neutron deleted interface 6b980feb-2acb-4ed6-b48f-045cc3b7caff; detaching it from the instance and deleting it from the info cache
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.179 2 DEBUG nova.network.neutron [req-2dcdaa71-8832-4ada-9e56-14395fb24442 req-ea2e9847-d485-455b-b9b7-e6df9aecd41d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.199 2 INFO nova.compute.manager [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Took 1.62 seconds to deallocate network for instance.
Oct 07 14:19:30 compute-0 ovn_controller[151684]: 2025-10-07T14:19:30Z|00744|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:19:30 compute-0 ovn_controller[151684]: 2025-10-07T14:19:30Z|00745|binding|INFO|Releasing lport 544e29c8-c520-4608-9347-a7d53124e335 from this chassis (sb_readonly=0)
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.207 2 DEBUG nova.compute.manager [req-2dcdaa71-8832-4ada-9e56-14395fb24442 req-ea2e9847-d485-455b-b9b7-e6df9aecd41d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Detach interface failed, port_id=6b980feb-2acb-4ed6-b48f-045cc3b7caff, reason: Instance bb276692-f47b-4c86-864a-f3654cf63f5a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.275 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.276 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.423 2 DEBUG oslo_concurrency.processutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:30 compute-0 ceph-mon[74295]: pgmap v1647: 305 pgs: 305 active+clean; 278 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 43 KiB/s wr, 499 op/s
Oct 07 14:19:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860232456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.985 2 DEBUG oslo_concurrency.processutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:30 compute-0 nova_compute[259550]: 2025-10-07 14:19:30.994 2 DEBUG nova.compute.provider_tree [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:31 compute-0 nova_compute[259550]: 2025-10-07 14:19:31.037 2 DEBUG nova.scheduler.client.report [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:31 compute-0 nova_compute[259550]: 2025-10-07 14:19:31.195 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:31 compute-0 nova_compute[259550]: 2025-10-07 14:19:31.228 2 INFO nova.scheduler.client.report [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Deleted allocations for instance bb276692-f47b-4c86-864a-f3654cf63f5a
Oct 07 14:19:31 compute-0 nova_compute[259550]: 2025-10-07 14:19:31.322 2 DEBUG oslo_concurrency.lockutils [None req-7e4ede11-149b-4994-9c03-3c37dc4a8dc5 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "bb276692-f47b-4c86-864a-f3654cf63f5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:31 compute-0 nova_compute[259550]: 2025-10-07 14:19:31.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 262 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 27 KiB/s wr, 395 op/s
Oct 07 14:19:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2860232456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001806039692811096 of space, bias 1.0, pg target 0.5418119078433288 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:19:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:19:32 compute-0 ovn_controller[151684]: 2025-10-07T14:19:32Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:84:e7 10.100.0.3
Oct 07 14:19:32 compute-0 ovn_controller[151684]: 2025-10-07T14:19:32Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:84:e7 10.100.0.3
Oct 07 14:19:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:19:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762505346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:19:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:19:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762505346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:19:32 compute-0 ceph-mon[74295]: pgmap v1648: 305 pgs: 305 active+clean; 262 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 27 KiB/s wr, 395 op/s
Oct 07 14:19:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/762505346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:19:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/762505346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:19:32 compute-0 nova_compute[259550]: 2025-10-07 14:19:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:32 compute-0 nova_compute[259550]: 2025-10-07 14:19:32.985 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846757.9840782, f0fba6a7-6467-4ac2-99a6-a2dee485e570 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:32 compute-0 nova_compute[259550]: 2025-10-07 14:19:32.985 2 INFO nova.compute.manager [-] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] VM Stopped (Lifecycle Event)
Oct 07 14:19:33 compute-0 nova_compute[259550]: 2025-10-07 14:19:33.003 2 DEBUG nova.compute.manager [None req-f5a64118-549a-44a6-8c94-5174adc3588a - - - - - -] [instance: f0fba6a7-6467-4ac2-99a6-a2dee485e570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:33 compute-0 ovn_controller[151684]: 2025-10-07T14:19:33Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:19:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 332 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.2 MiB/s wr, 492 op/s
Oct 07 14:19:33 compute-0 nova_compute[259550]: 2025-10-07 14:19:33.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:33 compute-0 nova_compute[259550]: 2025-10-07 14:19:33.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:19:34 compute-0 podman[336893]: 2025-10-07 14:19:34.095042701 +0000 UTC m=+0.072297221 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 07 14:19:34 compute-0 podman[336894]: 2025-10-07 14:19:34.112854071 +0000 UTC m=+0.090014509 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846759.6783879, 6f874afd-fefc-434c-a46f-cf611ba65494 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.692 2 INFO nova.compute.manager [-] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] VM Stopped (Lifecycle Event)
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.710 2 DEBUG nova.compute.manager [None req-9952b181-1747-42a5-a440-80a27b9794ef - - - - - -] [instance: 6f874afd-fefc-434c-a46f-cf611ba65494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.715 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.716 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.716 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.716 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.717 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.718 2 INFO nova.compute.manager [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Terminating instance
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.719 2 DEBUG nova.compute.manager [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:34 compute-0 kernel: tap25f2e3b0-85 (unregistering): left promiscuous mode
Oct 07 14:19:34 compute-0 NetworkManager[44949]: <info>  [1759846774.7848] device (tap25f2e3b0-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 ovn_controller[151684]: 2025-10-07T14:19:34Z|00746|binding|INFO|Releasing lport 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e from this chassis (sb_readonly=0)
Oct 07 14:19:34 compute-0 ovn_controller[151684]: 2025-10-07T14:19:34Z|00747|binding|INFO|Setting lport 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e down in Southbound
Oct 07 14:19:34 compute-0 ovn_controller[151684]: 2025-10-07T14:19:34Z|00748|binding|INFO|Removing iface tap25f2e3b0-85 ovn-installed in OVS
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.798 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:84:e7 10.100.0.3'], port_security=['fa:16:3e:19:84:e7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '92c7d888-ced0-4649-9ad6-317350eb7225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.800 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 25f2e3b0-8524-4ad6-aa04-e562aad3ab5e in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 unbound from our chassis
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.802 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db7fe9a9-837e-4f1a-8110-d3f657219e12
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[669f078a-8a74-49b8-8cdc-d832001eb809]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 07 14:19:34 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004a.scope: Consumed 13.299s CPU time.
Oct 07 14:19:34 compute-0 systemd-machined[214580]: Machine qemu-88-instance-0000004a terminated.
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb24e63-33b3-45a3-a35b-2ded4d481277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.852 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c13a69-3f97-43db-92cd-f0059f1e7056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.879 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.879 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.880 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.880 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.880 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.880 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[29a4a9ee-b4dc-4652-93b6-589b09aee46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.881 2 INFO nova.compute.manager [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Terminating instance
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.882 2 DEBUG nova.compute.manager [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.898 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b42c27-99c7-4e42-bfe5-052a7068ece0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb7fe9a9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:c4:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733972, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336945, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.911 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e2e819-7379-4b84-8951-222162c272e4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733986, 'tstamp': 733986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336946, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb7fe9a9-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733990, 'tstamp': 733990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336946, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.912 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.918 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb7fe9a9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.918 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.919 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb7fe9a9-80, col_values=(('external_ids', {'iface-id': '544e29c8-c520-4608-9347-a7d53124e335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:34.919 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.953 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Instance destroyed successfully.
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.954 2 DEBUG nova.objects.instance [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'resources' on Instance uuid 92c7d888-ced0-4649-9ad6-317350eb7225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.972 2 DEBUG nova.virt.libvirt.vif [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-2',id=74,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-07T14:19:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:19Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=92c7d888-ced0-4649-9ad6-317350eb7225,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.972 2 DEBUG nova.network.os_vif_util [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "address": "fa:16:3e:19:84:e7", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f2e3b0-85", "ovs_interfaceid": "25f2e3b0-8524-4ad6-aa04-e562aad3ab5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.973 2 DEBUG nova.network.os_vif_util [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.973 2 DEBUG os_vif [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.975 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f2e3b0-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.979 2 INFO os_vif [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:84:e7,bridge_name='br-int',has_traffic_filtering=True,id=25f2e3b0-8524-4ad6-aa04-e562aad3ab5e,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f2e3b0-85')
Oct 07 14:19:34 compute-0 nova_compute[259550]: 2025-10-07 14:19:34.993 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:34 compute-0 ceph-mon[74295]: pgmap v1649: 305 pgs: 305 active+clean; 332 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.2 MiB/s wr, 492 op/s
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.021 2 DEBUG nova.compute.manager [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-unplugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.022 2 DEBUG oslo_concurrency.lockutils [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.022 2 DEBUG oslo_concurrency.lockutils [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.022 2 DEBUG oslo_concurrency.lockutils [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.022 2 DEBUG nova.compute.manager [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] No waiting events found dispatching network-vif-unplugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.023 2 DEBUG nova.compute.manager [req-01e8cecf-5b48-49fd-ab6c-7fe6fd3a2b22 req-c42a5803-503d-4121-8a02-8c079971581b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-unplugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.028 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:35 compute-0 kernel: tapb7e52cb7-a4 (unregistering): left promiscuous mode
Oct 07 14:19:35 compute-0 NetworkManager[44949]: <info>  [1759846775.1422] device (tapb7e52cb7-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:35 compute-0 ovn_controller[151684]: 2025-10-07T14:19:35Z|00749|binding|INFO|Releasing lport b7e52cb7-a446-4784-b8e5-3819b3120f8a from this chassis (sb_readonly=0)
Oct 07 14:19:35 compute-0 ovn_controller[151684]: 2025-10-07T14:19:35Z|00750|binding|INFO|Setting lport b7e52cb7-a446-4784-b8e5-3819b3120f8a down in Southbound
Oct 07 14:19:35 compute-0 ovn_controller[151684]: 2025-10-07T14:19:35Z|00751|binding|INFO|Removing iface tapb7e52cb7-a4 ovn-installed in OVS
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.195 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:aa:95 10.100.0.14'], port_security=['fa:16:3e:88:aa:95 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6f82c687-5361-4922-85c2-ea9d7e48a39a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27f5e3b8cf47649e0c6593a61034a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ea18a51-0d6c-4941-8d6c-7aa9d90c61f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=144f60fd-f027-4212-aa62-bf6874cc4de3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7e52cb7-a446-4784-b8e5-3819b3120f8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.196 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7e52cb7-a446-4784-b8e5-3819b3120f8a in datapath db7fe9a9-837e-4f1a-8110-d3f657219e12 unbound from our chassis
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.197 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db7fe9a9-837e-4f1a-8110-d3f657219e12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.198 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46b5d7c3-6ddc-4dcb-9465-d42833f618f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.198 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12 namespace which is not needed anymore
Oct 07 14:19:35 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct 07 14:19:35 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Consumed 12.766s CPU time.
Oct 07 14:19:35 compute-0 systemd-machined[214580]: Machine qemu-90-instance-0000004b terminated.
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.318 2 INFO nova.virt.libvirt.driver [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Instance destroyed successfully.
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.318 2 DEBUG nova.objects.instance [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lazy-loading 'resources' on Instance uuid 6f82c687-5361-4922-85c2-ea9d7e48a39a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [NOTICE]   (336050) : haproxy version is 2.8.14-c23fe91
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [NOTICE]   (336050) : path to executable is /usr/sbin/haproxy
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [WARNING]  (336050) : Exiting Master process...
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [WARNING]  (336050) : Exiting Master process...
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [ALERT]    (336050) : Current worker (336052) exited with code 143 (Terminated)
Oct 07 14:19:35 compute-0 neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12[336046]: [WARNING]  (336050) : All workers exited. Exiting... (0)
Oct 07 14:19:35 compute-0 systemd[1]: libpod-895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5.scope: Deactivated successfully.
Oct 07 14:19:35 compute-0 podman[337017]: 2025-10-07 14:19:35.338777536 +0000 UTC m=+0.058685101 container died 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.346 2 DEBUG nova.virt.libvirt.vif [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1269325506',display_name='tempest-ListServersNegativeTestJSON-server-1269325506-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1269325506-3',id=75,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-07T14:19:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd27f5e3b8cf47649e0c6593a61034a5',ramdisk_id='',reservation_id='r-hlvgjz1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1836187401',owner_user_name='tempest-ListServersNegativeTestJSON-1836187401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:21Z,user_data=None,user_id='270ab376b3a74a75a6bab5c13d3100c0',uuid=6f82c687-5361-4922-85c2-ea9d7e48a39a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.346 2 DEBUG nova.network.os_vif_util [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converting VIF {"id": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "address": "fa:16:3e:88:aa:95", "network": {"id": "db7fe9a9-837e-4f1a-8110-d3f657219e12", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1852045879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd27f5e3b8cf47649e0c6593a61034a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e52cb7-a4", "ovs_interfaceid": "b7e52cb7-a446-4784-b8e5-3819b3120f8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.347 2 DEBUG nova.network.os_vif_util [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.348 2 DEBUG os_vif [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7e52cb7-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.360 2 INFO os_vif [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=b7e52cb7-a446-4784-b8e5-3819b3120f8a,network=Network(db7fe9a9-837e-4f1a-8110-d3f657219e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e52cb7-a4')
Oct 07 14:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5-userdata-shm.mount: Deactivated successfully.
Oct 07 14:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-843bcc122bf2585225dbc5eda2ef41a336207dc6b30a2484c3b0ddeea21be137-merged.mount: Deactivated successfully.
Oct 07 14:19:35 compute-0 podman[337017]: 2025-10-07 14:19:35.423523005 +0000 UTC m=+0.143430560 container cleanup 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:19:35 compute-0 systemd[1]: libpod-conmon-895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5.scope: Deactivated successfully.
Oct 07 14:19:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261787050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:35 compute-0 podman[337068]: 2025-10-07 14:19:35.490139185 +0000 UTC m=+0.045916614 container remove 895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6562a3bc-cd08-4d37-bbfa-048b8a6361fd]: (4, ('Tue Oct  7 02:19:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12 (895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5)\n895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5\nTue Oct  7 02:19:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12 (895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5)\n895556184f7424a6cfe27d827c003acaacb85f22ae55cf9cc678755cc28ed5d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.500 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e823d40-d993-4790-bc67-69f0882720d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.501 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb7fe9a9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.506 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:35 compute-0 kernel: tapdb7fe9a9-80: left promiscuous mode
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[94ca51f9-d6a5-439d-b7d8-3973ead0fa7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.550 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbd1712-2c65-41d0-9d72-1f94dbb46722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beee29c7-8bc6-4851-a26e-c892a45c0e2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.565 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd63c148-2474-47db-8d37-e1fa863b171e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733962, 'reachable_time': 32946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337090, 'error': None, 'target': 'ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.568 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db7fe9a9-837e-4f1a-8110-d3f657219e12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:19:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:35.568 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[602ab183-38b1-4a72-93e7-a7489ceeea31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:35 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb7fe9a9\x2d837e\x2d4f1a\x2d8110\x2dd3f657219e12.mount: Deactivated successfully.
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.605 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.605 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.609 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.609 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.612 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.612 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.801 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.802 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3668MB free_disk=59.82072448730469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.803 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.803 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.851 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:19:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 356 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 6.3 MiB/s wr, 372 op/s
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.882 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1d580bbb-a6fd-442c-8524-409ba5c344d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.882 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.883 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 92c7d888-ced0-4649-9ad6-317350eb7225 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.883 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6f82c687-5361-4922-85c2-ea9d7e48a39a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.884 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.884 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:19:35 compute-0 nova_compute[259550]: 2025-10-07 14:19:35.989 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/261787050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.048 2 INFO nova.virt.libvirt.driver [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Deleting instance files /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225_del
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.050 2 INFO nova.virt.libvirt.driver [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Deletion of /var/lib/nova/instances/92c7d888-ced0-4649-9ad6-317350eb7225_del complete
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.101 2 INFO nova.compute.manager [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Took 1.38 seconds to destroy the instance on the hypervisor.
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.102 2 DEBUG oslo.service.loopingcall [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.102 2 DEBUG nova.compute.manager [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.102 2 DEBUG nova.network.neutron [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:19:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.206 2 INFO nova.virt.libvirt.driver [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Deleting instance files /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a_del
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.206 2 INFO nova.virt.libvirt.driver [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Deletion of /var/lib/nova/instances/6f82c687-5361-4922-85c2-ea9d7e48a39a_del complete
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.254 2 INFO nova.compute.manager [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Took 1.37 seconds to destroy the instance on the hypervisor.
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.254 2 DEBUG oslo.service.loopingcall [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.255 2 DEBUG nova.compute.manager [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.255 2 DEBUG nova.network.neutron [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:19:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966830984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.452 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.457 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.578 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.638 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.639 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:36 compute-0 nova_compute[259550]: 2025-10-07 14:19:36.987 2 DEBUG nova.network.neutron [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.004 2 INFO nova.compute.manager [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Took 0.90 seconds to deallocate network for instance.
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.042 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.043 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:37 compute-0 ceph-mon[74295]: pgmap v1650: 305 pgs: 305 active+clean; 356 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 6.3 MiB/s wr, 372 op/s
Oct 07 14:19:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1966830984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.077 2 DEBUG nova.network.neutron [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.102 2 INFO nova.compute.manager [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Took 0.85 seconds to deallocate network for instance.
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.110 2 DEBUG nova.compute.manager [req-9ab737cb-d3e7-4203-b18d-8c0714eee5c5 req-e515c975-0514-4458-ba6d-5ce1c694a30f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-deleted-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.164 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.170 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.170 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.171 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.171 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.171 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] No waiting events found dispatching network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.171 2 WARNING nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Received unexpected event network-vif-plugged-25f2e3b0-8524-4ad6-aa04-e562aad3ab5e for instance with vm_state deleted and task_state None.
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.171 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-vif-unplugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.172 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.172 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.172 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.172 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] No waiting events found dispatching network-vif-unplugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.172 2 WARNING nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received unexpected event network-vif-unplugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a for instance with vm_state deleted and task_state None.
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 DEBUG nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] No waiting events found dispatching network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.173 2 WARNING nova.compute.manager [req-09de4c9d-b731-452b-b2df-760457a461f2 req-5c6f4f24-22d0-40b4-af2f-5014c894b7de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received unexpected event network-vif-plugged-b7e52cb7-a446-4784-b8e5-3819b3120f8a for instance with vm_state deleted and task_state None.
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.180 2 DEBUG oslo_concurrency.processutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1297185102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.648 2 DEBUG oslo_concurrency.processutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.655 2 DEBUG nova.compute.provider_tree [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.671 2 DEBUG nova.scheduler.client.report [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.689 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.692 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.725 2 INFO nova.scheduler.client.report [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Deleted allocations for instance 92c7d888-ced0-4649-9ad6-317350eb7225
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.783 2 DEBUG oslo_concurrency.processutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:37 compute-0 nova_compute[259550]: 2025-10-07 14:19:37.822 2 DEBUG oslo_concurrency.lockutils [None req-392094a9-a038-4fb4-8bed-bf8375fae2dd 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "92c7d888-ced0-4649-9ad6-317350eb7225" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 356 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 249 op/s
Oct 07 14:19:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1297185102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:38 compute-0 ceph-mon[74295]: pgmap v1651: 305 pgs: 305 active+clean; 356 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 249 op/s
Oct 07 14:19:38 compute-0 kernel: tap8017da49-bb (unregistering): left promiscuous mode
Oct 07 14:19:38 compute-0 NetworkManager[44949]: <info>  [1759846778.1655] device (tap8017da49-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:38 compute-0 ovn_controller[151684]: 2025-10-07T14:19:38Z|00752|binding|INFO|Releasing lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 from this chassis (sb_readonly=0)
Oct 07 14:19:38 compute-0 ovn_controller[151684]: 2025-10-07T14:19:38Z|00753|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 down in Southbound
Oct 07 14:19:38 compute-0 ovn_controller[151684]: 2025-10-07T14:19:38Z|00754|binding|INFO|Removing iface tap8017da49-bb ovn-installed in OVS
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:38.182 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:ba:6c 10.100.0.7'], port_security=['fa:16:3e:06:ba:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f862c074-27c9-45f5-8f61-3c7cbb05b94a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '743fea8acfcc4e73b1981dc0dcf95f63', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e3566eb-dd32-483b-8573-b62f6b6a147e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a2b256-a7c8-4510-ae1e-632d23b47983, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8017da49-bbc8-4eae-8b3e-79bb4e587e62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:38.183 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 in datapath f862c074-27c9-45f5-8f61-3c7cbb05b94a unbound from our chassis
Oct 07 14:19:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:38.184 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f862c074-27c9-45f5-8f61-3c7cbb05b94a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:19:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:38.185 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1005f4a-3a4b-4d4e-b257-becd86371b3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:38 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 07 14:19:38 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Consumed 13.468s CPU time.
Oct 07 14:19:38 compute-0 systemd-machined[214580]: Machine qemu-87-instance-00000048 terminated.
Oct 07 14:19:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548571387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.273 2 DEBUG oslo_concurrency.processutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.278 2 DEBUG nova.compute.provider_tree [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.308 2 DEBUG nova.scheduler.client.report [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.338 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.366 2 INFO nova.scheduler.client.report [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Deleted allocations for instance 6f82c687-5361-4922-85c2-ea9d7e48a39a
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.444 2 DEBUG oslo_concurrency.lockutils [None req-2a927d04-0227-48f0-a048-c67f45db13b7 270ab376b3a74a75a6bab5c13d3100c0 cd27f5e3b8cf47649e0c6593a61034a5 - - default default] Lock "6f82c687-5361-4922-85c2-ea9d7e48a39a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.627 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.866 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance shutdown successfully after 13 seconds.
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.872 2 INFO nova.virt.libvirt.driver [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance destroyed successfully.
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.872 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.887 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Attempting rescue
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.888 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.894 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.894 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Creating image(s)
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.915 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.918 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.951 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.970 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:38 compute-0 nova_compute[259550]: 2025-10-07 14:19:38.973 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1548571387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.072 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.073 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.073 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.073 2 DEBUG oslo_concurrency.lockutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.092 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.095 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.238 2 DEBUG nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Received event network-vif-deleted-b7e52cb7-a446-4784-b8e5-3819b3120f8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.239 2 DEBUG nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.240 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.240 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.241 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.241 2 DEBUG nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.242 2 WARNING nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state active and task_state rescuing.
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.242 2 DEBUG nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.243 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.243 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.244 2 DEBUG oslo_concurrency.lockutils [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.244 2 DEBUG nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.245 2 WARNING nova.compute.manager [req-acb37187-c69c-4f48-992b-4d7053f7a6aa req-78cc9860-4edb-479c-be36-4517e79c17e3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state active and task_state rescuing.
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.404 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.405 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'migration_context' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.419 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.420 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start _get_guest_xml network_info=[{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "vif_mac": "fa:16:3e:06:ba:6c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.420 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'resources' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.437 2 WARNING nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.442 2 DEBUG nova.virt.libvirt.host [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.442 2 DEBUG nova.virt.libvirt.host [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.445 2 DEBUG nova.virt.libvirt.host [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.445 2 DEBUG nova.virt.libvirt.host [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.446 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.446 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.446 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.447 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.447 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.447 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.447 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.447 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.448 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.448 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.448 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.448 2 DEBUG nova.virt.hardware [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.449 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.471 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 250 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.4 MiB/s wr, 314 op/s
Oct 07 14:19:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538663313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.892 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.894 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:39 compute-0 nova_compute[259550]: 2025-10-07 14:19:39.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.019 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:40 compute-0 ceph-mon[74295]: pgmap v1652: 305 pgs: 305 active+clean; 250 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.4 MiB/s wr, 314 op/s
Oct 07 14:19:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/538663313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389092032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.341 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.342 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.720 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846765.7132027, bb276692-f47b-4c86-864a-f3654cf63f5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.721 2 INFO nova.compute.manager [-] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] VM Stopped (Lifecycle Event)
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.762 2 DEBUG nova.compute.manager [None req-051d6f1b-1593-4456-905c-f19a38042a02 - - - - - -] [instance: bb276692-f47b-4c86-864a-f3654cf63f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331843202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.788 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.790 2 DEBUG nova.virt.libvirt.vif [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-2005089717',display_name='tempest-ServerRescueTestJSONUnderV235-server-2005089717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-2005089717',id=72,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:19:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='743fea8acfcc4e73b1981dc0dcf95f63',ramdisk_id='',reservation_id='r-g9plrghc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-510433047',owner_user_name='tempest-ServerRescueTestJSONUnderV235-510433047-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:19Z,user_data=None,user_id='b627cc9a6a884d6cb236df7e0154b97a',uuid=1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "vif_mac": "fa:16:3e:06:ba:6c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.790 2 DEBUG nova.network.os_vif_util [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converting VIF {"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "vif_mac": "fa:16:3e:06:ba:6c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.791 2 DEBUG nova.network.os_vif_util [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.792 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.817 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <uuid>1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</uuid>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <name>instance-00000048</name>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-2005089717</nova:name>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:39</nova:creationTime>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:user uuid="b627cc9a6a884d6cb236df7e0154b97a">tempest-ServerRescueTestJSONUnderV235-510433047-project-member</nova:user>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:project uuid="743fea8acfcc4e73b1981dc0dcf95f63">tempest-ServerRescueTestJSONUnderV235-510433047</nova:project>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <nova:port uuid="8017da49-bbc8-4eae-8b3e-79bb4e587e62">
Oct 07 14:19:40 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="serial">1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="uuid">1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.rescue">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <target dev="vdb" bus="virtio"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config.rescue">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:06:ba:6c"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <target dev="tap8017da49-bb"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/console.log" append="off"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:40 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:40 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:40 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:40 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:40 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.825 2 INFO nova.virt.libvirt.driver [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance destroyed successfully.
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.888 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.889 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.889 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.889 2 DEBUG nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] No VIF found with MAC fa:16:3e:06:ba:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.890 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Using config drive
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.912 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:40 compute-0 nova_compute[259550]: 2025-10-07 14:19:40.956 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.000 2 DEBUG nova.objects.instance [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'keypairs' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2389092032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1331843202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.675 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Creating config drive at /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.680 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo5wfpkcf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.822 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo5wfpkcf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.849 2 DEBUG nova.storage.rbd_utils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] rbd image 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:41 compute-0 nova_compute[259550]: 2025-10-07 14:19:41.853 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 226 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 7.3 MiB/s wr, 300 op/s
Oct 07 14:19:42 compute-0 ceph-mon[74295]: pgmap v1653: 305 pgs: 305 active+clean; 226 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 7.3 MiB/s wr, 300 op/s
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.382 2 DEBUG oslo_concurrency.processutils [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.383 2 INFO nova.virt.libvirt.driver [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Deleting local config drive /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e/disk.config.rescue because it was imported into RBD.
Oct 07 14:19:42 compute-0 kernel: tap8017da49-bb: entered promiscuous mode
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:42 compute-0 NetworkManager[44949]: <info>  [1759846782.4427] manager: (tap8017da49-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Oct 07 14:19:42 compute-0 ovn_controller[151684]: 2025-10-07T14:19:42Z|00755|binding|INFO|Claiming lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 for this chassis.
Oct 07 14:19:42 compute-0 ovn_controller[151684]: 2025-10-07T14:19:42Z|00756|binding|INFO|8017da49-bbc8-4eae-8b3e-79bb4e587e62: Claiming fa:16:3e:06:ba:6c 10.100.0.7
Oct 07 14:19:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:42.455 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:ba:6c 10.100.0.7'], port_security=['fa:16:3e:06:ba:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f862c074-27c9-45f5-8f61-3c7cbb05b94a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '743fea8acfcc4e73b1981dc0dcf95f63', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0e3566eb-dd32-483b-8573-b62f6b6a147e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a2b256-a7c8-4510-ae1e-632d23b47983, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8017da49-bbc8-4eae-8b3e-79bb4e587e62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:42.457 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 in datapath f862c074-27c9-45f5-8f61-3c7cbb05b94a bound to our chassis
Oct 07 14:19:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:42.458 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f862c074-27c9-45f5-8f61-3c7cbb05b94a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:19:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:42.459 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37a1794d-d5f5-484a-b12b-e1dc2f0504dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:42 compute-0 systemd-udevd[337407]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:42 compute-0 ovn_controller[151684]: 2025-10-07T14:19:42Z|00757|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 ovn-installed in OVS
Oct 07 14:19:42 compute-0 ovn_controller[151684]: 2025-10-07T14:19:42Z|00758|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 up in Southbound
Oct 07 14:19:42 compute-0 systemd-machined[214580]: New machine qemu-92-instance-00000048.
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:42 compute-0 NetworkManager[44949]: <info>  [1759846782.4808] device (tap8017da49-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:42 compute-0 NetworkManager[44949]: <info>  [1759846782.4821] device (tap8017da49-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:42 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-00000048.
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.951 2 DEBUG nova.compute.manager [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.951 2 DEBUG oslo_concurrency.lockutils [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.952 2 DEBUG oslo_concurrency.lockutils [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.952 2 DEBUG oslo_concurrency.lockutils [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.952 2 DEBUG nova.compute.manager [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:42 compute-0 nova_compute[259550]: 2025-10-07 14:19:42.952 2 WARNING nova.compute.manager [req-79225247-35ce-43d0-bcfa-c7e2ae253dec req-55c5636a-cf2c-475d-8f0d-25c8bd3fc6aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state active and task_state rescuing.
Oct 07 14:19:42 compute-0 ovn_controller[151684]: 2025-10-07T14:19:42Z|00759|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.400 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.401 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846783.4003808, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.401 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Resumed (Lifecycle Event)
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.405 2 DEBUG nova.compute.manager [None req-97840aa6-98f5-447a-b650-e14c4eadd17e b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.598 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.602 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.667 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.668 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846783.4033751, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.668 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Started (Lifecycle Event)
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.780 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:43 compute-0 nova_compute[259550]: 2025-10-07 14:19:43.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.2 MiB/s wr, 313 op/s
Oct 07 14:19:44 compute-0 nova_compute[259550]: 2025-10-07 14:19:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:44 compute-0 nova_compute[259550]: 2025-10-07 14:19:44.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:19:44 compute-0 nova_compute[259550]: 2025-10-07 14:19:44.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:19:44 compute-0 nova_compute[259550]: 2025-10-07 14:19:44.983 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:44 compute-0 nova_compute[259550]: 2025-10-07 14:19:44.984 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.009 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:45 compute-0 ceph-mon[74295]: pgmap v1654: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.2 MiB/s wr, 313 op/s
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.087 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.088 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.095 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.095 2 INFO nova.compute.claims [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.196 2 DEBUG nova.compute.manager [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.197 2 DEBUG oslo_concurrency.lockutils [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.197 2 DEBUG oslo_concurrency.lockutils [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.197 2 DEBUG oslo_concurrency.lockutils [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.198 2 DEBUG nova.compute.manager [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.198 2 WARNING nova.compute.manager [req-c0217f91-7104-471e-9b84-38ec86192bca req-42fa982e-9621-4f9e-8cbe-2a7da0423143 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state rescued and task_state None.
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.241 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360772937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.671 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.676 2 DEBUG nova.compute.provider_tree [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.702 2 DEBUG nova.scheduler.client.report [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.743 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.744 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.799 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.800 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.820 2 INFO nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.848 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.855 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.855 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.855 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.856 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 216 op/s
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.988 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.991 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:45 compute-0 nova_compute[259550]: 2025-10-07 14:19:45.991 2 INFO nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Creating image(s)
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.023 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.053 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3360772937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:46 compute-0 ceph-mon[74295]: pgmap v1655: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 216 op/s
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.092 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.097 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.189 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.190 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.191 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.192 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.229 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.233 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b9440d6f-9236-4208-b50d-4badc845e3cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.270 2 DEBUG nova.policy [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '732745f303ad42618c5be16545b23042', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7abbe3898e634d999186c395be83c175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:46 compute-0 nova_compute[259550]: 2025-10-07 14:19:46.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:47 compute-0 podman[337594]: 2025-10-07 14:19:47.074395675 +0000 UTC m=+0.056612317 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:19:47 compute-0 podman[337595]: 2025-10-07 14:19:47.119387253 +0000 UTC m=+0.101878482 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.166 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b9440d6f-9236-4208-b50d-4badc845e3cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.933s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.225 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] resizing rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.569 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Successfully created port: 0e8be78f-ea5e-430e-a21b-0be4db278329 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.742 2 DEBUG nova.objects.instance [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lazy-loading 'migration_context' on Instance uuid b9440d6f-9236-4208-b50d-4badc845e3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.757 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.757 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Ensure instance console log exists: /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.757 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.758 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:47 compute-0 nova_compute[259550]: 2025-10-07 14:19:47.758 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.554 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.579 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.580 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.581 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.599 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Successfully updated port: 0e8be78f-ea5e-430e-a21b-0be4db278329 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.614 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.614 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquired lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.614 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.711 2 DEBUG nova.compute.manager [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-changed-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.712 2 DEBUG nova.compute.manager [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Refreshing instance network info cache due to event network-changed-0e8be78f-ea5e-430e-a21b-0be4db278329. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.712 2 DEBUG oslo_concurrency.lockutils [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:48 compute-0 nova_compute[259550]: 2025-10-07 14:19:48.992 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:19:49 compute-0 ceph-mon[74295]: pgmap v1656: 305 pgs: 305 active+clean; 248 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.068 2 DEBUG nova.compute.manager [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.068 2 DEBUG nova.compute.manager [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing instance network info cache due to event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.068 2 DEBUG oslo_concurrency.lockutils [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.069 2 DEBUG oslo_concurrency.lockutils [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.069 2 DEBUG nova.network.neutron [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 277 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.953 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846774.951039, 92c7d888-ced0-4649-9ad6-317350eb7225 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.953 2 INFO nova.compute.manager [-] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] VM Stopped (Lifecycle Event)
Oct 07 14:19:49 compute-0 sudo[337709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:49 compute-0 nova_compute[259550]: 2025-10-07 14:19:49.991 2 DEBUG nova.compute.manager [None req-0381c201-e829-4e52-b4b9-3edb175419ae - - - - - -] [instance: 92c7d888-ced0-4649-9ad6-317350eb7225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:49 compute-0 sudo[337709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:49 compute-0 sudo[337709]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 sudo[337734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:19:50 compute-0 sudo[337734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 sudo[337734]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 sudo[337759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:50 compute-0 sudo[337759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 sudo[337759]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 ceph-mon[74295]: pgmap v1657: 305 pgs: 305 active+clean; 277 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Oct 07 14:19:50 compute-0 sudo[337784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:19:50 compute-0 sudo[337784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.314 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846775.3138149, 6f82c687-5361-4922-85c2-ea9d7e48a39a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.315 2 INFO nova.compute.manager [-] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] VM Stopped (Lifecycle Event)
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.340 2 DEBUG nova.compute.manager [None req-c9e2701b-529f-40f5-9f27-43d2456f6a7e - - - - - -] [instance: 6f82c687-5361-4922-85c2-ea9d7e48a39a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.526 2 DEBUG nova.network.neutron [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Updating instance_info_cache with network_info: [{"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.546 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Releasing lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.547 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Instance network_info: |[{"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.547 2 DEBUG oslo_concurrency.lockutils [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.548 2 DEBUG nova.network.neutron [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Refreshing network info cache for port 0e8be78f-ea5e-430e-a21b-0be4db278329 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.552 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Start _get_guest_xml network_info=[{"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.558 2 WARNING nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.567 2 DEBUG nova.virt.libvirt.host [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.568 2 DEBUG nova.virt.libvirt.host [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.571 2 DEBUG nova.virt.libvirt.host [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.572 2 DEBUG nova.virt.libvirt.host [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.573 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.573 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.573 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.574 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.574 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.574 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.574 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.575 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.575 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.576 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.576 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.576 2 DEBUG nova.virt.hardware [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.578 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:50 compute-0 sudo[337784]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:50 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ded0f091-f445-4fa4-8386-b609c21ca4cb does not exist
Oct 07 14:19:50 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5e05f9f2-a231-4ba7-b6e0-4865653e7968 does not exist
Oct 07 14:19:50 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7631eeff-c901-4ba1-b898-c6190f13467d does not exist
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:19:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:19:50 compute-0 sudo[337839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:50 compute-0 sudo[337839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 sudo[337839]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.786 2 DEBUG nova.compute.manager [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.787 2 DEBUG nova.compute.manager [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing instance network info cache due to event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.787 2 DEBUG oslo_concurrency.lockutils [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:50 compute-0 sudo[337883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:19:50 compute-0 sudo[337883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 sudo[337883]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 sudo[337908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:50 compute-0 sudo[337908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 sudo[337908]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:50 compute-0 sudo[337933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:19:50 compute-0 sudo[337933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.940 2 DEBUG nova.network.neutron [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updated VIF entry in instance network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.941 2 DEBUG nova.network.neutron [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.956 2 DEBUG oslo_concurrency.lockutils [req-75ad3813-13ab-4513-93fa-4ffb504021aa req-5cd3fcbe-5d51-4907-942d-5331965619f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.958 2 DEBUG oslo_concurrency.lockutils [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:50 compute-0 nova_compute[259550]: 2025-10-07 14:19:50.958 2 DEBUG nova.network.neutron [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279826336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.066 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.090 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.094 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.232803317 +0000 UTC m=+0.045467962 container create c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:19:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4279826336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:51 compute-0 systemd[1]: Started libpod-conmon-c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71.scope.
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.21021175 +0000 UTC m=+0.022876415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.328521715 +0000 UTC m=+0.141186380 container init c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.335334456 +0000 UTC m=+0.147999111 container start c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.339854345 +0000 UTC m=+0.152518990 container attach c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:19:51 compute-0 quizzical_kapitsa[338054]: 167 167
Oct 07 14:19:51 compute-0 systemd[1]: libpod-c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71.scope: Deactivated successfully.
Oct 07 14:19:51 compute-0 conmon[338054]: conmon c081365d9c3c96af57e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71.scope/container/memory.events
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.342679449 +0000 UTC m=+0.155344094 container died c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bf11c8fe54177457cc0e2c5d1cd49e53bfca90ad9e2ba9d432c0405c5e9cdb4-merged.mount: Deactivated successfully.
Oct 07 14:19:51 compute-0 podman[338019]: 2025-10-07 14:19:51.396353167 +0000 UTC m=+0.209017812 container remove c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:51 compute-0 systemd[1]: libpod-conmon-c081365d9c3c96af57e8363bcfb12b7a1c02c49c803a60ef9a51cf6e59b7ba71.scope: Deactivated successfully.
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:19:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3828275872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.540 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.541 2 DEBUG nova.virt.libvirt.vif [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1177251261',display_name='tempest-ServerAddressesTestJSON-server-1177251261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1177251261',id=76,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7abbe3898e634d999186c395be83c175',ramdisk_id='',reservation_id='r-gehsi897',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2009272603',owner_user_name='tempest-ServerAddressesTestJSON-2009272603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:45Z,user_data=None,user_id='732745f303ad42618c5be16545b23042',uuid=b9440d6f-9236-4208-b50d-4badc845e3cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.542 2 DEBUG nova.network.os_vif_util [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converting VIF {"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.543 2 DEBUG nova.network.os_vif_util [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.544 2 DEBUG nova.objects.instance [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9440d6f-9236-4208-b50d-4badc845e3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:51 compute-0 podman[338079]: 2025-10-07 14:19:51.570822816 +0000 UTC m=+0.042053152 container create 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.610 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <uuid>b9440d6f-9236-4208-b50d-4badc845e3cd</uuid>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <name>instance-0000004c</name>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerAddressesTestJSON-server-1177251261</nova:name>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:19:50</nova:creationTime>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:user uuid="732745f303ad42618c5be16545b23042">tempest-ServerAddressesTestJSON-2009272603-project-member</nova:user>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:project uuid="7abbe3898e634d999186c395be83c175">tempest-ServerAddressesTestJSON-2009272603</nova:project>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <nova:port uuid="0e8be78f-ea5e-430e-a21b-0be4db278329">
Oct 07 14:19:51 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <system>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="serial">b9440d6f-9236-4208-b50d-4badc845e3cd</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="uuid">b9440d6f-9236-4208-b50d-4badc845e3cd</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </system>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <os>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </os>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <features>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </features>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b9440d6f-9236-4208-b50d-4badc845e3cd_disk">
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config">
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:19:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:06:14:c1"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <target dev="tap0e8be78f-ea"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/console.log" append="off"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <video>
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </video>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:19:51 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:19:51 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:19:51 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:19:51 compute-0 nova_compute[259550]: </domain>
Oct 07 14:19:51 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.616 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Preparing to wait for external event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.616 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.616 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.617 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.617 2 DEBUG nova.virt.libvirt.vif [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1177251261',display_name='tempest-ServerAddressesTestJSON-server-1177251261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1177251261',id=76,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7abbe3898e634d999186c395be83c175',ramdisk_id='',reservation_id='r-gehsi897',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2009272603',owner_user_name='tempest-ServerAddressesTestJSON-2009272603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:45Z,user_data=None,user_id='732745f303ad42618c5be16545b23042',uuid=b9440d6f-9236-4208-b50d-4badc845e3cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.618 2 DEBUG nova.network.os_vif_util [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converting VIF {"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.618 2 DEBUG nova.network.os_vif_util [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:51 compute-0 systemd[1]: Started libpod-conmon-1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d.scope.
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.621 2 DEBUG os_vif [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e8be78f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e8be78f-ea, col_values=(('external_ids', {'iface-id': '0e8be78f-ea5e-430e-a21b-0be4db278329', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:14:c1', 'vm-uuid': 'b9440d6f-9236-4208-b50d-4badc845e3cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:51 compute-0 NetworkManager[44949]: <info>  [1759846791.6350] manager: (tap0e8be78f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.642 2 INFO os_vif [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea')
Oct 07 14:19:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:51 compute-0 podman[338079]: 2025-10-07 14:19:51.550331615 +0000 UTC m=+0.021561981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:51 compute-0 podman[338079]: 2025-10-07 14:19:51.677196076 +0000 UTC m=+0.148426442 container init 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:19:51 compute-0 podman[338079]: 2025-10-07 14:19:51.68525918 +0000 UTC m=+0.156489516 container start 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:19:51 compute-0 podman[338079]: 2025-10-07 14:19:51.691042212 +0000 UTC m=+0.162272588 container attach 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.712 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.712 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.713 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] No VIF found with MAC fa:16:3e:06:14:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.713 2 INFO nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Using config drive
Oct 07 14:19:51 compute-0 nova_compute[259550]: 2025-10-07 14:19:51.741 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Oct 07 14:19:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3828275872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:19:52 compute-0 ceph-mon[74295]: pgmap v1658: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.404 2 INFO nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Creating config drive at /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.410 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjj6w_c5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.441 2 DEBUG nova.network.neutron [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Updated VIF entry in instance network info cache for port 0e8be78f-ea5e-430e-a21b-0be4db278329. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.442 2 DEBUG nova.network.neutron [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Updating instance_info_cache with network_info: [{"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.461 2 DEBUG oslo_concurrency.lockutils [req-d8775294-222c-4359-b161-75177b8ca8c3 req-673079fb-8ae4-4d0d-926b-e08f43347cfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b9440d6f-9236-4208-b50d-4badc845e3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.548 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjj6w_c5m" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.576 2 DEBUG nova.storage.rbd_utils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] rbd image b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.580 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:19:52 compute-0 admiring_pasteur[338097]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:19:52 compute-0 admiring_pasteur[338097]: --> relative data size: 1.0
Oct 07 14:19:52 compute-0 admiring_pasteur[338097]: --> All data devices are unavailable
Oct 07 14:19:52 compute-0 systemd[1]: libpod-1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d.scope: Deactivated successfully.
Oct 07 14:19:52 compute-0 podman[338079]: 2025-10-07 14:19:52.766593835 +0000 UTC m=+1.237824201 container died 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:19:52 compute-0 systemd[1]: libpod-1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d.scope: Consumed 1.024s CPU time.
Oct 07 14:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-656afff0e381d7da8182c1983eeb19b24d1a2f22292f246ad4822ad5d25363f9-merged.mount: Deactivated successfully.
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.817 2 DEBUG oslo_concurrency.processutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config b9440d6f-9236-4208-b50d-4badc845e3cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.822 2 INFO nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Deleting local config drive /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd/disk.config because it was imported into RBD.
Oct 07 14:19:52 compute-0 podman[338079]: 2025-10-07 14:19:52.854787224 +0000 UTC m=+1.326017570 container remove 1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pasteur, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:19:52 compute-0 systemd[1]: libpod-conmon-1908049262edb416a7a9f34237c941c7d83f18ae183d643080a11edf9bec998d.scope: Deactivated successfully.
Oct 07 14:19:52 compute-0 sudo[337933]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:52 compute-0 kernel: tap0e8be78f-ea: entered promiscuous mode
Oct 07 14:19:52 compute-0 NetworkManager[44949]: <info>  [1759846792.8849] manager: (tap0e8be78f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Oct 07 14:19:52 compute-0 ovn_controller[151684]: 2025-10-07T14:19:52Z|00760|binding|INFO|Claiming lport 0e8be78f-ea5e-430e-a21b-0be4db278329 for this chassis.
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:52 compute-0 ovn_controller[151684]: 2025-10-07T14:19:52Z|00761|binding|INFO|0e8be78f-ea5e-430e-a21b-0be4db278329: Claiming fa:16:3e:06:14:c1 10.100.0.4
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.896 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:14:c1 10.100.0.4'], port_security=['fa:16:3e:06:14:c1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b9440d6f-9236-4208-b50d-4badc845e3cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7abbe3898e634d999186c395be83c175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83cc0daa-bd02-4453-aa45-32daca20eb15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6278ad7c-8290-4efd-a23d-6685530f4733, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0e8be78f-ea5e-430e-a21b-0be4db278329) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.898 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0e8be78f-ea5e-430e-a21b-0be4db278329 in datapath 94bb140d-c42d-404a-bd0c-9da8b76b4904 bound to our chassis
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.899 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94bb140d-c42d-404a-bd0c-9da8b76b4904
Oct 07 14:19:52 compute-0 ovn_controller[151684]: 2025-10-07T14:19:52Z|00762|binding|INFO|Setting lport 0e8be78f-ea5e-430e-a21b-0be4db278329 ovn-installed in OVS
Oct 07 14:19:52 compute-0 ovn_controller[151684]: 2025-10-07T14:19:52Z|00763|binding|INFO|Setting lport 0e8be78f-ea5e-430e-a21b-0be4db278329 up in Southbound
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.915 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc35e5e-a8cc-4ff9-a414-74c9f52d8687]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:52 compute-0 systemd-udevd[338224]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.921 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94bb140d-c1 in ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.925 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94bb140d-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.925 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98ca5532-b8f6-468e-84d5-c1318bad1e44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fc490d-32a5-48ab-a5a0-8cd9ddb1505a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 systemd-machined[214580]: New machine qemu-93-instance-0000004c.
Oct 07 14:19:52 compute-0 NetworkManager[44949]: <info>  [1759846792.9405] device (tap0e8be78f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:19:52 compute-0 NetworkManager[44949]: <info>  [1759846792.9418] device (tap0e8be78f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.938 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9510ed-f4da-42f9-82e4-fac29d2b6c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004c.
Oct 07 14:19:52 compute-0 sudo[338209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.954 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0a8e96-00fa-4d6d-bddf-e321641d9f9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 sudo[338209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:52 compute-0 sudo[338209]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.967 2 DEBUG nova.network.neutron [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updated VIF entry in instance network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.968 2 DEBUG nova.network.neutron [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:52 compute-0 nova_compute[259550]: 2025-10-07 14:19:52.982 2 DEBUG oslo_concurrency.lockutils [req-fd8972a4-f9a1-4189-9416-d57399a4cd0e req-76280234-589b-454b-800c-8c7c1a8f092f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.983 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a00bdfe-b241-411b-99a1-4ae5df8fedd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:52 compute-0 systemd-udevd[338240]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:19:52 compute-0 NetworkManager[44949]: <info>  [1759846792.9911] manager: (tap94bb140d-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:52.990 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3354f8b8-1771-40a3-9805-4aa42824be29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 sudo[338246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:19:53 compute-0 sudo[338246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:53 compute-0 sudo[338246]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.037 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3f8dd1-0464-40d6-83a3-d49e1ad0155d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.040 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d54b1b8e-dcff-4031-982d-8b6c0f517937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.062 2 DEBUG nova.compute.manager [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.063 2 DEBUG nova.compute.manager [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing instance network info cache due to event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.063 2 DEBUG oslo_concurrency.lockutils [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.063 2 DEBUG oslo_concurrency.lockutils [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.063 2 DEBUG nova.network.neutron [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:53 compute-0 NetworkManager[44949]: <info>  [1759846793.0660] device (tap94bb140d-c0): carrier: link connected
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.073 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9233f639-3fc0-48c4-9f6b-e41709ba7aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 sudo[338294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:53 compute-0 sudo[338294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:53 compute-0 sudo[338294]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.094 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0be945ef-f86b-4ee7-971a-9db2f09e60f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bb140d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:99:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737663, 'reachable_time': 27387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338320, 'error': None, 'target': 'ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[336260fd-c474-4a72-a07b-e3270eef548c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:998a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737663, 'tstamp': 737663}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338328, 'error': None, 'target': 'ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 sudo[338322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:19:53 compute-0 sudo[338322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.146 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39e9a509-2099-407e-9609-bc439461e952]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bb140d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:99:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737663, 'reachable_time': 27387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338346, 'error': None, 'target': 'ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.184 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d295635-3369-457d-b1db-d3998df7df6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.240 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e154218c-d267-45aa-8a27-ed05a2885d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.241 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bb140d-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.242 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.242 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94bb140d-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 kernel: tap94bb140d-c0: entered promiscuous mode
Oct 07 14:19:53 compute-0 NetworkManager[44949]: <info>  [1759846793.2450] manager: (tap94bb140d-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.250 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94bb140d-c0, col_values=(('external_ids', {'iface-id': '549760bf-662c-45da-b5ac-c14e5f35da44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 ovn_controller[151684]: 2025-10-07T14:19:53Z|00764|binding|INFO|Releasing lport 549760bf-662c-45da-b5ac-c14e5f35da44 from this chassis (sb_readonly=0)
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.256 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94bb140d-c42d-404a-bd0c-9da8b76b4904.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94bb140d-c42d-404a-bd0c-9da8b76b4904.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6eea22a9-9bd8-4c5e-a7f0-08f44af9d39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.257 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-94bb140d-c42d-404a-bd0c-9da8b76b4904
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/94bb140d-c42d-404a-bd0c-9da8b76b4904.pid.haproxy
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 94bb140d-c42d-404a-bd0c-9da8b76b4904
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:19:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:53.258 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'env', 'PROCESS_TAG=haproxy-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94bb140d-c42d-404a-bd0c-9da8b76b4904.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:19:53 compute-0 nova_compute[259550]: 2025-10-07 14:19:53.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:53 compute-0 podman[338397]: 2025-10-07 14:19:53.46032449 +0000 UTC m=+0.040740037 container create e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:19:53 compute-0 systemd[1]: Started libpod-conmon-e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546.scope.
Oct 07 14:19:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:53 compute-0 podman[338397]: 2025-10-07 14:19:53.442631484 +0000 UTC m=+0.023047051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:53 compute-0 podman[338397]: 2025-10-07 14:19:53.559199143 +0000 UTC m=+0.139614710 container init e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:19:53 compute-0 podman[338397]: 2025-10-07 14:19:53.570789349 +0000 UTC m=+0.151204896 container start e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:19:53 compute-0 podman[338397]: 2025-10-07 14:19:53.575252637 +0000 UTC m=+0.155668204 container attach e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:19:53 compute-0 boring_edison[338413]: 167 167
Oct 07 14:19:53 compute-0 systemd[1]: libpod-e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546.scope: Deactivated successfully.
Oct 07 14:19:53 compute-0 podman[338432]: 2025-10-07 14:19:53.631799741 +0000 UTC m=+0.028799682 container died e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0e0b503c40e2bed49718bf84c2a74da4518cfacbd2e8db56075a21941e0899-merged.mount: Deactivated successfully.
Oct 07 14:19:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 118 op/s
Oct 07 14:19:53 compute-0 podman[338452]: 2025-10-07 14:19:53.866949773 +0000 UTC m=+0.234090425 container create ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:19:53 compute-0 podman[338432]: 2025-10-07 14:19:53.884721513 +0000 UTC m=+0.281721434 container remove e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:19:53 compute-0 podman[338452]: 2025-10-07 14:19:53.789550428 +0000 UTC m=+0.156691110 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:19:53 compute-0 systemd[1]: libpod-conmon-e5ca99c4cfc6f91c0e8dcb13be5f2a88f07b21d0927e7f51f4212a2ba7dd2546.scope: Deactivated successfully.
Oct 07 14:19:53 compute-0 systemd[1]: Started libpod-conmon-ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b.scope.
Oct 07 14:19:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3615b7418b16e855fcbe839d3fd51372dfdd6d425aee36356b1c8448595442f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:53 compute-0 podman[338452]: 2025-10-07 14:19:53.988266838 +0000 UTC m=+0.355407510 container init ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:19:53 compute-0 podman[338452]: 2025-10-07 14:19:53.996067854 +0000 UTC m=+0.363208516 container start ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:19:54 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [NOTICE]   (338519) : New worker (338522) forked
Oct 07 14:19:54 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [NOTICE]   (338519) : Loading success.
Oct 07 14:19:54 compute-0 podman[338535]: 2025-10-07 14:19:54.113886376 +0000 UTC m=+0.049295823 container create ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:19:54 compute-0 systemd[1]: Started libpod-conmon-ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602.scope.
Oct 07 14:19:54 compute-0 podman[338535]: 2025-10-07 14:19:54.091640138 +0000 UTC m=+0.027049605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335be74e9633f81b06a01da7a5964ff4b280edf9a1241d9d9de4e96b160b9d97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335be74e9633f81b06a01da7a5964ff4b280edf9a1241d9d9de4e96b160b9d97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335be74e9633f81b06a01da7a5964ff4b280edf9a1241d9d9de4e96b160b9d97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335be74e9633f81b06a01da7a5964ff4b280edf9a1241d9d9de4e96b160b9d97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:54 compute-0 podman[338535]: 2025-10-07 14:19:54.21740292 +0000 UTC m=+0.152812397 container init ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:19:54 compute-0 podman[338535]: 2025-10-07 14:19:54.2261041 +0000 UTC m=+0.161513557 container start ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:19:54 compute-0 podman[338535]: 2025-10-07 14:19:54.230865786 +0000 UTC m=+0.166275283 container attach ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.369 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846794.3688877, b9440d6f-9236-4208-b50d-4badc845e3cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.370 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] VM Started (Lifecycle Event)
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.390 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.396 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846794.3702216, b9440d6f-9236-4208-b50d-4badc845e3cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.396 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] VM Paused (Lifecycle Event)
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.412 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.415 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.433 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.571 2 DEBUG nova.network.neutron [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updated VIF entry in instance network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.572 2 DEBUG nova.network.neutron [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:54 compute-0 nova_compute[259550]: 2025-10-07 14:19:54.595 2 DEBUG oslo_concurrency.lockutils [req-b4592f14-5b12-40ca-8219-a4eb1c1499f7 req-52539c5f-578a-4f5a-a08e-284eb5fe1718 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:54 compute-0 ceph-mon[74295]: pgmap v1659: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 118 op/s
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]: {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     "0": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "devices": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "/dev/loop3"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             ],
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_name": "ceph_lv0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_size": "21470642176",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "name": "ceph_lv0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "tags": {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_name": "ceph",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.crush_device_class": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.encrypted": "0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_id": "0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.vdo": "0"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             },
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "vg_name": "ceph_vg0"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         }
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     ],
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     "1": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "devices": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "/dev/loop4"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             ],
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_name": "ceph_lv1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_size": "21470642176",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "name": "ceph_lv1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "tags": {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_name": "ceph",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.crush_device_class": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.encrypted": "0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_id": "1",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.vdo": "0"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             },
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "vg_name": "ceph_vg1"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         }
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     ],
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     "2": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "devices": [
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "/dev/loop5"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             ],
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_name": "ceph_lv2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_size": "21470642176",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "name": "ceph_lv2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "tags": {
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.cluster_name": "ceph",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.crush_device_class": "",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.encrypted": "0",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osd_id": "2",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:                 "ceph.vdo": "0"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             },
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "type": "block",
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:             "vg_name": "ceph_vg2"
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:         }
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]:     ]
Oct 07 14:19:55 compute-0 youthful_hofstadter[338551]: }
Oct 07 14:19:55 compute-0 systemd[1]: libpod-ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602.scope: Deactivated successfully.
Oct 07 14:19:55 compute-0 podman[338535]: 2025-10-07 14:19:55.077758219 +0000 UTC m=+1.013167666 container died ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-335be74e9633f81b06a01da7a5964ff4b280edf9a1241d9d9de4e96b160b9d97-merged.mount: Deactivated successfully.
Oct 07 14:19:55 compute-0 podman[338535]: 2025-10-07 14:19:55.138628026 +0000 UTC m=+1.074037473 container remove ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 07 14:19:55 compute-0 systemd[1]: libpod-conmon-ff28314cdf33f6428a72b644f895d868e4560c89d686878022105b4fe2d81602.scope: Deactivated successfully.
Oct 07 14:19:55 compute-0 sudo[338322]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:55 compute-0 sudo[338572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:55 compute-0 sudo[338572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:55 compute-0 sudo[338572]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:55 compute-0 sudo[338597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:19:55 compute-0 sudo[338597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:55 compute-0 sudo[338597]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:55 compute-0 sudo[338622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:55 compute-0 sudo[338622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:55 compute-0 sudo[338622]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:55 compute-0 sudo[338647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:19:55 compute-0 sudo[338647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.452 2 DEBUG nova.compute.manager [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG nova.compute.manager [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Processing event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG nova.compute.manager [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.453 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.454 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.454 2 DEBUG oslo_concurrency.lockutils [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.454 2 DEBUG nova.compute.manager [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] No waiting events found dispatching network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.454 2 WARNING nova.compute.manager [req-b54031ac-0fd7-45a4-aa04-785ef12f3786 req-0a0d92ab-3aa7-47cf-809a-11e84e0a74a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received unexpected event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 for instance with vm_state building and task_state spawning.
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.454 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.459 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.460 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846795.4595013, b9440d6f-9236-4208-b50d-4badc845e3cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.460 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] VM Resumed (Lifecycle Event)
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.463 2 INFO nova.virt.libvirt.driver [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Instance spawned successfully.
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.464 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.487 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.493 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.496 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.497 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.497 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.497 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.498 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.498 2 DEBUG nova.virt.libvirt.driver [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.529 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.590 2 INFO nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Took 9.60 seconds to spawn the instance on the hypervisor.
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.591 2 DEBUG nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.674 2 INFO nova.compute.manager [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Took 10.62 seconds to build instance.
Oct 07 14:19:55 compute-0 nova_compute[259550]: 2025-10-07 14:19:55.692 2 DEBUG oslo_concurrency.lockutils [None req-4679a786-54b8-4e17-97ce-09733c21ba90 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:55 compute-0 podman[338712]: 2025-10-07 14:19:55.792536081 +0000 UTC m=+0.057352397 container create 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:19:55 compute-0 podman[338712]: 2025-10-07 14:19:55.75689761 +0000 UTC m=+0.021713956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Oct 07 14:19:55 compute-0 systemd[1]: Started libpod-conmon-8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99.scope.
Oct 07 14:19:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:55 compute-0 podman[338712]: 2025-10-07 14:19:55.948309505 +0000 UTC m=+0.213125841 container init 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:19:55 compute-0 podman[338712]: 2025-10-07 14:19:55.958579887 +0000 UTC m=+0.223396203 container start 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:19:55 compute-0 hungry_moore[338728]: 167 167
Oct 07 14:19:55 compute-0 systemd[1]: libpod-8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99.scope: Deactivated successfully.
Oct 07 14:19:56 compute-0 podman[338712]: 2025-10-07 14:19:56.061845324 +0000 UTC m=+0.326661660 container attach 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:19:56 compute-0 podman[338712]: 2025-10-07 14:19:56.062293747 +0000 UTC m=+0.327110083 container died 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6ec3605de194bd5cd5cf1729e6852c9670463848f63a3bd755b0f6d6492614-merged.mount: Deactivated successfully.
Oct 07 14:19:56 compute-0 podman[338712]: 2025-10-07 14:19:56.124525031 +0000 UTC m=+0.389341357 container remove 8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:19:56 compute-0 systemd[1]: libpod-conmon-8f6d4bf11b415c52566a22ae2e6d45ea4df87c64c738c98ed570058ddcecfa99.scope: Deactivated successfully.
Oct 07 14:19:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:19:56 compute-0 podman[338753]: 2025-10-07 14:19:56.337773625 +0000 UTC m=+0.060091819 container create edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:19:56 compute-0 systemd[1]: Started libpod-conmon-edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963.scope.
Oct 07 14:19:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ac4afbac5b90387dbcc6b31327bff5e9dd66959f8531ee18c86faaa7745bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ac4afbac5b90387dbcc6b31327bff5e9dd66959f8531ee18c86faaa7745bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ac4afbac5b90387dbcc6b31327bff5e9dd66959f8531ee18c86faaa7745bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ac4afbac5b90387dbcc6b31327bff5e9dd66959f8531ee18c86faaa7745bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:19:56 compute-0 podman[338753]: 2025-10-07 14:19:56.317689113 +0000 UTC m=+0.040007337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:19:56 compute-0 podman[338753]: 2025-10-07 14:19:56.425154642 +0000 UTC m=+0.147472866 container init edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:19:56 compute-0 podman[338753]: 2025-10-07 14:19:56.431219952 +0000 UTC m=+0.153538146 container start edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:19:56 compute-0 podman[338753]: 2025-10-07 14:19:56.434458709 +0000 UTC m=+0.156776933 container attach edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:19:56 compute-0 nova_compute[259550]: 2025-10-07 14:19:56.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:56 compute-0 nova_compute[259550]: 2025-10-07 14:19:56.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:56 compute-0 ceph-mon[74295]: pgmap v1660: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]: {
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_id": 2,
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "type": "bluestore"
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     },
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_id": 1,
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "type": "bluestore"
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     },
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_id": 0,
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:         "type": "bluestore"
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]:     }
Oct 07 14:19:57 compute-0 ecstatic_wing[338771]: }
Oct 07 14:19:57 compute-0 systemd[1]: libpod-edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963.scope: Deactivated successfully.
Oct 07 14:19:57 compute-0 podman[338753]: 2025-10-07 14:19:57.384153226 +0000 UTC m=+1.106471420 container died edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e0ac4afbac5b90387dbcc6b31327bff5e9dd66959f8531ee18c86faaa7745bb-merged.mount: Deactivated successfully.
Oct 07 14:19:57 compute-0 podman[338753]: 2025-10-07 14:19:57.593299921 +0000 UTC m=+1.315618115 container remove edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:57 compute-0 sudo[338647]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:19:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:19:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1eb91da1-721a-46b9-9dea-63f9ffc69b88 does not exist
Oct 07 14:19:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 93bb7393-8d73-4cf7-a25d-98b5dba398dc does not exist
Oct 07 14:19:57 compute-0 systemd[1]: libpod-conmon-edbf586ab8c13db3bb45f2181242604b4cb4a602aec294e162d4e69057bc1963.scope: Deactivated successfully.
Oct 07 14:19:57 compute-0 sudo[338817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:19:57 compute-0 sudo[338817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:57 compute-0 sudo[338817]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:57 compute-0 sudo[338842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:19:57 compute-0 sudo[338842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:19:57 compute-0 sudo[338842]: pam_unix(sudo:session): session closed for user root
Oct 07 14:19:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.990 2 DEBUG nova.compute.manager [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.991 2 DEBUG nova.compute.manager [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing instance network info cache due to event network-changed-8017da49-bbc8-4eae-8b3e-79bb4e587e62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.991 2 DEBUG oslo_concurrency.lockutils [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.991 2 DEBUG oslo_concurrency.lockutils [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:19:57 compute-0 nova_compute[259550]: 2025-10-07 14:19:57.991 2 DEBUG nova.network.neutron [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Refreshing network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.412 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.412 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.432 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.513 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.515 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.524 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.524 2 INFO nova.compute.claims [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.593 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.594 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.594 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.594 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.594 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.595 2 INFO nova.compute.manager [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Terminating instance
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.596 2 DEBUG nova.compute.manager [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:19:58 compute-0 kernel: tap0e8be78f-ea (unregistering): left promiscuous mode
Oct 07 14:19:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:19:58 compute-0 ceph-mon[74295]: pgmap v1661: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 07 14:19:58 compute-0 NetworkManager[44949]: <info>  [1759846798.6387] device (tap0e8be78f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:19:58 compute-0 ovn_controller[151684]: 2025-10-07T14:19:58Z|00765|binding|INFO|Releasing lport 0e8be78f-ea5e-430e-a21b-0be4db278329 from this chassis (sb_readonly=0)
Oct 07 14:19:58 compute-0 ovn_controller[151684]: 2025-10-07T14:19:58Z|00766|binding|INFO|Setting lport 0e8be78f-ea5e-430e-a21b-0be4db278329 down in Southbound
Oct 07 14:19:58 compute-0 ovn_controller[151684]: 2025-10-07T14:19:58Z|00767|binding|INFO|Removing iface tap0e8be78f-ea ovn-installed in OVS
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.658 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:14:c1 10.100.0.4'], port_security=['fa:16:3e:06:14:c1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b9440d6f-9236-4208-b50d-4badc845e3cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7abbe3898e634d999186c395be83c175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '83cc0daa-bd02-4453-aa45-32daca20eb15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6278ad7c-8290-4efd-a23d-6685530f4733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0e8be78f-ea5e-430e-a21b-0be4db278329) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.660 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0e8be78f-ea5e-430e-a21b-0be4db278329 in datapath 94bb140d-c42d-404a-bd0c-9da8b76b4904 unbound from our chassis
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.661 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94bb140d-c42d-404a-bd0c-9da8b76b4904, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf08715e-2158-434c-aac3-76ec819631d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.662 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904 namespace which is not needed anymore
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.680 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:58 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct 07 14:19:58 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Consumed 4.330s CPU time.
Oct 07 14:19:58 compute-0 systemd-machined[214580]: Machine qemu-93-instance-0000004c terminated.
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [NOTICE]   (338519) : haproxy version is 2.8.14-c23fe91
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [NOTICE]   (338519) : path to executable is /usr/sbin/haproxy
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [WARNING]  (338519) : Exiting Master process...
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [WARNING]  (338519) : Exiting Master process...
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [ALERT]    (338519) : Current worker (338522) exited with code 143 (Terminated)
Oct 07 14:19:58 compute-0 neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904[338515]: [WARNING]  (338519) : All workers exited. Exiting... (0)
Oct 07 14:19:58 compute-0 systemd[1]: libpod-ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b.scope: Deactivated successfully.
Oct 07 14:19:58 compute-0 podman[338891]: 2025-10-07 14:19:58.7947817 +0000 UTC m=+0.046586271 container died ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.839 2 INFO nova.virt.libvirt.driver [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Instance destroyed successfully.
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.840 2 DEBUG nova.objects.instance [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lazy-loading 'resources' on Instance uuid b9440d6f-9236-4208-b50d-4badc845e3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3615b7418b16e855fcbe839d3fd51372dfdd6d425aee36356b1c8448595442f-merged.mount: Deactivated successfully.
Oct 07 14:19:58 compute-0 podman[338891]: 2025-10-07 14:19:58.853271785 +0000 UTC m=+0.105076366 container cleanup ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.852 2 DEBUG nova.virt.libvirt.vif [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1177251261',display_name='tempest-ServerAddressesTestJSON-server-1177251261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1177251261',id=76,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:19:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7abbe3898e634d999186c395be83c175',ramdisk_id='',reservation_id='r-gehsi897',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-2009272603',owner_user_name='tempest-ServerAddressesTestJSON-2009272603-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:55Z,user_data=None,user_id='732745f303ad42618c5be16545b23042',uuid=b9440d6f-9236-4208-b50d-4badc845e3cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.853 2 DEBUG nova.network.os_vif_util [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converting VIF {"id": "0e8be78f-ea5e-430e-a21b-0be4db278329", "address": "fa:16:3e:06:14:c1", "network": {"id": "94bb140d-c42d-404a-bd0c-9da8b76b4904", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-183786465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7abbe3898e634d999186c395be83c175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e8be78f-ea", "ovs_interfaceid": "0e8be78f-ea5e-430e-a21b-0be4db278329", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.854 2 DEBUG nova.network.os_vif_util [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.854 2 DEBUG os_vif [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e8be78f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.865 2 INFO os_vif [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:14:c1,bridge_name='br-int',has_traffic_filtering=True,id=0e8be78f-ea5e-430e-a21b-0be4db278329,network=Network(94bb140d-c42d-404a-bd0c-9da8b76b4904),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e8be78f-ea')
Oct 07 14:19:58 compute-0 systemd[1]: libpod-conmon-ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b.scope: Deactivated successfully.
Oct 07 14:19:58 compute-0 podman[338949]: 2025-10-07 14:19:58.917888462 +0000 UTC m=+0.044005643 container remove ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.924 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d837511a-8b01-414f-ad4c-739dd12da3f7]: (4, ('Tue Oct  7 02:19:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904 (ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b)\nac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b\nTue Oct  7 02:19:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904 (ac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b)\nac936d632a7a95ffc540cf7a0417158fa7079db4928d31e9ab4806cafc313a4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bedaf64-720d-4a40-8bd8-ae28b7911421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.926 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bb140d-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:19:58 compute-0 kernel: tap94bb140d-c0: left promiscuous mode
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeeb85a-a692-4447-bc92-e2b3ef2fc5d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:58 compute-0 nova_compute[259550]: 2025-10-07 14:19:58.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.980 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6c0830-1a87-4a4d-a85c-7e3819ee1eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:58.982 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a839faf1-5bf0-4ce4-b4bf-5fd9058310b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:59.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5373a4-f069-4799-afac-de11baa3f13a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737654, 'reachable_time': 37487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338983, 'error': None, 'target': 'ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d94bb140d\x2dc42d\x2d404a\x2dbd0c\x2d9da8b76b4904.mount: Deactivated successfully.
Oct 07 14:19:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:59.009 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94bb140d-c42d-404a-bd0c-9da8b76b4904 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:19:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:19:59.009 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d354cf57-5b1f-4929-8d28-f8157ce2d2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.071 2 DEBUG nova.compute.manager [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-unplugged-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.071 2 DEBUG oslo_concurrency.lockutils [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.072 2 DEBUG oslo_concurrency.lockutils [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.072 2 DEBUG oslo_concurrency.lockutils [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.072 2 DEBUG nova.compute.manager [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] No waiting events found dispatching network-vif-unplugged-0e8be78f-ea5e-430e-a21b-0be4db278329 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.072 2 DEBUG nova.compute.manager [req-e6867632-8dab-42dd-b1e8-67bf1d567796 req-e4cffc87-9eb0-4d37-9da1-0f6ab1ba350f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-unplugged-0e8be78f-ea5e-430e-a21b-0be4db278329 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:19:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:19:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340529722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.194 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.200 2 DEBUG nova.compute.provider_tree [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.215 2 DEBUG nova.scheduler.client.report [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.233 2 DEBUG nova.network.neutron [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updated VIF entry in instance network info cache for port 8017da49-bbc8-4eae-8b3e-79bb4e587e62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.233 2 DEBUG nova.network.neutron [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [{"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.258 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.259 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.262 2 DEBUG oslo_concurrency.lockutils [req-80b87dcc-eca3-4d59-82d3-648ff7506cff req-bcbfefd5-b05a-4be8-8f90-94249e3af5fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.304 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.305 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.333 2 INFO nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.363 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.476 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.478 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.478 2 INFO nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating image(s)
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.501 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.526 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.556 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.559 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.595 2 DEBUG nova.policy [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51afbbb19e4a4e2184c89302ccf45428', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8379283f8a594c2ab94773d2b49cbb30', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.642 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.642 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.643 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.643 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.821 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:19:59 compute-0 nova_compute[259550]: 2025-10-07 14:19:59.824 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af953de2-8586-45f6-8f4b-a09dec17ef5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:19:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 07 14:19:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2340529722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:00.053 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:00 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.436 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af953de2-8586-45f6-8f4b-a09dec17ef5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.496 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] resizing rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.540 2 INFO nova.virt.libvirt.driver [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Deleting instance files /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd_del
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.541 2 INFO nova.virt.libvirt.driver [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Deletion of /var/lib/nova/instances/b9440d6f-9236-4208-b50d-4badc845e3cd_del complete
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.602 2 DEBUG nova.objects.instance [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'migration_context' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.607 2 INFO nova.compute.manager [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Took 2.01 seconds to destroy the instance on the hypervisor.
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.607 2 DEBUG oslo.service.loopingcall [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.607 2 DEBUG nova.compute.manager [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.608 2 DEBUG nova.network.neutron [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.615 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.616 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Ensure instance console log exists: /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.616 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.617 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.617 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:00 compute-0 nova_compute[259550]: 2025-10-07 14:20:00.805 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Successfully created port: 58bc8432-92a0-493d-a022-134fa0354f89 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:20:01 compute-0 ceph-mon[74295]: pgmap v1662: 305 pgs: 305 active+clean; 295 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 07 14:20:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.227 2 DEBUG nova.compute.manager [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.228 2 DEBUG oslo_concurrency.lockutils [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.228 2 DEBUG oslo_concurrency.lockutils [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.229 2 DEBUG oslo_concurrency.lockutils [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.229 2 DEBUG nova.compute.manager [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] No waiting events found dispatching network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.229 2 WARNING nova.compute.manager [req-0bdc9409-6f6a-484a-aeda-f9312d8e811c req-a1a2fadf-7b04-4a52-b4c5-6a0b38b20458 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received unexpected event network-vif-plugged-0e8be78f-ea5e-430e-a21b-0be4db278329 for instance with vm_state active and task_state deleting.
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.363 2 DEBUG nova.network.neutron [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.382 2 INFO nova.compute.manager [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Took 0.77 seconds to deallocate network for instance.
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.409 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.410 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.410 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.410 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.411 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.412 2 INFO nova.compute.manager [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Terminating instance
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.413 2 DEBUG nova.compute.manager [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.434 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.435 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.475 2 DEBUG nova.compute.manager [req-84b5ef91-3aef-4cea-9982-7382c3ebf3ed req-56fe840b-e291-4aba-a89b-1162f23bda56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Received event network-vif-deleted-0e8be78f-ea5e-430e-a21b-0be4db278329 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.530 2 DEBUG oslo_concurrency.processutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:01 compute-0 kernel: tap8017da49-bb (unregistering): left promiscuous mode
Oct 07 14:20:01 compute-0 NetworkManager[44949]: <info>  [1759846801.5969] device (tap8017da49-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 ovn_controller[151684]: 2025-10-07T14:20:01Z|00768|binding|INFO|Releasing lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 from this chassis (sb_readonly=0)
Oct 07 14:20:01 compute-0 ovn_controller[151684]: 2025-10-07T14:20:01Z|00769|binding|INFO|Setting lport 8017da49-bbc8-4eae-8b3e-79bb4e587e62 down in Southbound
Oct 07 14:20:01 compute-0 ovn_controller[151684]: 2025-10-07T14:20:01Z|00770|binding|INFO|Removing iface tap8017da49-bb ovn-installed in OVS
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:01.619 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:ba:6c 10.100.0.7'], port_security=['fa:16:3e:06:ba:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f862c074-27c9-45f5-8f61-3c7cbb05b94a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '743fea8acfcc4e73b1981dc0dcf95f63', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0e3566eb-dd32-483b-8573-b62f6b6a147e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a2b256-a7c8-4510-ae1e-632d23b47983, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8017da49-bbc8-4eae-8b3e-79bb4e587e62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:01.621 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8017da49-bbc8-4eae-8b3e-79bb4e587e62 in datapath f862c074-27c9-45f5-8f61-3c7cbb05b94a unbound from our chassis
Oct 07 14:20:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:01.622 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f862c074-27c9-45f5-8f61-3c7cbb05b94a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:20:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:01.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a875ae1-faf7-4cca-9e4a-c13dd5b8bbdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 07 14:20:01 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000048.scope: Consumed 13.056s CPU time.
Oct 07 14:20:01 compute-0 systemd-machined[214580]: Machine qemu-92-instance-00000048 terminated.
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.718 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Successfully updated port: 58bc8432-92a0-493d-a022-134fa0354f89 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.734 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.735 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.735 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.860 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Instance destroyed successfully.
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.866 2 DEBUG nova.objects.instance [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lazy-loading 'resources' on Instance uuid 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 303 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.909 2 DEBUG nova.virt.libvirt.vif [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-2005089717',display_name='tempest-ServerRescueTestJSONUnderV235-server-2005089717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-2005089717',id=72,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:19:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='743fea8acfcc4e73b1981dc0dcf95f63',ramdisk_id='',reservation_id='r-g9plrghc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-510433047',owner_user_name='tempest-ServerRescueTestJSONUnderV235-510433047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:19:43Z,user_data=None,user_id='b627cc9a6a884d6cb236df7e0154b97a',uuid=1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.910 2 DEBUG nova.network.os_vif_util [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converting VIF {"id": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "address": "fa:16:3e:06:ba:6c", "network": {"id": "f862c074-27c9-45f5-8f61-3c7cbb05b94a", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-181106309-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "743fea8acfcc4e73b1981dc0dcf95f63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8017da49-bb", "ovs_interfaceid": "8017da49-bbc8-4eae-8b3e-79bb4e587e62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.910 2 DEBUG nova.network.os_vif_util [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.911 2 DEBUG os_vif [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8017da49-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:01 compute-0 nova_compute[259550]: 2025-10-07 14:20:01.921 2 INFO os_vif [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:ba:6c,bridge_name='br-int',has_traffic_filtering=True,id=8017da49-bbc8-4eae-8b3e-79bb4e587e62,network=Network(f862c074-27c9-45f5-8f61-3c7cbb05b94a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8017da49-bb')
Oct 07 14:20:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334558627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.054 2 DEBUG oslo_concurrency.processutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.060 2 DEBUG nova.compute.provider_tree [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.079 2 DEBUG nova.scheduler.client.report [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.099 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:02 compute-0 ceph-mon[74295]: pgmap v1663: 305 pgs: 305 active+clean; 303 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 07 14:20:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3334558627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.124 2 INFO nova.scheduler.client.report [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Deleted allocations for instance b9440d6f-9236-4208-b50d-4badc845e3cd
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.183 2 DEBUG oslo_concurrency.lockutils [None req-1a7d8bcc-9f4c-4731-9c4d-ba273fa30e67 732745f303ad42618c5be16545b23042 7abbe3898e634d999186c395be83c175 - - default default] Lock "b9440d6f-9236-4208-b50d-4badc845e3cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.820 2 INFO nova.virt.libvirt.driver [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Deleting instance files /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_del
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.821 2 INFO nova.virt.libvirt.driver [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Deletion of /var/lib/nova/instances/1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e_del complete
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.913 2 INFO nova.compute.manager [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Took 1.50 seconds to destroy the instance on the hypervisor.
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.913 2 DEBUG oslo.service.loopingcall [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.914 2 DEBUG nova.compute.manager [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:20:02 compute-0 nova_compute[259550]: 2025-10-07 14:20:02.914 2 DEBUG nova.network.neutron [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.099 2 DEBUG nova.network.neutron [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Updating instance_info_cache with network_info: [{"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.115 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.116 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance network_info: |[{"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.120 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start _get_guest_xml network_info=[{"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.126 2 WARNING nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.131 2 DEBUG nova.virt.libvirt.host [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.132 2 DEBUG nova.virt.libvirt.host [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.135 2 DEBUG nova.virt.libvirt.host [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.136 2 DEBUG nova.virt.libvirt.host [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.136 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.136 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.136 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.137 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.137 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.137 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.137 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.138 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.138 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.138 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.138 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.138 2 DEBUG nova.virt.hardware [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.141 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.306 2 DEBUG nova.compute.manager [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-changed-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.307 2 DEBUG nova.compute.manager [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Refreshing instance network info cache due to event network-changed-58bc8432-92a0-493d-a022-134fa0354f89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.307 2 DEBUG oslo_concurrency.lockutils [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.308 2 DEBUG oslo_concurrency.lockutils [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.308 2 DEBUG nova.network.neutron [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Refreshing network info cache for port 58bc8432-92a0-493d-a022-134fa0354f89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.384 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.384 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.407 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.482 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.483 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.491 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.492 2 INFO nova.compute.claims [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:20:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370695845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.566 2 DEBUG nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.567 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.567 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.568 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.568 2 DEBUG nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.568 2 DEBUG nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-unplugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.569 2 DEBUG nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.569 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.569 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.569 2 DEBUG oslo_concurrency.lockutils [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.570 2 DEBUG nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] No waiting events found dispatching network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.570 2 WARNING nova.compute.manager [req-1a0fe15e-25fa-4b35-8c5d-93c2f73856a9 req-5181d70b-129d-4dc9-aa9e-96d71317e636 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received unexpected event network-vif-plugged-8017da49-bbc8-4eae-8b3e-79bb4e587e62 for instance with vm_state rescued and task_state deleting.
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.581 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.606 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.610 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1370695845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.686 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.847 2 DEBUG nova.network.neutron [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.863 2 INFO nova.compute.manager [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Took 0.95 seconds to deallocate network for instance.
Oct 07 14:20:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 211 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Oct 07 14:20:03 compute-0 nova_compute[259550]: 2025-10-07 14:20:03.899 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836699505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.066 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.068 2 DEBUG nova.virt.libvirt.vif [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-tempest.common.compute-instance-520514475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:59Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.069 2 DEBUG nova.network.os_vif_util [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.070 2 DEBUG nova.network.os_vif_util [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.071 2 DEBUG nova.objects.instance [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.093 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <uuid>af953de2-8586-45f6-8f4b-a09dec17ef5f</uuid>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <name>instance-0000004d</name>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:name>tempest-tempest.common.compute-instance-520514475</nova:name>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:03</nova:creationTime>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <nova:port uuid="58bc8432-92a0-493d-a022-134fa0354f89">
Oct 07 14:20:04 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="serial">af953de2-8586-45f6-8f4b-a09dec17ef5f</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="uuid">af953de2-8586-45f6-8f4b-a09dec17ef5f</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af953de2-8586-45f6-8f4b-a09dec17ef5f_disk">
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config">
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8c:1d:fb"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <target dev="tap58bc8432-92"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/console.log" append="off"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.094 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Preparing to wait for external event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.095 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.095 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.095 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.096 2 DEBUG nova.virt.libvirt.vif [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-tempest.common.compute-instance-520514475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:19:59Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.096 2 DEBUG nova.network.os_vif_util [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.097 2 DEBUG nova.network.os_vif_util [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.097 2 DEBUG os_vif [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58bc8432-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58bc8432-92, col_values=(('external_ids', {'iface-id': '58bc8432-92a0-493d-a022-134fa0354f89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:1d:fb', 'vm-uuid': 'af953de2-8586-45f6-8f4b-a09dec17ef5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616456320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:04 compute-0 NetworkManager[44949]: <info>  [1759846804.1064] manager: (tap58bc8432-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.113 2 INFO os_vif [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92')
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.135 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.144 2 DEBUG nova.compute.provider_tree [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:04 compute-0 podman[339299]: 2025-10-07 14:20:04.229837478 +0000 UTC m=+0.070224596 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:20:04 compute-0 podman[339300]: 2025-10-07 14:20:04.253983226 +0000 UTC m=+0.094711822 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.302 2 DEBUG nova.scheduler.client.report [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.314 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.315 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.315 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No VIF found with MAC fa:16:3e:8c:1d:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.315 2 INFO nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Using config drive
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.333 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.490 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.491 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.493 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:04 compute-0 ceph-mon[74295]: pgmap v1664: 305 pgs: 305 active+clean; 211 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Oct 07 14:20:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/836699505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2616456320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.828 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:20:04 compute-0 nova_compute[259550]: 2025-10-07 14:20:04.829 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:20:05 compute-0 nova_compute[259550]: 2025-10-07 14:20:05.043 2 INFO nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:20:05 compute-0 nova_compute[259550]: 2025-10-07 14:20:05.391 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:20:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.049 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.051 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.052 2 INFO nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating image(s)
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.078 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.110 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.139 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.145 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.209 2 DEBUG nova.policy [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b99e8c19767d42aa96c7d646cacc3772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a266c4b5f8164bceb621e0e23116c515', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.238 2 DEBUG oslo_concurrency.processutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.283 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.284 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.286 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.286 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.319 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.326 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d932a7ab-839c-48b9-804f-90cc8634e93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.401 2 DEBUG nova.compute.manager [req-0ff8db95-3c41-4993-a9a7-f0f4ed7fe928 req-cdd9cdaa-4756-4ccb-98f9-5168094c9fa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Received event network-vif-deleted-8017da49-bbc8-4eae-8b3e-79bb4e587e62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376647194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.694 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d932a7ab-839c-48b9-804f-90cc8634e93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.731 2 DEBUG oslo_concurrency.processutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.766 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] resizing rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.793 2 DEBUG nova.compute.provider_tree [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.813 2 DEBUG nova.scheduler.client.report [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.846 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.857 2 DEBUG nova.objects.instance [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'migration_context' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.875 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.876 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Ensure instance console log exists: /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.876 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.877 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.877 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.880 2 INFO nova.scheduler.client.report [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Deleted allocations for instance 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.947 2 DEBUG oslo_concurrency.lockutils [None req-82f68109-80e6-4574-b629-52093983804f b627cc9a6a884d6cb236df7e0154b97a 743fea8acfcc4e73b1981dc0dcf95f63 - - default default] Lock "1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:06 compute-0 ceph-mon[74295]: pgmap v1665: 305 pgs: 305 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Oct 07 14:20:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2376647194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.984 2 INFO nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating config drive at /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config
Oct 07 14:20:06 compute-0 nova_compute[259550]: 2025-10-07 14:20:06.989 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2rn0_ue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.125 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2rn0_ue" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.156 2 DEBUG nova.storage.rbd_utils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.160 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.218 2 DEBUG nova.network.neutron [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Updated VIF entry in instance network info cache for port 58bc8432-92a0-493d-a022-134fa0354f89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.219 2 DEBUG nova.network.neutron [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Updating instance_info_cache with network_info: [{"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.237 2 DEBUG oslo_concurrency.lockutils [req-9418b57f-7620-41e1-9793-695f328c8c85 req-be5bf296-5ebd-4bed-b758-cb9a02acaf5d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-af953de2-8586-45f6-8f4b-a09dec17ef5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.334 2 DEBUG oslo_concurrency.processutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.335 2 INFO nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deleting local config drive /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config because it was imported into RBD.
Oct 07 14:20:07 compute-0 kernel: tap58bc8432-92: entered promiscuous mode
Oct 07 14:20:07 compute-0 NetworkManager[44949]: <info>  [1759846807.4138] manager: (tap58bc8432-92): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:07 compute-0 ovn_controller[151684]: 2025-10-07T14:20:07Z|00771|binding|INFO|Claiming lport 58bc8432-92a0-493d-a022-134fa0354f89 for this chassis.
Oct 07 14:20:07 compute-0 ovn_controller[151684]: 2025-10-07T14:20:07Z|00772|binding|INFO|58bc8432-92a0-493d-a022-134fa0354f89: Claiming fa:16:3e:8c:1d:fb 10.100.0.10
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.424 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:1d:fb 10.100.0.10'], port_security=['fa:16:3e:8c:1d:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af953de2-8586-45f6-8f4b-a09dec17ef5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b57b9fa1-4ab8-4b42-afe4-8fb0bccad4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=58bc8432-92a0-493d-a022-134fa0354f89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.427 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 58bc8432-92a0-493d-a022-134fa0354f89 in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.429 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:07 compute-0 ovn_controller[151684]: 2025-10-07T14:20:07Z|00773|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 ovn-installed in OVS
Oct 07 14:20:07 compute-0 ovn_controller[151684]: 2025-10-07T14:20:07Z|00774|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 up in Southbound
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a560e96-707d-4ed6-a51f-ecf770c77d4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 systemd-machined[214580]: New machine qemu-94-instance-0000004d.
Oct 07 14:20:07 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004d.
Oct 07 14:20:07 compute-0 systemd-udevd[339598]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.486 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5a48d5-dec0-4995-8d5b-11a669a4f45e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.490 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c35224-7e44-4bf7-a853-aa652d8036c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 NetworkManager[44949]: <info>  [1759846807.4983] device (tap58bc8432-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:07 compute-0 NetworkManager[44949]: <info>  [1759846807.4995] device (tap58bc8432-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.526 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[21fe69c8-466c-4013-a295-8404488968a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2a902b53-4fd1-4aee-bf43-8fc341d320ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339608, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.566 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[178dec39-d05f-4cef-aa2c-0e341bbbd34a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734386, 'tstamp': 734386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339609, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734390, 'tstamp': 734390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339609, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.568 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:07 compute-0 nova_compute[259550]: 2025-10-07 14:20:07.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.572 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.572 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:07.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.233 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Successfully created port: 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.410 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846808.4087956, af953de2-8586-45f6-8f4b-a09dec17ef5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.411 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Started (Lifecycle Event)
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.433 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.438 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846808.4095542, af953de2-8586-45f6-8f4b-a09dec17ef5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.438 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Paused (Lifecycle Event)
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.516 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.521 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.558 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.864 2 DEBUG nova.compute.manager [req-7d63b78c-95d5-49ef-88c7-10fcb5ad5456 req-5e30eeee-07f2-4797-9978-6dc78398adf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.865 2 DEBUG oslo_concurrency.lockutils [req-7d63b78c-95d5-49ef-88c7-10fcb5ad5456 req-5e30eeee-07f2-4797-9978-6dc78398adf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.865 2 DEBUG oslo_concurrency.lockutils [req-7d63b78c-95d5-49ef-88c7-10fcb5ad5456 req-5e30eeee-07f2-4797-9978-6dc78398adf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.865 2 DEBUG oslo_concurrency.lockutils [req-7d63b78c-95d5-49ef-88c7-10fcb5ad5456 req-5e30eeee-07f2-4797-9978-6dc78398adf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.866 2 DEBUG nova.compute.manager [req-7d63b78c-95d5-49ef-88c7-10fcb5ad5456 req-5e30eeee-07f2-4797-9978-6dc78398adf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Processing event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.866 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.871 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846808.8709924, af953de2-8586-45f6-8f4b-a09dec17ef5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.872 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Resumed (Lifecycle Event)
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.874 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.882 2 INFO nova.virt.libvirt.driver [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance spawned successfully.
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.883 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.894 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.901 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.907 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.908 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.908 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.909 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.910 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.910 2 DEBUG nova.virt.libvirt.driver [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.914 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Successfully updated port: 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.924 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.944 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.945 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.945 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:08 compute-0 ceph-mon[74295]: pgmap v1666: 305 pgs: 305 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.982 2 INFO nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Took 9.51 seconds to spawn the instance on the hypervisor.
Oct 07 14:20:08 compute-0 nova_compute[259550]: 2025-10-07 14:20:08.983 2 DEBUG nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.029 2 DEBUG nova.compute.manager [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.029 2 DEBUG nova.compute.manager [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.030 2 DEBUG oslo_concurrency.lockutils [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.052 2 INFO nova.compute.manager [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Took 10.57 seconds to build instance.
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.069 2 DEBUG oslo_concurrency.lockutils [None req-bfd04c7c-41cb-4115-9f9c-9a01b73533b6 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.108 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:09 compute-0 nova_compute[259550]: 2025-10-07 14:20:09.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:09.182 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:09.184 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:20:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 208 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 247 op/s
Oct 07 14:20:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:10.186 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:10 compute-0 nova_compute[259550]: 2025-10-07 14:20:10.916 2 DEBUG nova.network.neutron [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:10 compute-0 ceph-mon[74295]: pgmap v1667: 305 pgs: 305 active+clean; 208 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 247 op/s
Oct 07 14:20:11 compute-0 ovn_controller[151684]: 2025-10-07T14:20:11Z|00775|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.358 2 DEBUG nova.compute.manager [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.358 2 DEBUG oslo_concurrency.lockutils [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.358 2 DEBUG oslo_concurrency.lockutils [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.359 2 DEBUG oslo_concurrency.lockutils [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.359 2 DEBUG nova.compute.manager [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No waiting events found dispatching network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.359 2 WARNING nova.compute.manager [req-c982c2e6-8f2e-4b4c-af93-088410115be5 req-54e244be-7c96-4d93-8a91-5bcf4584545a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state active and task_state None.
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.361 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.362 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance network_info: |[{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.362 2 DEBUG oslo_concurrency.lockutils [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.362 2 DEBUG nova.network.neutron [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.374 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start _get_guest_xml network_info=[{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.383 2 WARNING nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.446 2 DEBUG nova.virt.libvirt.host [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.447 2 DEBUG nova.virt.libvirt.host [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.453 2 DEBUG nova.virt.libvirt.host [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.454 2 DEBUG nova.virt.libvirt.host [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.454 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.454 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.455 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.455 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.455 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.456 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.456 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.456 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.456 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.457 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.457 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.457 2 DEBUG nova.virt.hardware [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.464 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 215 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Oct 07 14:20:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623023066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.926 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.948 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:11 compute-0 nova_compute[259550]: 2025-10-07 14:20:11.952 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3623023066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382968039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.446 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.448 2 DEBUG nova.virt.libvirt.vif [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.449 2 DEBUG nova.network.os_vif_util [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.450 2 DEBUG nova.network.os_vif_util [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.451 2 DEBUG nova.objects.instance [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_devices' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.592 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <uuid>d932a7ab-839c-48b9-804f-90cc8634e93b</uuid>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <name>instance-0000004e</name>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherB-server-32969053</nova:name>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:11</nova:creationTime>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:user uuid="b99e8c19767d42aa96c7d646cacc3772">tempest-ServerActionsTestOtherB-1033607333-project-member</nova:user>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:project uuid="a266c4b5f8164bceb621e0e23116c515">tempest-ServerActionsTestOtherB-1033607333</nova:project>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <nova:port uuid="6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8">
Oct 07 14:20:12 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="serial">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="uuid">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk">
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config">
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:12 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:71:12:7c"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <target dev="tap6e1eba9e-f1"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log" append="off"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:12 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:12 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:12 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:12 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:12 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.593 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Preparing to wait for external event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.593 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.594 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.594 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.595 2 DEBUG nova.virt.libvirt.vif [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.595 2 DEBUG nova.network.os_vif_util [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.596 2 DEBUG nova.network.os_vif_util [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.596 2 DEBUG os_vif [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e1eba9e-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e1eba9e-f1, col_values=(('external_ids', {'iface-id': '6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:12:7c', 'vm-uuid': 'd932a7ab-839c-48b9-804f-90cc8634e93b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:12 compute-0 NetworkManager[44949]: <info>  [1759846812.6043] manager: (tap6e1eba9e-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.617 2 INFO os_vif [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.716 2 INFO nova.compute.manager [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Rebuilding instance
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.745 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.746 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.746 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No VIF found with MAC fa:16:3e:71:12:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.746 2 INFO nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Using config drive
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.774 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:12 compute-0 nova_compute[259550]: 2025-10-07 14:20:12.952 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'trusted_certs' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.011 2 DEBUG nova.compute.manager [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:13 compute-0 ceph-mon[74295]: pgmap v1668: 305 pgs: 305 active+clean; 215 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Oct 07 14:20:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2382968039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.044 2 INFO nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating config drive at /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.050 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5tsrykm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.103 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_requests' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.119 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.132 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.143 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'migration_context' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.155 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.160 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.216 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5tsrykm" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.258 2 DEBUG nova.storage.rbd_utils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.262 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.465 2 DEBUG oslo_concurrency.processutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.466 2 INFO nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting local config drive /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config because it was imported into RBD.
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.5454] manager: (tap6e1eba9e-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Oct 07 14:20:13 compute-0 kernel: tap6e1eba9e-f1: entered promiscuous mode
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 ovn_controller[151684]: 2025-10-07T14:20:13Z|00776|binding|INFO|Claiming lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for this chassis.
Oct 07 14:20:13 compute-0 ovn_controller[151684]: 2025-10-07T14:20:13Z|00777|binding|INFO|6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8: Claiming fa:16:3e:71:12:7c 10.100.0.3
Oct 07 14:20:13 compute-0 systemd-udevd[339786]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:13 compute-0 ovn_controller[151684]: 2025-10-07T14:20:13Z|00778|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 ovn-installed in OVS
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.590 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:13 compute-0 ovn_controller[151684]: 2025-10-07T14:20:13Z|00779|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 up in Southbound
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.591 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 bound to our chassis
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.5946] device (tap6e1eba9e-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:13 compute-0 systemd-machined[214580]: New machine qemu-95-instance-0000004e.
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.6019] device (tap6e1eba9e-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.601 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:20:13 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004e.
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07573b32-6d2e-4f47-a83c-0d89bc2d0d16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.619 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebea7a9d-f1 in ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.621 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebea7a9d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98589d88-5c60-4348-ad36-32cbfb98d0e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4624f4-e504-4d57-a339-4abf15e584e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.646 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[819a956e-4306-4c3a-a610-9eb43810aeb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0917dcf-b975-44f8-8304-93a0bd3d99c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.701 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[852f9567-d9af-4463-956b-35c28147d3ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.7090] manager: (tapebea7a9d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/336)
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.710 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[716e1e4b-ffe9-467b-80d5-17c9ced66119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.742 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[050995a6-4bdc-4f4d-9033-f4435221a14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.746 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f133ac41-aa2a-4adf-a2b3-4212019b05b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.7714] device (tapebea7a9d-f0): carrier: link connected
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.781 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e40e570a-8af7-4c24-af99-87eeec6c58c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.803 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02ae9a4d-537e-4885-b9b7-a0f56996b8ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339821, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71bb6292-3561-4c9a-aeda-810986204e28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:58fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739733, 'tstamp': 739733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339822, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.836 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846798.8346202, b9440d6f-9236-4208-b50d-4badc845e3cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.837 2 INFO nova.compute.manager [-] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] VM Stopped (Lifecycle Event)
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.842 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07479fed-f89b-4b48-9e0a-cedc85d96f66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339823, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.868 2 DEBUG nova.compute.manager [None req-820a6316-6cfa-486b-b22e-79522bff837d - - - - - -] [instance: b9440d6f-9236-4208-b50d-4badc845e3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.879 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf07a4a2-4cff-482b-a85b-a1768639f775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 215 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 193 op/s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.954 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[434cac28-6c89-4ee5-b230-f18eaf0421c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 NetworkManager[44949]: <info>  [1759846813.9592] manager: (tapebea7a9d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 07 14:20:13 compute-0 kernel: tapebea7a9d-f0: entered promiscuous mode
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:13 compute-0 ovn_controller[151684]: 2025-10-07T14:20:13Z|00780|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.982 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebea7a9d-f576-4b9e-8316-859c29b06dc2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebea7a9d-f576-4b9e-8316-859c29b06dc2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.983 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0af9c7ce-ccfc-4d02-b9fc-3add2cecb83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.984 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/ebea7a9d-f576-4b9e-8316-859c29b06dc2.pid.haproxy
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:20:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:13.986 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'env', 'PROCESS_TAG=haproxy-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebea7a9d-f576-4b9e-8316-859c29b06dc2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:20:13 compute-0 nova_compute[259550]: 2025-10-07 14:20:13.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:14 compute-0 ceph-mon[74295]: pgmap v1669: 305 pgs: 305 active+clean; 215 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 193 op/s
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.150 2 DEBUG nova.network.neutron [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.151 2 DEBUG nova.network.neutron [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.167 2 DEBUG oslo_concurrency.lockutils [req-c32bc221-14c5-4c6a-bb86-fe2845e3cfa1 req-7a4c563b-ed06-4147-8bc4-58cc2b5b2f1b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:14 compute-0 podman[339862]: 2025-10-07 14:20:14.356982625 +0000 UTC m=+0.049555860 container create d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 14:20:14 compute-0 systemd[1]: Started libpod-conmon-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5.scope.
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.385 2 DEBUG nova.compute.manager [req-8ef636ba-e9c0-47b7-a235-ca263a92c31c req-9e056006-9193-43d9-95db-4265227796df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.386 2 DEBUG oslo_concurrency.lockutils [req-8ef636ba-e9c0-47b7-a235-ca263a92c31c req-9e056006-9193-43d9-95db-4265227796df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.386 2 DEBUG oslo_concurrency.lockutils [req-8ef636ba-e9c0-47b7-a235-ca263a92c31c req-9e056006-9193-43d9-95db-4265227796df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.387 2 DEBUG oslo_concurrency.lockutils [req-8ef636ba-e9c0-47b7-a235-ca263a92c31c req-9e056006-9193-43d9-95db-4265227796df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.387 2 DEBUG nova.compute.manager [req-8ef636ba-e9c0-47b7-a235-ca263a92c31c req-9e056006-9193-43d9-95db-4265227796df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Processing event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcdee683237114ce949886d137839ba727ebc7082b9a6e07603460b0f5ad933/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:14 compute-0 podman[339862]: 2025-10-07 14:20:14.332284372 +0000 UTC m=+0.024857627 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:20:14 compute-0 podman[339862]: 2025-10-07 14:20:14.428232357 +0000 UTC m=+0.120805612 container init d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:20:14 compute-0 podman[339862]: 2025-10-07 14:20:14.433943878 +0000 UTC m=+0.126517113 container start d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:20:14 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : New worker (339918) forked
Oct 07 14:20:14 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : Loading success.
Oct 07 14:20:14 compute-0 ovn_controller[151684]: 2025-10-07T14:20:14Z|00781|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:20:14 compute-0 ovn_controller[151684]: 2025-10-07T14:20:14Z|00782|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.922 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846814.9222288, d932a7ab-839c-48b9-804f-90cc8634e93b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.923 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Started (Lifecycle Event)
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.924 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.928 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.932 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance spawned successfully.
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.932 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.942 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.948 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.954 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.954 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.955 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.955 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.956 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.956 2 DEBUG nova.virt.libvirt.driver [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.985 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.986 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846814.922379, d932a7ab-839c-48b9-804f-90cc8634e93b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:14 compute-0 nova_compute[259550]: 2025-10-07 14:20:14.986 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Paused (Lifecycle Event)
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.018 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.022 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846814.9274607, d932a7ab-839c-48b9-804f-90cc8634e93b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.022 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Resumed (Lifecycle Event)
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.027 2 INFO nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 8.98 seconds to spawn the instance on the hypervisor.
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.027 2 DEBUG nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.036 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.039 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.066 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.108 2 INFO nova.compute.manager [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 11.65 seconds to build instance.
Oct 07 14:20:15 compute-0 nova_compute[259550]: 2025-10-07 14:20:15.127 2 DEBUG oslo_concurrency.lockutils [None req-446b03f8-f2f5-41b1-aed3-7a894014a3ee b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 07 14:20:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.481 2 DEBUG nova.compute.manager [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.482 2 DEBUG oslo_concurrency.lockutils [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.483 2 DEBUG oslo_concurrency.lockutils [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.483 2 DEBUG oslo_concurrency.lockutils [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.484 2 DEBUG nova.compute.manager [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.484 2 WARNING nova.compute.manager [req-74cfd40b-8a03-486d-a777-40f1cb7e6add req-39767ae2-8ced-4d88-98c6-735512d65131 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state active and task_state None.
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.855 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846801.8531277, 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.856 2 INFO nova.compute.manager [-] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] VM Stopped (Lifecycle Event)
Oct 07 14:20:16 compute-0 nova_compute[259550]: 2025-10-07 14:20:16.884 2 DEBUG nova.compute.manager [None req-cb993d5f-6f60-4769-a0b8-9be34309fc34 - - - - - -] [instance: 1aa0b667-a2f7-4a13-a679-ca1eee5a0f8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:16 compute-0 ceph-mon[74295]: pgmap v1670: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 07 14:20:17 compute-0 nova_compute[259550]: 2025-10-07 14:20:17.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 07 14:20:18 compute-0 podman[339927]: 2025-10-07 14:20:18.091263193 +0000 UTC m=+0.073356689 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 07 14:20:18 compute-0 podman[339928]: 2025-10-07 14:20:18.150005375 +0000 UTC m=+0.136910308 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 14:20:18 compute-0 nova_compute[259550]: 2025-10-07 14:20:18.599 2 DEBUG nova.compute.manager [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:18 compute-0 nova_compute[259550]: 2025-10-07 14:20:18.599 2 DEBUG nova.compute.manager [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:18 compute-0 nova_compute[259550]: 2025-10-07 14:20:18.600 2 DEBUG oslo_concurrency.lockutils [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:18 compute-0 nova_compute[259550]: 2025-10-07 14:20:18.600 2 DEBUG oslo_concurrency.lockutils [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:18 compute-0 nova_compute[259550]: 2025-10-07 14:20:18.600 2 DEBUG nova.network.neutron [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:18 compute-0 ceph-mon[74295]: pgmap v1671: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 07 14:20:19 compute-0 nova_compute[259550]: 2025-10-07 14:20:19.668 2 DEBUG nova.network.neutron [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:19 compute-0 nova_compute[259550]: 2025-10-07 14:20:19.669 2 DEBUG nova.network.neutron [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:19 compute-0 nova_compute[259550]: 2025-10-07 14:20:19.691 2 DEBUG oslo_concurrency.lockutils [req-0bac4299-ab02-4a36-b4b9-20ad8ec5dc87 req-9cab07a1-6bf2-4cf7-95f0-a6381e8c4031 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Oct 07 14:20:20 compute-0 nova_compute[259550]: 2025-10-07 14:20:20.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:21 compute-0 ceph-mon[74295]: pgmap v1672: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.160398) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821160456, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 251, "total_data_size": 2163294, "memory_usage": 2214352, "flush_reason": "Manual Compaction"}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 07 14:20:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821172075, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2129229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33717, "largest_seqno": 35224, "table_properties": {"data_size": 2122379, "index_size": 3857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15357, "raw_average_key_size": 20, "raw_value_size": 2108283, "raw_average_value_size": 2781, "num_data_blocks": 172, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846682, "oldest_key_time": 1759846682, "file_creation_time": 1759846821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 11699 microseconds, and 5668 cpu microseconds.
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.172104) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2129229 bytes OK
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.172119) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.173828) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.173840) EVENT_LOG_v1 {"time_micros": 1759846821173836, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.173853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2156591, prev total WAL file size 2156591, number of live WAL files 2.
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.174550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2079KB)], [74(9224KB)]
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821174583, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11575331, "oldest_snapshot_seqno": -1}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5961 keys, 9852365 bytes, temperature: kUnknown
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821228525, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9852365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9810563, "index_size": 25820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14917, "raw_key_size": 150676, "raw_average_key_size": 25, "raw_value_size": 9701723, "raw_average_value_size": 1627, "num_data_blocks": 1047, "num_entries": 5961, "num_filter_entries": 5961, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.228790) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9852365 bytes
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.230627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.2 rd, 182.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 6475, records dropped: 514 output_compression: NoCompression
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.230646) EVENT_LOG_v1 {"time_micros": 1759846821230637, "job": 42, "event": "compaction_finished", "compaction_time_micros": 54039, "compaction_time_cpu_micros": 26123, "output_level": 6, "num_output_files": 1, "total_output_size": 9852365, "num_input_records": 6475, "num_output_records": 5961, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821231153, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846821232857, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.174446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.232988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.232995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.232997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.233000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:20:21.233002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:20:21 compute-0 ovn_controller[151684]: 2025-10-07T14:20:21Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:1d:fb 10.100.0.10
Oct 07 14:20:21 compute-0 ovn_controller[151684]: 2025-10-07T14:20:21Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:1d:fb 10.100.0.10
Oct 07 14:20:21 compute-0 nova_compute[259550]: 2025-10-07 14:20:21.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 216 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Oct 07 14:20:22 compute-0 ceph-mon[74295]: pgmap v1673: 305 pgs: 305 active+clean; 216 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Oct 07 14:20:22 compute-0 nova_compute[259550]: 2025-10-07 14:20:22.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:20:22
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:20:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:20:23 compute-0 nova_compute[259550]: 2025-10-07 14:20:23.207 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:20:23 compute-0 nova_compute[259550]: 2025-10-07 14:20:23.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 246 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 07 14:20:24 compute-0 ceph-mon[74295]: pgmap v1674: 305 pgs: 305 active+clean; 246 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 07 14:20:25 compute-0 kernel: tap58bc8432-92 (unregistering): left promiscuous mode
Oct 07 14:20:25 compute-0 NetworkManager[44949]: <info>  [1759846825.4951] device (tap58bc8432-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:25 compute-0 ovn_controller[151684]: 2025-10-07T14:20:25Z|00783|binding|INFO|Releasing lport 58bc8432-92a0-493d-a022-134fa0354f89 from this chassis (sb_readonly=0)
Oct 07 14:20:25 compute-0 ovn_controller[151684]: 2025-10-07T14:20:25Z|00784|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 down in Southbound
Oct 07 14:20:25 compute-0 ovn_controller[151684]: 2025-10-07T14:20:25Z|00785|binding|INFO|Removing iface tap58bc8432-92 ovn-installed in OVS
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.558 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:1d:fb 10.100.0.10'], port_security=['fa:16:3e:8c:1d:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af953de2-8586-45f6-8f4b-a09dec17ef5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b57b9fa1-4ab8-4b42-afe4-8fb0bccad4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=58bc8432-92a0-493d-a022-134fa0354f89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.560 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 58bc8432-92a0-493d-a022-134fa0354f89 in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.561 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:25 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 07 14:20:25 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004d.scope: Consumed 13.474s CPU time.
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 systemd-machined[214580]: Machine qemu-94-instance-0000004d terminated.
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7387214-6860-4039-aaa2-c124852475d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.614 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4250f8a1-dbf1-4f48-a6fa-bb6d9147cd01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.616 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6509f7b8-ad10-46de-b7bb-de33d9d2eace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.653 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[df9c4f8c-d936-4f6d-a81d-3b8ecdde78bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.678 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de012f5e-f941-4426-8965-b94611380fd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339984, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6101734-e5d7-4f18-8e72-616138cb9b7a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734386, 'tstamp': 734386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339985, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734390, 'tstamp': 734390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339985, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:25 compute-0 nova_compute[259550]: 2025-10-07 14:20:25.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:25.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 248 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 07 14:20:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.222 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance shutdown successfully after 13 seconds.
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.230 2 INFO nova.virt.libvirt.driver [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance destroyed successfully.
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.236 2 INFO nova.virt.libvirt.driver [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance destroyed successfully.
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.237 2 DEBUG nova.virt.libvirt.vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-ServerActionsTestJSON-server-254513888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:11Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.238 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.239 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.239 2 DEBUG os_vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58bc8432-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.254 2 INFO os_vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92')
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.820 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deleting instance files /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f_del
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.821 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deletion of /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f_del complete
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.950 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.950 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating image(s)
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.970 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:26 compute-0 ceph-mon[74295]: pgmap v1675: 305 pgs: 305 active+clean; 248 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 07 14:20:26 compute-0 nova_compute[259550]: 2025-10-07 14:20:26.995 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.016 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.020 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.094 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.095 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.095 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.096 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.116 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.120 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af953de2-8586-45f6-8f4b-a09dec17ef5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.435 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af953de2-8586-45f6-8f4b-a09dec17ef5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:27 compute-0 ovn_controller[151684]: 2025-10-07T14:20:27Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:12:7c 10.100.0.3
Oct 07 14:20:27 compute-0 ovn_controller[151684]: 2025-10-07T14:20:27Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:12:7c 10.100.0.3
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.511 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] resizing rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.621 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.622 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Ensure instance console log exists: /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.622 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.622 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.623 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.625 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start _get_guest_xml network_info=[{"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.628 2 WARNING nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.640 2 DEBUG nova.virt.libvirt.host [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.640 2 DEBUG nova.virt.libvirt.host [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.644 2 DEBUG nova.virt.libvirt.host [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.644 2 DEBUG nova.virt.libvirt.host [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.645 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.645 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.645 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.646 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.647 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.647 2 DEBUG nova.virt.hardware [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.647 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'vcpu_model' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.665 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:27 compute-0 nova_compute[259550]: 2025-10-07 14:20:27.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 248 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 07 14:20:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504744027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.155 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.180 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.184 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003725013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.627 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.628 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.632 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.634 2 DEBUG nova.virt.libvirt.vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-ServerActionsTestJSON-server-254513888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:26Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.634 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.636 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.640 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <uuid>af953de2-8586-45f6-8f4b-a09dec17ef5f</uuid>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <name>instance-0000004d</name>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-254513888</nova:name>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:27</nova:creationTime>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <nova:port uuid="58bc8432-92a0-493d-a022-134fa0354f89">
Oct 07 14:20:28 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="serial">af953de2-8586-45f6-8f4b-a09dec17ef5f</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="uuid">af953de2-8586-45f6-8f4b-a09dec17ef5f</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af953de2-8586-45f6-8f4b-a09dec17ef5f_disk">
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config">
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:28 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8c:1d:fb"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <target dev="tap58bc8432-92"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/console.log" append="off"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:28 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:28 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:28 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:28 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:28 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.641 2 DEBUG nova.compute.manager [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Preparing to wait for external event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.641 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.642 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.642 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.643 2 DEBUG nova.virt.libvirt.vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-ServerActionsTestJSON-server-254513888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:26Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.643 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.644 2 DEBUG nova.network.os_vif_util [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.644 2 DEBUG os_vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.652 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58bc8432-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58bc8432-92, col_values=(('external_ids', {'iface-id': '58bc8432-92a0-493d-a022-134fa0354f89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:1d:fb', 'vm-uuid': 'af953de2-8586-45f6-8f4b-a09dec17ef5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:28 compute-0 NetworkManager[44949]: <info>  [1759846828.6624] manager: (tap58bc8432-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.667 2 INFO os_vif [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92')
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.724 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.724 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.724 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] No VIF found with MAC fa:16:3e:8c:1d:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.724 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Using config drive
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.743 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.750 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.750 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.757 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.757 2 INFO nova.compute.claims [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.763 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'ec2_ids' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.789 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'keypairs' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:28 compute-0 nova_compute[259550]: 2025-10-07 14:20:28.927 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:28 compute-0 ceph-mon[74295]: pgmap v1676: 305 pgs: 305 active+clean; 248 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 07 14:20:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2504744027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4003725013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.100 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Creating config drive at /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.106 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0uontqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.245 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0uontqk" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.273 2 DEBUG nova.storage.rbd_utils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] rbd image af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.277 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624222436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.371 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.379 2 DEBUG nova.compute.provider_tree [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.393 2 DEBUG nova.scheduler.client.report [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.414 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.415 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.426 2 DEBUG oslo_concurrency.processutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config af953de2-8586-45f6-8f4b-a09dec17ef5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.426 2 INFO nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deleting local config drive /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f/disk.config because it was imported into RBD.
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.452 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.453 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.468 2 INFO nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:20:29 compute-0 kernel: tap58bc8432-92: entered promiscuous mode
Oct 07 14:20:29 compute-0 ovn_controller[151684]: 2025-10-07T14:20:29Z|00786|binding|INFO|Claiming lport 58bc8432-92a0-493d-a022-134fa0354f89 for this chassis.
Oct 07 14:20:29 compute-0 ovn_controller[151684]: 2025-10-07T14:20:29Z|00787|binding|INFO|58bc8432-92a0-493d-a022-134fa0354f89: Claiming fa:16:3e:8c:1d:fb 10.100.0.10
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.487 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:29 compute-0 NetworkManager[44949]: <info>  [1759846829.4943] manager: (tap58bc8432-92): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.495 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:1d:fb 10.100.0.10'], port_security=['fa:16:3e:8c:1d:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af953de2-8586-45f6-8f4b-a09dec17ef5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b57b9fa1-4ab8-4b42-afe4-8fb0bccad4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=58bc8432-92a0-493d-a022-134fa0354f89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.496 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 58bc8432-92a0-493d-a022-134fa0354f89 in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.498 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:29 compute-0 ovn_controller[151684]: 2025-10-07T14:20:29Z|00788|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 ovn-installed in OVS
Oct 07 14:20:29 compute-0 ovn_controller[151684]: 2025-10-07T14:20:29Z|00789|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 up in Southbound
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.515 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45103aa0-a2f2-4cdf-9a37-78ffb3932d5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 systemd-machined[214580]: New machine qemu-96-instance-0000004d.
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.541 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[442bf382-889d-414b-8eae-0b07e841a1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.545 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aae20192-f511-4ee5-b155-f6159b99d073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Oct 07 14:20:29 compute-0 systemd-udevd[340346]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.568 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.569 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.570 2 INFO nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Creating image(s)
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.574 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[055eea71-361b-4a5d-ad92-356b1ece0c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 NetworkManager[44949]: <info>  [1759846829.5854] device (tap58bc8432-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:29 compute-0 NetworkManager[44949]: <info>  [1759846829.5863] device (tap58bc8432-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39d63d7a-bc5d-4859-99de-54cde63b2e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340352, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.596 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.611 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01ddf3a4-e474-4ec7-9034-25967cc83082]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734386, 'tstamp': 734386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340372, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734390, 'tstamp': 734390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340372, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.613 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.615 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.616 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.616 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:29.616 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.630 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.653 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.657 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.729 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.729 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.730 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.730 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.749 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:29 compute-0 nova_compute[259550]: 2025-10-07 14:20:29.755 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e4daf401-270a-4e5d-9849-ba90ed06e97c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 246 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Oct 07 14:20:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/624222436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.091 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e4daf401-270a-4e5d-9849-ba90ed06e97c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.153 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] resizing rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.187 2 DEBUG nova.policy [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5f01a69322de4fe7b1825ca0af51e47a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d0f2a5220d741d6b9adf50a4787d2d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.252 2 DEBUG nova.objects.instance [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lazy-loading 'migration_context' on Instance uuid e4daf401-270a-4e5d-9849-ba90ed06e97c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.274 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.275 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Ensure instance console log exists: /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.276 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.276 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.276 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.429 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for af953de2-8586-45f6-8f4b-a09dec17ef5f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.429 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846830.4278853, af953de2-8586-45f6-8f4b-a09dec17ef5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.429 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Started (Lifecycle Event)
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.451 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.456 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846830.431474, af953de2-8586-45f6-8f4b-a09dec17ef5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.456 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Paused (Lifecycle Event)
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.477 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.481 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.502 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:20:30 compute-0 nova_compute[259550]: 2025-10-07 14:20:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:31 compute-0 ceph-mon[74295]: pgmap v1677: 305 pgs: 305 active+clean; 246 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Oct 07 14:20:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.605 2 DEBUG nova.compute.manager [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.606 2 DEBUG oslo_concurrency.lockutils [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.606 2 DEBUG oslo_concurrency.lockutils [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.606 2 DEBUG oslo_concurrency.lockutils [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.607 2 DEBUG nova.compute.manager [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No event matching network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 in dict_keys([('network-vif-plugged', '58bc8432-92a0-493d-a022-134fa0354f89')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.607 2 WARNING nova.compute.manager [req-1b7f0615-9a8e-4a8f-884a-c4f228f7f955 req-5d7b8d25-e826-4f64-9f0c-6a8a4ca7cca3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:20:31 compute-0 nova_compute[259550]: 2025-10-07 14:20:31.657 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Successfully created port: a9982148-39f9-4c14-b0b3-24209c8ba700 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:20:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 260 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.0 MiB/s wr, 206 op/s
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002040117088298496 of space, bias 1.0, pg target 0.6120351264895487 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:20:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:20:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:20:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090223137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:20:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:20:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090223137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:20:32 compute-0 nova_compute[259550]: 2025-10-07 14:20:32.963 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:32 compute-0 nova_compute[259550]: 2025-10-07 14:20:32.964 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:32 compute-0 nova_compute[259550]: 2025-10-07 14:20:32.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.005 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:20:33 compute-0 ceph-mon[74295]: pgmap v1678: 305 pgs: 305 active+clean; 260 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.0 MiB/s wr, 206 op/s
Oct 07 14:20:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4090223137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:20:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4090223137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.024 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Successfully updated port: a9982148-39f9-4c14-b0b3-24209c8ba700 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.157 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.158 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquired lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.158 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.186 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.186 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.195 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.196 2 INFO nova.compute.claims [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.485 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.536 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.713 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.714 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.715 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.715 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.715 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Processing event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.715 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.716 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.716 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.716 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.717 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No waiting events found dispatching network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.717 2 WARNING nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.717 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.717 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.718 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.718 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.718 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No waiting events found dispatching network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.719 2 WARNING nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.719 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-changed-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.719 2 DEBUG nova.compute.manager [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Refreshing instance network info cache due to event network-changed-a9982148-39f9-4c14-b0b3-24209c8ba700. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.720 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.720 2 DEBUG nova.compute.manager [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.734 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.735 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846833.7336998, af953de2-8586-45f6-8f4b-a09dec17ef5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.736 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Resumed (Lifecycle Event)
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.756 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.768 2 INFO nova.virt.libvirt.driver [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance spawned successfully.
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.769 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.774 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.797 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.808 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.810 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.811 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.811 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.812 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.813 2 DEBUG nova.virt.libvirt.driver [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.870 2 DEBUG nova.compute.manager [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 295 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 7.3 MiB/s wr, 212 op/s
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.935 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325566288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.984 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:33 compute-0 nova_compute[259550]: 2025-10-07 14:20:33.989 2 DEBUG nova.compute.provider_tree [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.004 2 DEBUG nova.scheduler.client.report [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3325566288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.028 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.028 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.031 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.031 2 DEBUG nova.objects.instance [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.093 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.093 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.113 2 DEBUG oslo_concurrency.lockutils [None req-164ae98d-9e1d-4d38-b2c9-bce95c1d83d8 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.115 2 INFO nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.145 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.245 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.247 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.249 2 INFO nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Creating image(s)
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.283 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.307 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.331 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.334 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.371 2 DEBUG nova.policy [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76c41b2458aa4c37a8bdf5b4970f70e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4a4526130e0488b963baba67ee5d8db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.376 2 DEBUG nova.network.neutron [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Updating instance_info_cache with network_info: [{"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.385 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.385 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.397 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Releasing lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.397 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Instance network_info: |[{"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.398 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.398 2 DEBUG nova.network.neutron [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Refreshing network info cache for port a9982148-39f9-4c14-b0b3-24209c8ba700 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.401 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Start _get_guest_xml network_info=[{"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.403 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.409 2 WARNING nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.413 2 DEBUG nova.virt.libvirt.host [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.414 2 DEBUG nova.virt.libvirt.host [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.417 2 DEBUG nova.virt.libvirt.host [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.417 2 DEBUG nova.virt.libvirt.host [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.417 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.418 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.418 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.419 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.419 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.419 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.419 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.420 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.420 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.420 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.421 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.421 2 DEBUG nova.virt.hardware [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.425 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.460 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.461 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.461 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.462 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.485 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.488 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 520ba82e-9633-4722-aa98-526012a7e0fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.541 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.542 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.549 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.550 2 INFO nova.compute.claims [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.754 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.813 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 520ba82e-9633-4722-aa98-526012a7e0fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219833095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.866 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.901 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.908 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.948 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] resizing rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:34 compute-0 nova_compute[259550]: 2025-10-07 14:20:34.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:20:35 compute-0 ceph-mon[74295]: pgmap v1679: 305 pgs: 305 active+clean; 295 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 7.3 MiB/s wr, 212 op/s
Oct 07 14:20:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4219833095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:35 compute-0 podman[340796]: 2025-10-07 14:20:35.079507768 +0000 UTC m=+0.068118050 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.081 2 DEBUG nova.objects.instance [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'migration_context' on Instance uuid 520ba82e-9633-4722-aa98-526012a7e0fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:35 compute-0 podman[340797]: 2025-10-07 14:20:35.08946405 +0000 UTC m=+0.077392635 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.097 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.098 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Ensure instance console log exists: /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.098 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.099 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.099 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:20:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7760 writes, 35K keys, 7760 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7760 writes, 7760 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1553 writes, 7283 keys, 1553 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
                                           Interval WAL: 1553 writes, 1553 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.3      0.54              0.12        21    0.026       0      0       0.0       0.0
                                             L6      1/0    9.40 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    138.3    113.5      1.34              0.45        20    0.067    101K    11K       0.0       0.0
                                            Sum      1/0    9.40 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6     98.3    103.3      1.88              0.58        41    0.046    101K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.7    101.8    103.8      0.51              0.16        10    0.051     31K   3112       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    138.3    113.5      1.34              0.45        20    0.067    101K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.7      0.54              0.12        20    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.042, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 21.99 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000153 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1449,21.19 MB,6.97097%) FilterBlock(42,292.30 KB,0.0938968%) IndexBlock(42,521.70 KB,0.167591%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 14:20:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613459954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.196 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.203 2 DEBUG nova.compute.provider_tree [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.219 2 DEBUG nova.scheduler.client.report [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.239 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.240 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.294 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.295 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.321 2 INFO nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.343 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:20:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797352516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.384 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.386 2 DEBUG nova.virt.libvirt.vif [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-127650468',display_name='tempest-ServerAddressesNegativeTestJSON-server-127650468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-127650468',id=79,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d0f2a5220d741d6b9adf50a4787d2d5',ramdisk_id='',reservation_id='r-0avbj2h1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-928804141',owner_user_name=
'tempest-ServerAddressesNegativeTestJSON-928804141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:29Z,user_data=None,user_id='5f01a69322de4fe7b1825ca0af51e47a',uuid=e4daf401-270a-4e5d-9849-ba90ed06e97c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.386 2 DEBUG nova.network.os_vif_util [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converting VIF {"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.387 2 DEBUG nova.network.os_vif_util [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.389 2 DEBUG nova.objects.instance [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4daf401-270a-4e5d-9849-ba90ed06e97c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.417 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <uuid>e4daf401-270a-4e5d-9849-ba90ed06e97c</uuid>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <name>instance-0000004f</name>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-127650468</nova:name>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:34</nova:creationTime>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:user uuid="5f01a69322de4fe7b1825ca0af51e47a">tempest-ServerAddressesNegativeTestJSON-928804141-project-member</nova:user>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:project uuid="0d0f2a5220d741d6b9adf50a4787d2d5">tempest-ServerAddressesNegativeTestJSON-928804141</nova:project>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <nova:port uuid="a9982148-39f9-4c14-b0b3-24209c8ba700">
Oct 07 14:20:35 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="serial">e4daf401-270a-4e5d-9849-ba90ed06e97c</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="uuid">e4daf401-270a-4e5d-9849-ba90ed06e97c</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e4daf401-270a-4e5d-9849-ba90ed06e97c_disk">
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config">
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:35 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e9:ad:70"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <target dev="tapa9982148-39"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/console.log" append="off"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:35 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:35 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:35 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:35 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:35 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.423 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Preparing to wait for external event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.423 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.424 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.424 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.425 2 DEBUG nova.virt.libvirt.vif [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-127650468',display_name='tempest-ServerAddressesNegativeTestJSON-server-127650468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-127650468',id=79,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d0f2a5220d741d6b9adf50a4787d2d5',ramdisk_id='',reservation_id='r-0avbj2h1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-928804141',owner_
user_name='tempest-ServerAddressesNegativeTestJSON-928804141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:29Z,user_data=None,user_id='5f01a69322de4fe7b1825ca0af51e47a',uuid=e4daf401-270a-4e5d-9849-ba90ed06e97c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.425 2 DEBUG nova.network.os_vif_util [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converting VIF {"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.426 2 DEBUG nova.network.os_vif_util [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.426 2 DEBUG os_vif [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9982148-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9982148-39, col_values=(('external_ids', {'iface-id': 'a9982148-39f9-4c14-b0b3-24209c8ba700', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:ad:70', 'vm-uuid': 'e4daf401-270a-4e5d-9849-ba90ed06e97c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:35 compute-0 NetworkManager[44949]: <info>  [1759846835.4344] manager: (tapa9982148-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.438 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.439 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.440 2 INFO nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Creating image(s)
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.460 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.486 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.511 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.516 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.549 2 INFO os_vif [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39')
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.585 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.586 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.587 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.587 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.608 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.623 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.675 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.676 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.676 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] No VIF found with MAC fa:16:3e:e9:ad:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.677 2 INFO nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Using config drive
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.702 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 313 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.2 MiB/s wr, 203 op/s
Oct 07 14:20:35 compute-0 nova_compute[259550]: 2025-10-07 14:20:35.950 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1613459954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/797352516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:36 compute-0 ceph-mon[74295]: pgmap v1680: 305 pgs: 305 active+clean; 313 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.2 MiB/s wr, 203 op/s
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.053 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] resizing rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.147 2 DEBUG nova.objects.instance [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'migration_context' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.160 2 DEBUG nova.policy [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76c41b2458aa4c37a8bdf5b4970f70e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4a4526130e0488b963baba67ee5d8db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.164 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.165 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Ensure instance console log exists: /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.166 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.166 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.167 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.299 2 INFO nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Creating config drive at /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.305 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr63_wkgd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.338 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Successfully created port: 741587b4-ed2b-4895-9352-480eb38c0c50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.447 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr63_wkgd" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.481 2 DEBUG nova.storage.rbd_utils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] rbd image e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.487 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.649 2 DEBUG oslo_concurrency.processutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config e4daf401-270a-4e5d-9849-ba90ed06e97c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.651 2 INFO nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Deleting local config drive /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c/disk.config because it was imported into RBD.
Oct 07 14:20:36 compute-0 kernel: tapa9982148-39: entered promiscuous mode
Oct 07 14:20:36 compute-0 NetworkManager[44949]: <info>  [1759846836.7052] manager: (tapa9982148-39): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Oct 07 14:20:36 compute-0 ovn_controller[151684]: 2025-10-07T14:20:36Z|00790|binding|INFO|Claiming lport a9982148-39f9-4c14-b0b3-24209c8ba700 for this chassis.
Oct 07 14:20:36 compute-0 ovn_controller[151684]: 2025-10-07T14:20:36Z|00791|binding|INFO|a9982148-39f9-4c14-b0b3-24209c8ba700: Claiming fa:16:3e:e9:ad:70 10.100.0.13
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.715 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ad:70 10.100.0.13'], port_security=['fa:16:3e:e9:ad:70 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4daf401-270a-4e5d-9849-ba90ed06e97c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d0f2a5220d741d6b9adf50a4787d2d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07b51984-ba91-42b9-a431-00fe10539ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62de4f09-63e8-4fa8-87ba-f68cc8fd0e47, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a9982148-39f9-4c14-b0b3-24209c8ba700) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.716 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a9982148-39f9-4c14-b0b3-24209c8ba700 in datapath 3f0a06f3-96b8-4a72-8818-6dfe3b57b611 bound to our chassis
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f0a06f3-96b8-4a72-8818-6dfe3b57b611
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[543775df-d26d-4b45-bc72-470522b19cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.731 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f0a06f3-91 in ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.733 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f0a06f3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b32feb-ff76-4e00-b0e3-a4cf108471ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4b2918-03f3-4281-bc88-fe1272258dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_controller[151684]: 2025-10-07T14:20:36Z|00792|binding|INFO|Setting lport a9982148-39f9-4c14-b0b3-24209c8ba700 ovn-installed in OVS
Oct 07 14:20:36 compute-0 ovn_controller[151684]: 2025-10-07T14:20:36Z|00793|binding|INFO|Setting lport a9982148-39f9-4c14-b0b3-24209c8ba700 up in Southbound
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.751 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0ca1eb-0a36-4800-b2c6-daa4ee30c59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.756 2 DEBUG nova.compute.manager [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:36 compute-0 systemd-udevd[341114]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:36 compute-0 systemd-machined[214580]: New machine qemu-97-instance-0000004f.
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9569d0-279e-4680-b471-8d4a0b7e7d2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 NetworkManager[44949]: <info>  [1759846836.7732] device (tapa9982148-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:36 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-0000004f.
Oct 07 14:20:36 compute-0 NetworkManager[44949]: <info>  [1759846836.7745] device (tapa9982148-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.807 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dcef99dc-e2b8-4c06-b162-f81c0d9fa6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.814 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cafac5d1-54fa-47d5-ade0-3838a8fc0848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 NetworkManager[44949]: <info>  [1759846836.8160] manager: (tap3f0a06f3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.813 2 INFO nova.compute.manager [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] instance snapshotting
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.818 2 DEBUG nova.objects.instance [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'flavor' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[48aee1e9-dac5-4224-a25f-abb3f3fc3cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.853 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c51a923a-2093-4a3b-a6d8-b01e22754ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 NetworkManager[44949]: <info>  [1759846836.8760] device (tap3f0a06f3-90): carrier: link connected
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.881 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f9713096-0ac5-40bb-95c6-3b8d2695729e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01114692-4770-4415-a2eb-7030790ab7c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f0a06f3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:35:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742044, 'reachable_time': 36511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341146, 'error': None, 'target': 'ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9259462-f7c7-47c4-84f0-28939a3c7fbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:352a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742044, 'tstamp': 742044}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341147, 'error': None, 'target': 'ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.928 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[941c88c2-a176-4870-b203-3a2f0d3ae987]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f0a06f3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:35:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742044, 'reachable_time': 36511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341148, 'error': None, 'target': 'ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:36.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b80ed59a-bd99-4cef-a9b6-af7fa86cf9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:36 compute-0 nova_compute[259550]: 2025-10-07 14:20:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.019 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d188607e-ef44-497b-83c3-e88c75d4ef22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.020 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f0a06f3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f0a06f3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:37 compute-0 kernel: tap3f0a06f3-90: entered promiscuous mode
Oct 07 14:20:37 compute-0 NetworkManager[44949]: <info>  [1759846837.0252] manager: (tap3f0a06f3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.026 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f0a06f3-90, col_values=(('external_ids', {'iface-id': '408c850f-88cf-449f-b1e2-f4569243ff21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.030 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f0a06f3-96b8-4a72-8818-6dfe3b57b611.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f0a06f3-96b8-4a72-8818-6dfe3b57b611.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:20:37 compute-0 ovn_controller[151684]: 2025-10-07T14:20:37Z|00794|binding|INFO|Releasing lport 408c850f-88cf-449f-b1e2-f4569243ff21 from this chassis (sb_readonly=0)
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.033 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6128818-431f-4973-9f40-eabedf038279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.034 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-3f0a06f3-96b8-4a72-8818-6dfe3b57b611
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/3f0a06f3-96b8-4a72-8818-6dfe3b57b611.pid.haproxy
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 3f0a06f3-96b8-4a72-8818-6dfe3b57b611
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.035 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'env', 'PROCESS_TAG=haproxy-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f0a06f3-96b8-4a72-8818-6dfe3b57b611.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.078 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.078 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.078 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.079 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.192 2 DEBUG nova.compute.manager [req-2851ee34-86b3-439f-9b2b-cb5fe43c8c09 req-d1f81673-44f6-44d2-9dd9-9a0420e475b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.193 2 DEBUG oslo_concurrency.lockutils [req-2851ee34-86b3-439f-9b2b-cb5fe43c8c09 req-d1f81673-44f6-44d2-9dd9-9a0420e475b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.193 2 DEBUG oslo_concurrency.lockutils [req-2851ee34-86b3-439f-9b2b-cb5fe43c8c09 req-d1f81673-44f6-44d2-9dd9-9a0420e475b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.193 2 DEBUG oslo_concurrency.lockutils [req-2851ee34-86b3-439f-9b2b-cb5fe43c8c09 req-d1f81673-44f6-44d2-9dd9-9a0420e475b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.194 2 DEBUG nova.compute.manager [req-2851ee34-86b3-439f-9b2b-cb5fe43c8c09 req-d1f81673-44f6-44d2-9dd9-9a0420e475b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Processing event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.334 2 INFO nova.virt.libvirt.driver [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Beginning live snapshot process
Oct 07 14:20:37 compute-0 podman[341200]: 2025-10-07 14:20:37.480019382 +0000 UTC m=+0.064972078 container create 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.500 2 DEBUG nova.network.neutron [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Updated VIF entry in instance network info cache for port a9982148-39f9-4c14-b0b3-24209c8ba700. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.500 2 DEBUG nova.network.neutron [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Updating instance_info_cache with network_info: [{"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.508 2 DEBUG nova.virt.libvirt.imagebackend [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:20:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095816140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:37 compute-0 systemd[1]: Started libpod-conmon-634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba.scope.
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.526 2 DEBUG oslo_concurrency.lockutils [req-3dbeafb8-1a68-4912-803a-9d4809029257 req-e0d10567-2f93-4135-832e-e11485ed5cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e4daf401-270a-4e5d-9849-ba90ed06e97c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.535 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ba51844e46552a3ee7bdb6478f0d5afcf810a7eb9e3bfc92d3ad9b52b73197/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:37 compute-0 podman[341200]: 2025-10-07 14:20:37.45608866 +0000 UTC m=+0.041041376 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:20:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1095816140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:37 compute-0 podman[341200]: 2025-10-07 14:20:37.591439575 +0000 UTC m=+0.176392291 container init 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:20:37 compute-0 podman[341200]: 2025-10-07 14:20:37.600026633 +0000 UTC m=+0.184979329 container start 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:20:37 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [NOTICE]   (341296) : New worker (341298) forked
Oct 07 14:20:37 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [NOTICE]   (341296) : Loading success.
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.626 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.627 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.647 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.647 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.653 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.653 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.655 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Successfully created port: 90cfcf08-ae06-44fb-8402-0d70909f2e5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.660 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.661 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.758 2 DEBUG nova.storage.rbd_utils [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(43558e93bc01484294d21b56befc7c87) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.825 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.826 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.826 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.827 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.827 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.828 2 INFO nova.compute.manager [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Terminating instance
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.829 2 DEBUG nova.compute.manager [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.855 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Successfully updated port: 741587b4-ed2b-4895-9352-480eb38c0c50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:20:37 compute-0 kernel: tap58bc8432-92 (unregistering): left promiscuous mode
Oct 07 14:20:37 compute-0 NetworkManager[44949]: <info>  [1759846837.8715] device (tap58bc8432-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.874 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.875 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquired lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.875 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:37 compute-0 ovn_controller[151684]: 2025-10-07T14:20:37Z|00795|binding|INFO|Releasing lport 58bc8432-92a0-493d-a022-134fa0354f89 from this chassis (sb_readonly=0)
Oct 07 14:20:37 compute-0 ovn_controller[151684]: 2025-10-07T14:20:37Z|00796|binding|INFO|Setting lport 58bc8432-92a0-493d-a022-134fa0354f89 down in Southbound
Oct 07 14:20:37 compute-0 ovn_controller[151684]: 2025-10-07T14:20:37Z|00797|binding|INFO|Removing iface tap58bc8432-92 ovn-installed in OVS
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.893 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:1d:fb 10.100.0.10'], port_security=['fa:16:3e:8c:1d:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af953de2-8586-45f6-8f4b-a09dec17ef5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b57b9fa1-4ab8-4b42-afe4-8fb0bccad4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=58bc8432-92a0-493d-a022-134fa0354f89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.894 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 58bc8432-92a0-493d-a022-134fa0354f89 in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.896 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 313 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.918 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1556c857-b22a-4a4d-abe6-4dd8a481cbe8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:37 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 07 14:20:37 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Consumed 4.887s CPU time.
Oct 07 14:20:37 compute-0 systemd-machined[214580]: Machine qemu-96-instance-0000004d terminated.
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.954 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb7411a-0a85-41f2-aa89-1c527b4e2354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.958 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef0c697-f4bd-4753-921f-52fc830cea0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.978 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.979 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3390MB free_disk=59.85050964355469GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.979 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:37 compute-0 nova_compute[259550]: 2025-10-07 14:20:37.980 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:37.983 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0586ac6d-e400-44c8-b867-22b225c423a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ea14dc2f-727f-4c4b-9030-e8935ac76e3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734372, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341334, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c07cb0-5a07-462f-af01-cb1a8f44778b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734386, 'tstamp': 734386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341335, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc21c541a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734390, 'tstamp': 734390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341335, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.018 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:38.026 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:38 compute-0 NetworkManager[44949]: <info>  [1759846838.0500] manager: (tap58bc8432-92): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.053 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.065 2 INFO nova.virt.libvirt.driver [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Instance destroyed successfully.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.065 2 DEBUG nova.objects.instance [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid af953de2-8586-45f6-8f4b-a09dec17ef5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.075 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1d580bbb-a6fd-442c-8524-409ba5c344d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.075 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance af953de2-8586-45f6-8f4b-a09dec17ef5f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.075 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d932a7ab-839c-48b9-804f-90cc8634e93b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance e4daf401-270a-4e5d-9849-ba90ed06e97c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 520ba82e-9633-4722-aa98-526012a7e0fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance db51ce2e-5a2e-4329-a629-6f5fcee5c673 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.081 2 DEBUG nova.virt.libvirt.vif [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-520514475',display_name='tempest-ServerActionsTestJSON-server-254513888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-520514475',id=77,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-agtdcm1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:34Z,user_data=None,user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=af953de2-8586-45f6-8f4b-a09dec17ef5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.081 2 DEBUG nova.network.os_vif_util [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "58bc8432-92a0-493d-a022-134fa0354f89", "address": "fa:16:3e:8c:1d:fb", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58bc8432-92", "ovs_interfaceid": "58bc8432-92a0-493d-a022-134fa0354f89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.084 2 DEBUG nova.network.os_vif_util [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.084 2 DEBUG os_vif [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58bc8432-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.092 2 INFO os_vif [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:1d:fb,bridge_name='br-int',has_traffic_filtering=True,id=58bc8432-92a0-493d-a022-134fa0354f89,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58bc8432-92')
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.139 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846838.1387155, e4daf401-270a-4e5d-9849-ba90ed06e97c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.141 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] VM Started (Lifecycle Event)
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.144 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.148 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.151 2 INFO nova.virt.libvirt.driver [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Instance spawned successfully.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.151 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.169 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.181 2 DEBUG nova.compute.manager [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-changed-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.181 2 DEBUG nova.compute.manager [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Refreshing instance network info cache due to event network-changed-741587b4-ed2b-4895-9352-480eb38c0c50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.181 2 DEBUG oslo_concurrency.lockutils [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.183 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.185 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.186 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.186 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.187 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.187 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.188 2 DEBUG nova.virt.libvirt.driver [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.213 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.254 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.255 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846838.140292, e4daf401-270a-4e5d-9849-ba90ed06e97c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.255 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] VM Paused (Lifecycle Event)
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.260 2 INFO nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Took 8.69 seconds to spawn the instance on the hypervisor.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.261 2 DEBUG nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.301 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.304 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846838.1471965, e4daf401-270a-4e5d-9849-ba90ed06e97c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.304 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] VM Resumed (Lifecycle Event)
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.329 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.332 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.341 2 INFO nova.compute.manager [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Took 9.63 seconds to build instance.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.356 2 DEBUG oslo_concurrency.lockutils [None req-fbb5c77a-4e50-4ab1-8818-ccd1fc219309 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.532 2 INFO nova.virt.libvirt.driver [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deleting instance files /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f_del
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.533 2 INFO nova.virt.libvirt.driver [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deletion of /var/lib/nova/instances/af953de2-8586-45f6-8f4b-a09dec17ef5f_del complete
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.581 2 INFO nova.compute.manager [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.582 2 DEBUG oslo.service.loopingcall [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.582 2 DEBUG nova.compute.manager [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.582 2 DEBUG nova.network.neutron [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:20:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 07 14:20:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 07 14:20:38 compute-0 ceph-mon[74295]: pgmap v1681: 305 pgs: 305 active+clean; 313 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Oct 07 14:20:38 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.658 2 DEBUG nova.storage.rbd_utils [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk@43558e93bc01484294d21b56befc7c87 to images/c247e1f6-c84a-44fb-9941-cc1c3eb07362 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:20:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4030684326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.692 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.696 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.716 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.738 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.739 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:38 compute-0 nova_compute[259550]: 2025-10-07 14:20:38.762 2 DEBUG nova.storage.rbd_utils [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/c247e1f6-c84a-44fb-9941-cc1c3eb07362 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.041 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Successfully updated port: 90cfcf08-ae06-44fb-8402-0d70909f2e5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.070 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.070 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquired lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.070 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.169 2 DEBUG nova.storage.rbd_utils [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(43558e93bc01484294d21b56befc7c87) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.192 2 DEBUG nova.network.neutron [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Updating instance_info_cache with network_info: [{"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.230 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Releasing lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.231 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Instance network_info: |[{"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.231 2 DEBUG oslo_concurrency.lockutils [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.231 2 DEBUG nova.network.neutron [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Refreshing network info cache for port 741587b4-ed2b-4895-9352-480eb38c0c50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.234 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Start _get_guest_xml network_info=[{"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.235 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.241 2 WARNING nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.245 2 DEBUG nova.virt.libvirt.host [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.245 2 DEBUG nova.virt.libvirt.host [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.248 2 DEBUG nova.virt.libvirt.host [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.248 2 DEBUG nova.virt.libvirt.host [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.249 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.249 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.249 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.250 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.250 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.250 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.250 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.251 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.251 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.251 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.251 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.252 2 DEBUG nova.virt.hardware [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.254 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.302 2 DEBUG nova.compute.manager [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.303 2 DEBUG oslo_concurrency.lockutils [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.304 2 DEBUG oslo_concurrency.lockutils [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.304 2 DEBUG oslo_concurrency.lockutils [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.304 2 DEBUG nova.compute.manager [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] No waiting events found dispatching network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.304 2 WARNING nova.compute.manager [req-da6052aa-a0ad-4ed7-90d2-bd52200120e5 req-5efcff81-7e36-483a-9b1b-2d1e7dbf0207 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received unexpected event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 for instance with vm_state active and task_state None.
Oct 07 14:20:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 07 14:20:39 compute-0 ceph-mon[74295]: osdmap e214: 3 total, 3 up, 3 in
Oct 07 14:20:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4030684326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 07 14:20:39 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.646 2 DEBUG nova.storage.rbd_utils [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(c247e1f6-c84a-44fb-9941-cc1c3eb07362) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559741827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.753 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.776 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:39 compute-0 nova_compute[259550]: 2025-10-07 14:20:39.781 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 341 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 257 op/s
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.153 2 DEBUG nova.network.neutron [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.174 2 INFO nova.compute.manager [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Took 1.59 seconds to deallocate network for instance.
Oct 07 14:20:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807781948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.228 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.229 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.241 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.243 2 DEBUG nova.virt.libvirt.vif [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-750095100',display_name='tempest-ServerRescueNegativeTestJSON-server-750095100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-750095100',id=80,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-m87humlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:34Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=520ba82e-9633-4722-aa98-526012a7e0fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.244 2 DEBUG nova.network.os_vif_util [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.246 2 DEBUG nova.network.os_vif_util [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.248 2 DEBUG nova.objects.instance [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'pci_devices' on Instance uuid 520ba82e-9633-4722-aa98-526012a7e0fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.268 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <uuid>520ba82e-9633-4722-aa98-526012a7e0fb</uuid>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <name>instance-00000050</name>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-750095100</nova:name>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:39</nova:creationTime>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:user uuid="76c41b2458aa4c37a8bdf5b4970f70e7">tempest-ServerRescueNegativeTestJSON-2134412460-project-member</nova:user>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:project uuid="b4a4526130e0488b963baba67ee5d8db">tempest-ServerRescueNegativeTestJSON-2134412460</nova:project>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <nova:port uuid="741587b4-ed2b-4895-9352-480eb38c0c50">
Oct 07 14:20:40 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="serial">520ba82e-9633-4722-aa98-526012a7e0fb</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="uuid">520ba82e-9633-4722-aa98-526012a7e0fb</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/520ba82e-9633-4722-aa98-526012a7e0fb_disk">
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/520ba82e-9633-4722-aa98-526012a7e0fb_disk.config">
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:c0:04:6e"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <target dev="tap741587b4-ed"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/console.log" append="off"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:40 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:40 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:40 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:40 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:40 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.269 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Preparing to wait for external event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.269 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.270 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.270 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.272 2 DEBUG nova.virt.libvirt.vif [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-750095100',display_name='tempest-ServerRescueNegativeTestJSON-server-750095100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-750095100',id=80,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-m87humlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:34Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=520ba82e-9633-4722-aa98-526012a7e0fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.272 2 DEBUG nova.network.os_vif_util [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.273 2 DEBUG nova.network.os_vif_util [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.274 2 DEBUG os_vif [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap741587b4-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap741587b4-ed, col_values=(('external_ids', {'iface-id': '741587b4-ed2b-4895-9352-480eb38c0c50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:04:6e', 'vm-uuid': '520ba82e-9633-4722-aa98-526012a7e0fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:40 compute-0 NetworkManager[44949]: <info>  [1759846840.2897] manager: (tap741587b4-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.296 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.297 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.297 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.297 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.297 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No waiting events found dispatching network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.298 2 WARNING nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-unplugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state deleted and task_state None.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.298 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.298 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.299 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.299 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.299 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] No waiting events found dispatching network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.299 2 WARNING nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received unexpected event network-vif-plugged-58bc8432-92a0-493d-a022-134fa0354f89 for instance with vm_state deleted and task_state None.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.300 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-changed-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.300 2 DEBUG nova.compute.manager [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Refreshing instance network info cache due to event network-changed-90cfcf08-ae06-44fb-8402-0d70909f2e5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.300 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.305 2 INFO os_vif [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed')
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.346 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.347 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.347 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No VIF found with MAC fa:16:3e:c0:04:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.347 2 INFO nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Using config drive
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.365 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.408 2 DEBUG oslo_concurrency.processutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.568 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.568 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.569 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.569 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.569 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.571 2 INFO nova.compute.manager [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Terminating instance
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.572 2 DEBUG nova.compute.manager [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:20:40 compute-0 kernel: tapa9982148-39 (unregistering): left promiscuous mode
Oct 07 14:20:40 compute-0 NetworkManager[44949]: <info>  [1759846840.6121] device (tapa9982148-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 07 14:20:40 compute-0 ceph-mon[74295]: osdmap e215: 3 total, 3 up, 3 in
Oct 07 14:20:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2559741827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:40 compute-0 ceph-mon[74295]: pgmap v1684: 305 pgs: 305 active+clean; 341 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 257 op/s
Oct 07 14:20:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/807781948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 07 14:20:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 07 14:20:40 compute-0 ovn_controller[151684]: 2025-10-07T14:20:40Z|00798|binding|INFO|Releasing lport a9982148-39f9-4c14-b0b3-24209c8ba700 from this chassis (sb_readonly=0)
Oct 07 14:20:40 compute-0 ovn_controller[151684]: 2025-10-07T14:20:40Z|00799|binding|INFO|Setting lport a9982148-39f9-4c14-b0b3-24209c8ba700 down in Southbound
Oct 07 14:20:40 compute-0 ovn_controller[151684]: 2025-10-07T14:20:40Z|00800|binding|INFO|Removing iface tapa9982148-39 ovn-installed in OVS
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.646 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ad:70 10.100.0.13'], port_security=['fa:16:3e:e9:ad:70 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4daf401-270a-4e5d-9849-ba90ed06e97c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d0f2a5220d741d6b9adf50a4787d2d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07b51984-ba91-42b9-a431-00fe10539ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62de4f09-63e8-4fa8-87ba-f68cc8fd0e47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a9982148-39f9-4c14-b0b3-24209c8ba700) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.648 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a9982148-39f9-4c14-b0b3-24209c8ba700 in datapath 3f0a06f3-96b8-4a72-8818-6dfe3b57b611 unbound from our chassis
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.649 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f0a06f3-96b8-4a72-8818-6dfe3b57b611, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39ff2537-025c-4c8b-990e-b6fa91292c6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611 namespace which is not needed anymore
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.661 2 DEBUG nova.network.neutron [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Updating instance_info_cache with network_info: [{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 07 14:20:40 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004f.scope: Consumed 3.702s CPU time.
Oct 07 14:20:40 compute-0 systemd-machined[214580]: Machine qemu-97-instance-0000004f terminated.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.695 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Releasing lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.695 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance network_info: |[{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.696 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.697 2 DEBUG nova.network.neutron [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Refreshing network info cache for port 90cfcf08-ae06-44fb-8402-0d70909f2e5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.700 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start _get_guest_xml network_info=[{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.708 2 WARNING nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.716 2 DEBUG nova.virt.libvirt.host [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.717 2 DEBUG nova.virt.libvirt.host [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.726 2 DEBUG nova.virt.libvirt.host [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.727 2 DEBUG nova.virt.libvirt.host [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.727 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.727 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.728 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.728 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.728 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.728 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.728 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.729 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.729 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.729 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.729 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.729 2 DEBUG nova.virt.hardware [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.732 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [NOTICE]   (341296) : haproxy version is 2.8.14-c23fe91
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [NOTICE]   (341296) : path to executable is /usr/sbin/haproxy
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [WARNING]  (341296) : Exiting Master process...
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [WARNING]  (341296) : Exiting Master process...
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [ALERT]    (341296) : Current worker (341298) exited with code 143 (Terminated)
Oct 07 14:20:40 compute-0 neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611[341274]: [WARNING]  (341296) : All workers exited. Exiting... (0)
Oct 07 14:20:40 compute-0 systemd[1]: libpod-634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba.scope: Deactivated successfully.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.787 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.787 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:40 compute-0 podman[341605]: 2025-10-07 14:20:40.792146897 +0000 UTC m=+0.049803486 container died 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.804 2 INFO nova.virt.libvirt.driver [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Instance destroyed successfully.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.805 2 DEBUG nova.objects.instance [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lazy-loading 'resources' on Instance uuid e4daf401-270a-4e5d-9849-ba90ed06e97c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba-userdata-shm.mount: Deactivated successfully.
Oct 07 14:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ba51844e46552a3ee7bdb6478f0d5afcf810a7eb9e3bfc92d3ad9b52b73197-merged.mount: Deactivated successfully.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.836 2 DEBUG nova.virt.libvirt.vif [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-127650468',display_name='tempest-ServerAddressesNegativeTestJSON-server-127650468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-127650468',id=79,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d0f2a5220d741d6b9adf50a4787d2d5',ramdisk_id='',reservation_id='r-0avbj2h1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-928804141',owner_user_name='tempest-ServerAddressesNegativeTestJSON-928804141-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:38Z,user_data=None,user_id='5f01a69322de4fe7b1825ca0af51e47a',uuid=e4daf401-270a-4e5d-9849-ba90ed06e97c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.837 2 DEBUG nova.network.os_vif_util [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converting VIF {"id": "a9982148-39f9-4c14-b0b3-24209c8ba700", "address": "fa:16:3e:e9:ad:70", "network": {"id": "3f0a06f3-96b8-4a72-8818-6dfe3b57b611", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-130425917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d0f2a5220d741d6b9adf50a4787d2d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9982148-39", "ovs_interfaceid": "a9982148-39f9-4c14-b0b3-24209c8ba700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.838 2 DEBUG nova.network.os_vif_util [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.838 2 DEBUG os_vif [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9982148-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:40 compute-0 podman[341605]: 2025-10-07 14:20:40.848893267 +0000 UTC m=+0.106549846 container cleanup 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 systemd[1]: libpod-conmon-634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba.scope: Deactivated successfully.
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.861 2 INFO nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Creating config drive at /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.867 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxykr8kd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.902 2 INFO os_vif [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ad:70,bridge_name='br-int',has_traffic_filtering=True,id=a9982148-39f9-4c14-b0b3-24209c8ba700,network=Network(3f0a06f3-96b8-4a72-8818-6dfe3b57b611),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9982148-39')
Oct 07 14:20:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189927897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:40 compute-0 podman[341650]: 2025-10-07 14:20:40.935637478 +0000 UTC m=+0.056363069 container remove 634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.942 2 DEBUG oslo_concurrency.processutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdda0b1-a417-42be-8049-a0187041804b]: (4, ('Tue Oct  7 02:20:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611 (634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba)\n634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba\nTue Oct  7 02:20:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611 (634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba)\n634d1b3d997e27dc9c0d558692f4dbeb32c6874b53fdde53faae7b8504309bba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1eb498-c830-4e10-a6ba-9481eef95829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.945 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f0a06f3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 kernel: tap3f0a06f3-90: left promiscuous mode
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.952 2 DEBUG nova.compute.provider_tree [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.971 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b86ddca7-4798-4ffc-ad1e-56bbece0b2c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:40 compute-0 nova_compute[259550]: 2025-10-07 14:20:40.978 2 DEBUG nova.scheduler.client.report [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:40.999 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65322e28-2abe-483b-839e-51a5d7e6a38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2a84f2ed-254d-4a1e-8bac-ed69c86171da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.007 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85daee31-4481-4ff8-8beb-308ad6bb2d00]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742036, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341710, 'error': None, 'target': 'ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.021 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxykr8kd" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f0a06f3\x2d96b8\x2d4a72\x2d8818\x2d6dfe3b57b611.mount: Deactivated successfully.
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.021 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f0a06f3-96b8-4a72-8818-6dfe3b57b611 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.022 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[01a234f8-7625-45cc-a99c-ef69beeadef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.051 2 DEBUG nova.storage.rbd_utils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image 520ba82e-9633-4722-aa98-526012a7e0fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.056 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config 520ba82e-9633-4722-aa98-526012a7e0fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.092 2 DEBUG nova.network.neutron [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Updated VIF entry in instance network info cache for port 741587b4-ed2b-4895-9352-480eb38c0c50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.093 2 DEBUG nova.network.neutron [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Updating instance_info_cache with network_info: [{"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.096 2 INFO nova.scheduler.client.report [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Deleted allocations for instance af953de2-8586-45f6-8f4b-a09dec17ef5f
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.125 2 DEBUG oslo_concurrency.lockutils [req-5099dc41-a2b9-49ec-afb2-7272de572a9b req-847d2faf-a3f4-4ab8-bb2c-c70f1bfa5e7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-520ba82e-9633-4722-aa98-526012a7e0fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.174 2 DEBUG oslo_concurrency.lockutils [None req-afd86bee-1b26-4b22-9aa8-685548b57d25 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "af953de2-8586-45f6-8f4b-a09dec17ef5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395760795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.244 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.265 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.268 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.302 2 DEBUG oslo_concurrency.processutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config 520ba82e-9633-4722-aa98-526012a7e0fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.308 2 INFO nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Deleting local config drive /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb/disk.config because it was imported into RBD.
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.340 2 INFO nova.virt.libvirt.driver [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Deleting instance files /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c_del
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.341 2 INFO nova.virt.libvirt.driver [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Deletion of /var/lib/nova/instances/e4daf401-270a-4e5d-9849-ba90ed06e97c_del complete
Oct 07 14:20:41 compute-0 kernel: tap741587b4-ed: entered promiscuous mode
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.3694] manager: (tap741587b4-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct 07 14:20:41 compute-0 systemd-udevd[341586]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:41 compute-0 ovn_controller[151684]: 2025-10-07T14:20:41Z|00801|binding|INFO|Claiming lport 741587b4-ed2b-4895-9352-480eb38c0c50 for this chassis.
Oct 07 14:20:41 compute-0 ovn_controller[151684]: 2025-10-07T14:20:41Z|00802|binding|INFO|741587b4-ed2b-4895-9352-480eb38c0c50: Claiming fa:16:3e:c0:04:6e 10.100.0.12
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.379 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:04:6e 10.100.0.12'], port_security=['fa:16:3e:c0:04:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '520ba82e-9633-4722-aa98-526012a7e0fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=741587b4-ed2b-4895-9352-480eb38c0c50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.380 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 741587b4-ed2b-4895-9352-480eb38c0c50 in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c bound to our chassis
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.382 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.3907] device (tap741587b4-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.3915] device (tap741587b4-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:41 compute-0 ovn_controller[151684]: 2025-10-07T14:20:41Z|00803|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 ovn-installed in OVS
Oct 07 14:20:41 compute-0 ovn_controller[151684]: 2025-10-07T14:20:41Z|00804|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 up in Southbound
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.396 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6687af3-3d3a-466f-820d-75f1d8fabdf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.397 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f0d357a-c1 in ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.399 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f0d357a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45e32c51-fe37-4a20-9067-d6dde5549a44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c39dc402-1452-480d-9583-7246fb9fe3f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.402 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Received event network-vif-deleted-58bc8432-92a0-493d-a022-134fa0354f89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.402 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-unplugged-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.403 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.403 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.403 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.403 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] No waiting events found dispatching network-vif-unplugged-a9982148-39f9-4c14-b0b3-24209c8ba700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.404 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-unplugged-a9982148-39f9-4c14-b0b3-24209c8ba700 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.404 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.404 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.404 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.405 2 DEBUG oslo_concurrency.lockutils [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.405 2 DEBUG nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] No waiting events found dispatching network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.405 2 WARNING nova.compute.manager [req-0fb157f8-5703-484a-85f0-2711c153d683 req-c577e1cf-50ce-47eb-8de4-13eaf2692258 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received unexpected event network-vif-plugged-a9982148-39f9-4c14-b0b3-24209c8ba700 for instance with vm_state active and task_state deleting.
Oct 07 14:20:41 compute-0 systemd-machined[214580]: New machine qemu-98-instance-00000050.
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.413 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5e519208-72ab-433f-bcf0-fcfceaf551ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000050.
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.420 2 INFO nova.compute.manager [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.421 2 DEBUG oslo.service.loopingcall [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.426 2 DEBUG nova.compute.manager [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.426 2 DEBUG nova.network.neutron [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.439 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4da54cd0-e340-4ccb-ab7c-d2a15d9ee6be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.469 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[90c9dd04-1863-414d-85a8-1b730b703642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52039355-e7d5-4b0d-b076-34d017ebd5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.4758] manager: (tap6f0d357a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.520 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[33bedd8a-5a5c-408e-be80-ec4c9e27c954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.523 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4aa748-89da-4d4a-89a5-e38096840ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.5449] device (tap6f0d357a-c0): carrier: link connected
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.550 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d40aa0c2-7cd3-4aae-9476-87c3dd265a87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.568 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b71fcc0f-f77b-4589-bd8b-080f51d3ada6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341833, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e24830-711c-47d3-ba43-aef8e60acd67]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:248a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742510, 'tstamp': 742510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341834, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90d7c425-0687-49d3-9a3d-30e05c2803cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341835, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ceph-mon[74295]: osdmap e216: 3 total, 3 up, 3 in
Oct 07 14:20:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3189927897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/395760795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.634 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9897f1ab-39b3-4e75-bd97-2407da895d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.716 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e22114e6-3429-408a-a548-753c1a30ac34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.719 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0d357a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.7227] manager: (tap6f0d357a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Oct 07 14:20:41 compute-0 kernel: tap6f0d357a-c0: entered promiscuous mode
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.728 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f0d357a-c0, col_values=(('external_ids', {'iface-id': '5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_controller[151684]: 2025-10-07T14:20:41Z|00805|binding|INFO|Releasing lport 5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc from this chassis (sb_readonly=0)
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.754 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f0d357a-c6e3-4f85-ade7-a04fa945b92c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f0d357a-c6e3-4f85-ade7-a04fa945b92c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cef0f881-1e2a-4773-b4a9-dc45ce28b602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.756 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/6f0d357a-c6e3-4f85-ade7-a04fa945b92c.pid.haproxy
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:20:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:41.758 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'env', 'PROCESS_TAG=haproxy-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f0d357a-c6e3-4f85-ade7-a04fa945b92c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:20:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083641559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.801 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.802 2 DEBUG nova.virt.libvirt.vif [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-520844186',display_name='tempest-ServerRescueNegativeTestJSON-server-520844186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-520844186',id=81,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-h377dr90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:35Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=db51ce2e-5a2e-4329-a629-6f5fcee5c673,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.803 2 DEBUG nova.network.os_vif_util [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.803 2 DEBUG nova.network.os_vif_util [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.804 2 DEBUG nova.objects.instance [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'pci_devices' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.823 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <uuid>db51ce2e-5a2e-4329-a629-6f5fcee5c673</uuid>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <name>instance-00000051</name>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-520844186</nova:name>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:40</nova:creationTime>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:user uuid="76c41b2458aa4c37a8bdf5b4970f70e7">tempest-ServerRescueNegativeTestJSON-2134412460-project-member</nova:user>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:project uuid="b4a4526130e0488b963baba67ee5d8db">tempest-ServerRescueNegativeTestJSON-2134412460</nova:project>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <nova:port uuid="90cfcf08-ae06-44fb-8402-0d70909f2e5c">
Oct 07 14:20:41 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="serial">db51ce2e-5a2e-4329-a629-6f5fcee5c673</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="uuid">db51ce2e-5a2e-4329-a629-6f5fcee5c673</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk">
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config">
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:0e:49:a3"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <target dev="tap90cfcf08-ae"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/console.log" append="off"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:41 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:41 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:41 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:41 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:41 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.823 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Preparing to wait for external event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.823 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.824 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.824 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.825 2 DEBUG nova.virt.libvirt.vif [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:20:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-520844186',display_name='tempest-ServerRescueNegativeTestJSON-server-520844186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-520844186',id=81,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-h377dr90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:35Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=db51ce2e-5a2e-4329-a629-6f5fcee5c673,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.825 2 DEBUG nova.network.os_vif_util [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.826 2 DEBUG nova.network.os_vif_util [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.826 2 DEBUG os_vif [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90cfcf08-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90cfcf08-ae, col_values=(('external_ids', {'iface-id': '90cfcf08-ae06-44fb-8402-0d70909f2e5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:49:a3', 'vm-uuid': 'db51ce2e-5a2e-4329-a629-6f5fcee5c673'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 NetworkManager[44949]: <info>  [1759846841.8325] manager: (tap90cfcf08-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.837 2 INFO os_vif [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae')
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.892 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.892 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.893 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No VIF found with MAC fa:16:3e:0e:49:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.893 2 INFO nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Using config drive
Oct 07 14:20:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 360 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.6 MiB/s wr, 411 op/s
Oct 07 14:20:41 compute-0 nova_compute[259550]: 2025-10-07 14:20:41.913 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:42 compute-0 podman[341931]: 2025-10-07 14:20:42.138305879 +0000 UTC m=+0.047644719 container create a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:20:42 compute-0 systemd[1]: Started libpod-conmon-a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a.scope.
Oct 07 14:20:42 compute-0 podman[341931]: 2025-10-07 14:20:42.11111784 +0000 UTC m=+0.020456700 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:20:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4546a3a171e1c741dd9ac510627f2daee2c6bc3e01f38d2cc6f5b0c614382fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:42 compute-0 podman[341931]: 2025-10-07 14:20:42.229099068 +0000 UTC m=+0.138437918 container init a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:20:42 compute-0 podman[341931]: 2025-10-07 14:20:42.234001047 +0000 UTC m=+0.143339877 container start a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:20:42 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [NOTICE]   (341950) : New worker (341952) forked
Oct 07 14:20:42 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [NOTICE]   (341950) : Loading success.
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.309 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846842.3090384, 520ba82e-9633-4722-aa98-526012a7e0fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.310 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Started (Lifecycle Event)
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.319 2 INFO nova.virt.libvirt.driver [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Snapshot image upload complete
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.319 2 INFO nova.compute.manager [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 5.48 seconds to snapshot the instance on the hypervisor.
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.341 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.345 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846842.30915, 520ba82e-9633-4722-aa98-526012a7e0fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.345 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Paused (Lifecycle Event)
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.373 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.382 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.396 2 INFO nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Creating config drive at /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.400 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bawrouh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.434 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.540 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bawrouh" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.568 2 DEBUG nova.storage.rbd_utils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.571 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.608 2 DEBUG nova.compute.manager [None req-5817aab1-1f5c-42fe-8cc1-8815c4d1db38 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 07 14:20:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3083641559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:42 compute-0 ceph-mon[74295]: pgmap v1686: 305 pgs: 305 active+clean; 360 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.6 MiB/s wr, 411 op/s
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.754 2 DEBUG oslo_concurrency.processutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.755 2 INFO nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Deleting local config drive /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config because it was imported into RBD.
Oct 07 14:20:42 compute-0 kernel: tap90cfcf08-ae: entered promiscuous mode
Oct 07 14:20:42 compute-0 systemd-udevd[341815]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:42 compute-0 NetworkManager[44949]: <info>  [1759846842.7989] manager: (tap90cfcf08-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:42 compute-0 ovn_controller[151684]: 2025-10-07T14:20:42Z|00806|binding|INFO|Claiming lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c for this chassis.
Oct 07 14:20:42 compute-0 ovn_controller[151684]: 2025-10-07T14:20:42Z|00807|binding|INFO|90cfcf08-ae06-44fb-8402-0d70909f2e5c: Claiming fa:16:3e:0e:49:a3 10.100.0.7
Oct 07 14:20:42 compute-0 NetworkManager[44949]: <info>  [1759846842.8105] device (tap90cfcf08-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:42 compute-0 NetworkManager[44949]: <info>  [1759846842.8116] device (tap90cfcf08-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.812 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:49:a3 10.100.0.7'], port_security=['fa:16:3e:0e:49:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'db51ce2e-5a2e-4329-a629-6f5fcee5c673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=90cfcf08-ae06-44fb-8402-0d70909f2e5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.813 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfcf08-ae06-44fb-8402-0d70909f2e5c in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c bound to our chassis
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.815 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:20:42 compute-0 ovn_controller[151684]: 2025-10-07T14:20:42Z|00808|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c ovn-installed in OVS
Oct 07 14:20:42 compute-0 ovn_controller[151684]: 2025-10-07T14:20:42Z|00809|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c up in Southbound
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:42 compute-0 systemd-machined[214580]: New machine qemu-99-instance-00000051.
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.832 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3f9bc9-048e-4f30-929a-ede587eedd30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000051.
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.862 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[985223ff-0e31-4227-8506-d601ff0b40ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.865 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[49f95c82-7c87-4a31-ab2b-41b795cc3b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.896 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[73be8612-0f21-4e2b-b2cb-d8fad0f274d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.913 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1918fa2-cee0-4352-82ed-d9d1a5a791b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342024, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4d6551-319f-4319-8dc5-06cc4dac1caf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742523, 'tstamp': 742523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342026, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742527, 'tstamp': 742527}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342026, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.943 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.947 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0d357a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.948 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.948 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f0d357a-c0, col_values=(('external_ids', {'iface-id': '5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:42.948 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.952 2 DEBUG nova.network.neutron [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:42 compute-0 nova_compute[259550]: 2025-10-07 14:20:42.980 2 INFO nova.compute.manager [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Took 1.55 seconds to deallocate network for instance.
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.033 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.034 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.152 2 DEBUG oslo_concurrency.processutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.184 2 DEBUG nova.network.neutron [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Updated VIF entry in instance network info cache for port 90cfcf08-ae06-44fb-8402-0d70909f2e5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.186 2 DEBUG nova.network.neutron [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Updating instance_info_cache with network_info: [{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.213 2 DEBUG oslo_concurrency.lockutils [req-0a999a8f-c121-4c58-bd46-cbc5a1b05c79 req-5754007c-90d4-4784-ad9b-9852c2803397 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.335 2 DEBUG nova.compute.manager [req-fe5e8709-819e-48a7-8cf8-c9cfcb5459e0 req-0f3ff665-e751-491f-aacc-c6a1dfb3931c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.336 2 DEBUG oslo_concurrency.lockutils [req-fe5e8709-819e-48a7-8cf8-c9cfcb5459e0 req-0f3ff665-e751-491f-aacc-c6a1dfb3931c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.336 2 DEBUG oslo_concurrency.lockutils [req-fe5e8709-819e-48a7-8cf8-c9cfcb5459e0 req-0f3ff665-e751-491f-aacc-c6a1dfb3931c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.337 2 DEBUG oslo_concurrency.lockutils [req-fe5e8709-819e-48a7-8cf8-c9cfcb5459e0 req-0f3ff665-e751-491f-aacc-c6a1dfb3931c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.337 2 DEBUG nova.compute.manager [req-fe5e8709-819e-48a7-8cf8-c9cfcb5459e0 req-0f3ff665-e751-491f-aacc-c6a1dfb3931c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Processing event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:20:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240089213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.618 2 DEBUG oslo_concurrency.processutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.623 2 DEBUG nova.compute.provider_tree [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.636 2 DEBUG nova.scheduler.client.report [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:20:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1240089213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.656 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.680 2 INFO nova.scheduler.client.report [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Deleted allocations for instance e4daf401-270a-4e5d-9849-ba90ed06e97c
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.879 2 DEBUG oslo_concurrency.lockutils [None req-5f8c81b9-464e-4439-ae75-3b6e4228c361 5f01a69322de4fe7b1825ca0af51e47a 0d0f2a5220d741d6b9adf50a4787d2d5 - - default default] Lock "e4daf401-270a-4e5d-9849-ba90ed06e97c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.900 2 DEBUG nova.compute.manager [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.909 2 DEBUG nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.909 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.909 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.909 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.910 2 DEBUG nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Processing event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.910 2 DEBUG nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.910 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.910 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.911 2 DEBUG oslo_concurrency.lockutils [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.911 2 DEBUG nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.911 2 WARNING nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received unexpected event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with vm_state building and task_state spawning.
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.911 2 DEBUG nova.compute.manager [req-dff8a8a4-2c21-405d-b9af-2f08563ebf50 req-a33c6890-2a58-42bf-9e2d-dec8f48cc42d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Received event network-vif-deleted-a9982148-39f9-4c14-b0b3-24209c8ba700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.912 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 386 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 636 op/s
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.915 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846843.9154823, 520ba82e-9633-4722-aa98-526012a7e0fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.915 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Resumed (Lifecycle Event)
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.918 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.920 2 INFO nova.virt.libvirt.driver [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Instance spawned successfully.
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.921 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.944 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.948 2 INFO nova.compute.manager [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] instance snapshotting
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.949 2 DEBUG nova.objects.instance [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'flavor' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.950 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.954 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.955 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.955 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.955 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.956 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.956 2 DEBUG nova.virt.libvirt.driver [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:43 compute-0 nova_compute[259550]: 2025-10-07 14:20:43.982 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.013 2 INFO nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Took 9.77 seconds to spawn the instance on the hypervisor.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.014 2 DEBUG nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.041 2 DEBUG oslo_concurrency.lockutils [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.042 2 DEBUG oslo_concurrency.lockutils [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.042 2 DEBUG nova.compute.manager [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.046 2 DEBUG nova.compute.manager [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.046 2 DEBUG nova.objects.instance [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.074 2 DEBUG nova.virt.libvirt.driver [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.084 2 INFO nova.compute.manager [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Took 10.92 seconds to build instance.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.100 2 DEBUG oslo_concurrency.lockutils [None req-419b781d-32db-4e1f-9948-beaa46605c20 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.151 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.152 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846844.1507123, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.152 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Started (Lifecycle Event)
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.157 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.161 2 INFO nova.virt.libvirt.driver [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance spawned successfully.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.162 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.170 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.177 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.188 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.188 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.189 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.189 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.190 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.190 2 DEBUG nova.virt.libvirt.driver [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.196 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.197 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846844.1513987, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.197 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Paused (Lifecycle Event)
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.201 2 INFO nova.virt.libvirt.driver [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Beginning live snapshot process
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.238 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.241 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846844.1559849, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.241 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Resumed (Lifecycle Event)
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.283 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.308 2 INFO nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Took 8.87 seconds to spawn the instance on the hypervisor.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.309 2 DEBUG nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.319 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.415 2 INFO nova.compute.manager [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Took 9.89 seconds to build instance.
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.422 2 DEBUG nova.virt.libvirt.imagebackend [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.433 2 DEBUG oslo_concurrency.lockutils [None req-ffebe58d-cf73-4325-9d08-2ef18ae01d61 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.607 2 DEBUG nova.storage.rbd_utils [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(35eae341e90a4c7187e269d69b77c79b) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 07 14:20:44 compute-0 ceph-mon[74295]: pgmap v1687: 305 pgs: 305 active+clean; 386 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 636 op/s
Oct 07 14:20:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 07 14:20:44 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:44 compute-0 nova_compute[259550]: 2025-10-07 14:20:44.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.011 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.042 2 DEBUG nova.storage.rbd_utils [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk@35eae341e90a4c7187e269d69b77c79b to images/6d174394-f1cd-43d8-9e98-643891927613 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.222 2 DEBUG nova.storage.rbd_utils [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/6d174394-f1cd-43d8-9e98-643891927613 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.421 2 DEBUG nova.compute.manager [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.421 2 DEBUG oslo_concurrency.lockutils [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.422 2 DEBUG oslo_concurrency.lockutils [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.422 2 DEBUG oslo_concurrency.lockutils [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.422 2 DEBUG nova.compute.manager [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:45 compute-0 nova_compute[259550]: 2025-10-07 14:20:45.422 2 WARNING nova.compute.manager [req-f8de8c7a-3e2b-4b44-b4eb-5bd566258db4 req-e1e0db67-5f10-4bb9-9d2e-081481059f88 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state active and task_state None.
Oct 07 14:20:45 compute-0 ceph-mon[74295]: osdmap e217: 3 total, 3 up, 3 in
Oct 07 14:20:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 382 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 9.6 MiB/s wr, 488 op/s
Oct 07 14:20:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 07 14:20:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 07 14:20:46 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.489 2 INFO nova.compute.manager [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Rescuing
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.490 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.490 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquired lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.490 2 DEBUG nova.network.neutron [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.816 2 DEBUG nova.storage.rbd_utils [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(35eae341e90a4c7187e269d69b77c79b) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:46 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:20:46 compute-0 NetworkManager[44949]: <info>  [1759846846.8510] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:46 compute-0 ovn_controller[151684]: 2025-10-07T14:20:46Z|00810|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:20:46 compute-0 ovn_controller[151684]: 2025-10-07T14:20:46Z|00811|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:20:46 compute-0 ovn_controller[151684]: 2025-10-07T14:20:46Z|00812|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:20:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:46.871 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:46.872 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:46.873 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:20:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:46.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df75d561-a8d3-4866-a9d2-5e5798173e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:46.877 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:46 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:20:46 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000042.scope: Consumed 16.102s CPU time.
Oct 07 14:20:46 compute-0 ceph-mon[74295]: pgmap v1689: 305 pgs: 305 active+clean; 382 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 9.6 MiB/s wr, 488 op/s
Oct 07 14:20:46 compute-0 ceph-mon[74295]: osdmap e218: 3 total, 3 up, 3 in
Oct 07 14:20:46 compute-0 systemd-machined[214580]: Machine qemu-91-instance-00000042 terminated.
Oct 07 14:20:46 compute-0 nova_compute[259550]: 2025-10-07 14:20:46.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [NOTICE]   (336774) : haproxy version is 2.8.14-c23fe91
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [NOTICE]   (336774) : path to executable is /usr/sbin/haproxy
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [WARNING]  (336774) : Exiting Master process...
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [WARNING]  (336774) : Exiting Master process...
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [ALERT]    (336774) : Current worker (336776) exited with code 143 (Terminated)
Oct 07 14:20:47 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[336770]: [WARNING]  (336774) : All workers exited. Exiting... (0)
Oct 07 14:20:47 compute-0 systemd[1]: libpod-7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3.scope: Deactivated successfully.
Oct 07 14:20:47 compute-0 podman[342238]: 2025-10-07 14:20:47.023510161 +0000 UTC m=+0.050765792 container died 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3-userdata-shm.mount: Deactivated successfully.
Oct 07 14:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cd32b143c8095c3270c6918928ef66c799fe7c0f958ed20ae7ad2da851af997-merged.mount: Deactivated successfully.
Oct 07 14:20:47 compute-0 podman[342238]: 2025-10-07 14:20:47.06474297 +0000 UTC m=+0.091998591 container cleanup 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:20:47 compute-0 systemd[1]: libpod-conmon-7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3.scope: Deactivated successfully.
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:47 compute-0 podman[342270]: 2025-10-07 14:20:47.139070563 +0000 UTC m=+0.044222339 container remove 7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.146 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77bad7aa-3e4b-4d6f-9f99-51dba8bf991b]: (4, ('Tue Oct  7 02:20:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3)\n7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3\nTue Oct  7 02:20:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3)\n7a730de37c8badaaaaa840e3de093a150903e6047d0d6b7a14f62b6fad20e7e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30010442-d265-4b31-84f1-c741ea98297a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.148 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:47 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:20:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 07 14:20:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 07 14:20:47 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.213 2 INFO nova.virt.libvirt.driver [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance shutdown successfully after 3 seconds.
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.224 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00ffd5d3-7b2d-44ef-968e-5655f3947fe5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.226 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.227 2 DEBUG nova.objects.instance [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.245 2 DEBUG nova.storage.rbd_utils [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(6d174394-f1cd-43d8-9e98-643891927613) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.252 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1f4461-45db-454c-8fbe-4d226c67aee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84a435d3-d243-4d50-a567-b1dabf509a00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.271 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bf0f98-b58a-4276-b0d7-18cef8d8f8d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734362, 'reachable_time': 34965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342306, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.274 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:20:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:47.274 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8c8f81-51e0-464d-8eda-810114d64c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.281 2 DEBUG nova.compute.manager [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.351 2 DEBUG oslo_concurrency.lockutils [None req-e67a3cbb-e927-48c4-b273-e3453396d7a4 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.646 2 DEBUG nova.compute.manager [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.647 2 DEBUG oslo_concurrency.lockutils [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.647 2 DEBUG oslo_concurrency.lockutils [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.647 2 DEBUG oslo_concurrency.lockutils [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.647 2 DEBUG nova.compute.manager [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:47 compute-0 nova_compute[259550]: 2025-10-07 14:20:47.648 2 WARNING nova.compute.manager [req-c3b4d7b4-e666-4e3f-a8e7-c8229827ac2f req-a6289b6b-4c08-4878-99ef-2cb8e61e39d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state stopped and task_state None.
Oct 07 14:20:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 382 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 323 op/s
Oct 07 14:20:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 07 14:20:48 compute-0 ceph-mon[74295]: osdmap e219: 3 total, 3 up, 3 in
Oct 07 14:20:48 compute-0 ceph-mon[74295]: pgmap v1692: 305 pgs: 305 active+clean; 382 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 323 op/s
Oct 07 14:20:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 07 14:20:48 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 07 14:20:48 compute-0 nova_compute[259550]: 2025-10-07 14:20:48.445 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:48 compute-0 nova_compute[259550]: 2025-10-07 14:20:48.474 2 DEBUG oslo_concurrency.lockutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:48 compute-0 nova_compute[259550]: 2025-10-07 14:20:48.474 2 DEBUG oslo_concurrency.lockutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:48 compute-0 nova_compute[259550]: 2025-10-07 14:20:48.475 2 DEBUG nova.network.neutron [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:48 compute-0 nova_compute[259550]: 2025-10-07 14:20:48.475 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'info_cache' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:49 compute-0 podman[342318]: 2025-10-07 14:20:49.080832629 +0000 UTC m=+0.059154274 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 07 14:20:49 compute-0 ovn_controller[151684]: 2025-10-07T14:20:49Z|00813|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:20:49 compute-0 ovn_controller[151684]: 2025-10-07T14:20:49Z|00814|binding|INFO|Releasing lport 5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc from this chassis (sb_readonly=0)
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.097 2 DEBUG nova.network.neutron [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Updating instance_info_cache with network_info: [{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:49 compute-0 podman[342319]: 2025-10-07 14:20:49.160692518 +0000 UTC m=+0.130973630 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:49 compute-0 ceph-mon[74295]: osdmap e220: 3 total, 3 up, 3 in
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.422 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Releasing lock "refresh_cache-db51ce2e-5a2e-4329-a629-6f5fcee5c673" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.796 2 DEBUG nova.compute.manager [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.796 2 DEBUG oslo_concurrency.lockutils [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.797 2 DEBUG oslo_concurrency.lockutils [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.797 2 DEBUG oslo_concurrency.lockutils [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.797 2 DEBUG nova.compute.manager [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.798 2 WARNING nova.compute.manager [req-76f31fae-2b30-49a1-b3f5-fed20f5a6820 req-1c04708b-ab38-4d4e-bbe6-961fd0506cc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state stopped and task_state powering-on.
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.805 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:20:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 441 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 8.0 MiB/s wr, 467 op/s
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.943 2 INFO nova.virt.libvirt.driver [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Snapshot image upload complete
Oct 07 14:20:49 compute-0 nova_compute[259550]: 2025-10-07 14:20:49.944 2 INFO nova.compute.manager [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 5.96 seconds to snapshot the instance on the hypervisor.
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.184 2 DEBUG nova.compute.manager [None req-dbe64067-3463-4c7e-9c34-cd217c4c3505 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 07 14:20:50 compute-0 ceph-mon[74295]: pgmap v1694: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 441 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 8.0 MiB/s wr, 467 op/s
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.416 2 DEBUG nova.network.neutron [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.436 2 DEBUG oslo_concurrency.lockutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.459 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.460 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.473 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.486 2 DEBUG nova.virt.libvirt.vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.487 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.488 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.488 2 DEBUG os_vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb8904b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.498 2 INFO os_vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.504 2 DEBUG nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start _get_guest_xml network_info=[{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.508 2 WARNING nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.514 2 DEBUG nova.virt.libvirt.host [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.515 2 DEBUG nova.virt.libvirt.host [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.518 2 DEBUG nova.virt.libvirt.host [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.519 2 DEBUG nova.virt.libvirt.host [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.519 2 DEBUG nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.520 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.520 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.521 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.521 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.521 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.521 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.522 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.522 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.522 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.523 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.523 2 DEBUG nova.virt.hardware [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.523 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.540 2 DEBUG oslo_concurrency.processutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745212482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:50 compute-0 nova_compute[259550]: 2025-10-07 14:20:50.985 2 DEBUG oslo_concurrency.processutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.016 2 DEBUG oslo_concurrency.processutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:20:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:20:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159843319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.522 2 DEBUG oslo_concurrency.processutils [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.524 2 DEBUG nova.virt.libvirt.vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.524 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.525 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.527 2 DEBUG nova.objects.instance [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.541 2 DEBUG nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <uuid>1d580bbb-a6fd-442c-8524-409ba5c344d0</uuid>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <name>instance-00000042</name>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestJSON-server-1718645090</nova:name>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:20:50</nova:creationTime>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:user uuid="51afbbb19e4a4e2184c89302ccf45428">tempest-ServerActionsTestJSON-263209083-project-member</nova:user>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:project uuid="8379283f8a594c2ab94773d2b49cbb30">tempest-ServerActionsTestJSON-263209083</nova:project>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <nova:port uuid="5fb8904b-227a-4dac-8c3a-82a23ba9832c">
Oct 07 14:20:51 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <system>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="serial">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="uuid">1d580bbb-a6fd-442c-8524-409ba5c344d0</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </system>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <os>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </os>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <features>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </features>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk">
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1d580bbb-a6fd-442c-8524-409ba5c344d0_disk.config">
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:20:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:af:c7:50"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <target dev="tap5fb8904b-22"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0/console.log" append="off"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <video>
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </video>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:20:51 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:20:51 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:20:51 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:20:51 compute-0 nova_compute[259550]: </domain>
Oct 07 14:20:51 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.548 2 DEBUG nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.549 2 DEBUG nova.virt.libvirt.driver [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.550 2 DEBUG nova.virt.libvirt.vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.550 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.551 2 DEBUG nova.network.os_vif_util [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.552 2 DEBUG os_vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.5604] manager: (tap5fb8904b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.566 2 INFO os_vif [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:20:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/745212482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:51 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.6771] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Oct 07 14:20:51 compute-0 ovn_controller[151684]: 2025-10-07T14:20:51Z|00815|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:20:51 compute-0 ovn_controller[151684]: 2025-10-07T14:20:51Z|00816|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.690 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.693 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.696 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:51 compute-0 ovn_controller[151684]: 2025-10-07T14:20:51Z|00817|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:20:51 compute-0 ovn_controller[151684]: 2025-10-07T14:20:51Z|00818|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:20:51 compute-0 systemd-udevd[342439]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20a5d27d-3d42-44b3-bd46-efcfc6b765f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.709 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:20:51 compute-0 nova_compute[259550]: 2025-10-07 14:20:51.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.715 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.715 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9831d6-dee1-4944-a360-460b5b0719da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.716 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c82f3f-229a-41c0-903f-23ea3f8c0fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.7240] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.7250] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.729 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[293ed667-5e01-4fc5-a1ec-8ed6e1ce071c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 systemd-machined[214580]: New machine qemu-100-instance-00000042.
Oct 07 14:20:51 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000042.
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba12c9f3-ba72-412e-9778-38758ba7c5f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.792 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a065088a-f536-4237-928b-4e64c7e61f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.7998] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.798 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1a4669-62cb-4e24-a9df-5c23546ec670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.838 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[69b0c0cc-f1a4-42b1-ad8f-ce04a24a0c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.843 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbaa561-1176-417a-9d6c-c9de04625a50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 NetworkManager[44949]: <info>  [1759846851.8769] device (tapc21c541a-00): carrier: link connected
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.883 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8e53c9a1-198c-4765-ad28-51dffdae2b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.900 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dcacd3b2-5e85-45cb-a058-cedde47825d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743544, 'reachable_time': 24359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342474, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 453 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.6 MiB/s wr, 339 op/s
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.918 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[143315dd-3757-4c3c-89c9-c4518e77cdd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743544, 'tstamp': 743544}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342475, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ced4d49-ff54-4222-aef4-21a1e7b9bf30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743544, 'reachable_time': 24359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342476, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:51.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[085ac908-41fa-4a00-b392-9138de6395b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e1b220-29ca-482b-b074-59b7fb0d6014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.027 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.027 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.027 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:52 compute-0 NetworkManager[44949]: <info>  [1759846852.0300] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct 07 14:20:52 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.032 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:52 compute-0 ovn_controller[151684]: 2025-10-07T14:20:52Z|00819|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.092 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.092 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53f8bd5c-4b4a-4022-9477-50f434904302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.093 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:20:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:52.094 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:20:52 compute-0 podman[342549]: 2025-10-07 14:20:52.472719461 +0000 UTC m=+0.052506178 container create 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.506 2 DEBUG nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:52 compute-0 systemd[1]: Started libpod-conmon-494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576.scope.
Oct 07 14:20:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:52 compute-0 podman[342549]: 2025-10-07 14:20:52.446750205 +0000 UTC m=+0.026536942 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42f759479779f73b135b42398a5a9df28c65a0bcd261cf343c432eee771fab5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.551 2 INFO nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] instance snapshotting
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.555 2 DEBUG nova.objects.instance [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'flavor' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:52 compute-0 podman[342549]: 2025-10-07 14:20:52.565872281 +0000 UTC m=+0.145659018 container init 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 14:20:52 compute-0 podman[342549]: 2025-10-07 14:20:52.574270963 +0000 UTC m=+0.154057700 container start 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:20:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3159843319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:20:52 compute-0 ceph-mon[74295]: pgmap v1695: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 453 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.6 MiB/s wr, 339 op/s
Oct 07 14:20:52 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [NOTICE]   (342566) : New worker (342568) forked
Oct 07 14:20:52 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [NOTICE]   (342566) : Loading success.
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.703 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1d580bbb-a6fd-442c-8524-409ba5c344d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.704 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846852.7020693, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.709 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.710 2 DEBUG nova.compute.manager [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.714 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance rebooted successfully.
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.714 2 DEBUG nova.compute.manager [None req-baa73c61-ff95-43ea-861b-3798ff23938a 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.738 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.742 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.793 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846852.7047257, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.793 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.817 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.820 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.893 2 INFO nova.virt.libvirt.driver [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Beginning live snapshot process
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.985 2 DEBUG nova.compute.manager [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.986 2 DEBUG oslo_concurrency.lockutils [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.987 2 DEBUG oslo_concurrency.lockutils [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.988 2 DEBUG oslo_concurrency.lockutils [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.989 2 DEBUG nova.compute.manager [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:52 compute-0 nova_compute[259550]: 2025-10-07 14:20:52.992 2 WARNING nova.compute.manager [req-e0c8e96c-9e81-426b-a2f9-705498177c8e req-24cbc556-0199-437f-9638-b39026d2f1cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.051 2 DEBUG nova.virt.libvirt.imagebackend [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.063 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846838.0619633, af953de2-8586-45f6-8f4b-a09dec17ef5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.064 2 INFO nova.compute.manager [-] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] VM Stopped (Lifecycle Event)
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.092 2 DEBUG nova.compute.manager [None req-9b16d5f3-b386-4474-be3e-cc6e91fa56d7 - - - - - -] [instance: af953de2-8586-45f6-8f4b-a09dec17ef5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.229 2 DEBUG nova.storage.rbd_utils [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(c10c6a10f6fb4e119e44999470620b5a) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 07 14:20:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 07 14:20:53 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.658 2 DEBUG nova.storage.rbd_utils [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk@c10c6a10f6fb4e119e44999470620b5a to images/c302ac45-7a48-4787-9189-d8cbb86b9e28 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:20:53 compute-0 nova_compute[259550]: 2025-10-07 14:20:53.754 2 DEBUG nova.storage.rbd_utils [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/c302ac45-7a48-4787-9189-d8cbb86b9e28 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:20:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 453 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 362 op/s
Oct 07 14:20:54 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 07 14:20:54 compute-0 nova_compute[259550]: 2025-10-07 14:20:54.245 2 DEBUG nova.storage.rbd_utils [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(c10c6a10f6fb4e119e44999470620b5a) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:20:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 07 14:20:54 compute-0 ceph-mon[74295]: osdmap e221: 3 total, 3 up, 3 in
Oct 07 14:20:54 compute-0 ceph-mon[74295]: pgmap v1697: 305 pgs: 305 active+clean; 453 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 362 op/s
Oct 07 14:20:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 07 14:20:54 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 07 14:20:54 compute-0 nova_compute[259550]: 2025-10-07 14:20:54.634 2 DEBUG nova.storage.rbd_utils [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(c302ac45-7a48-4787-9189-d8cbb86b9e28) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.058 2 DEBUG nova.compute.manager [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.059 2 DEBUG oslo_concurrency.lockutils [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.059 2 DEBUG oslo_concurrency.lockutils [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.060 2 DEBUG oslo_concurrency.lockutils [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.060 2 DEBUG nova.compute.manager [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.061 2 WARNING nova.compute.manager [req-36b05db7-3bf5-4487-8fa8-3382c0559939 req-1fb240d8-dd42-4a64-8811-7a467598bebe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state None.
Oct 07 14:20:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 07 14:20:55 compute-0 ceph-mon[74295]: osdmap e222: 3 total, 3 up, 3 in
Oct 07 14:20:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 07 14:20:55 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846840.8015716, e4daf401-270a-4e5d-9849-ba90ed06e97c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.803 2 INFO nova.compute.manager [-] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] VM Stopped (Lifecycle Event)
Oct 07 14:20:55 compute-0 nova_compute[259550]: 2025-10-07 14:20:55.827 2 DEBUG nova.compute.manager [None req-594a4d59-7979-42c4-bb92-091d0ead8c20 - - - - - -] [instance: e4daf401-270a-4e5d-9849-ba90ed06e97c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 491 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.2 MiB/s wr, 252 op/s
Oct 07 14:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 07 14:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 07 14:20:56 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 07 14:20:56 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 07 14:20:56 compute-0 nova_compute[259550]: 2025-10-07 14:20:56.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:56 compute-0 nova_compute[259550]: 2025-10-07 14:20:56.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:56 compute-0 ceph-mon[74295]: osdmap e223: 3 total, 3 up, 3 in
Oct 07 14:20:56 compute-0 ceph-mon[74295]: pgmap v1700: 305 pgs: 305 active+clean; 491 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.2 MiB/s wr, 252 op/s
Oct 07 14:20:56 compute-0 ceph-mon[74295]: osdmap e224: 3 total, 3 up, 3 in
Oct 07 14:20:56 compute-0 ovn_controller[151684]: 2025-10-07T14:20:56Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:49:a3 10.100.0.7
Oct 07 14:20:56 compute-0 ovn_controller[151684]: 2025-10-07T14:20:56Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:49:a3 10.100.0.7
Oct 07 14:20:56 compute-0 ovn_controller[151684]: 2025-10-07T14:20:56Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:04:6e 10.100.0.12
Oct 07 14:20:56 compute-0 ovn_controller[151684]: 2025-10-07T14:20:56Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:04:6e 10.100.0.12
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.088 2 DEBUG nova.objects.instance [None req-a74ad941-b58a-4e00-bc2b-5cb8fa78ccf2 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.112 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846857.1117723, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.112 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Paused (Lifecycle Event)
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.118 2 INFO nova.virt.libvirt.driver [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Snapshot image upload complete
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.118 2 INFO nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 4.54 seconds to snapshot the instance on the hypervisor.
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.137 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.150 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.447 2 DEBUG nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.448 2 DEBUG nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.449 2 DEBUG nova.compute.manager [None req-b70c0835-517b-4630-b439-5aff79bb42f4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting image c247e1f6-c84a-44fb-9941-cc1c3eb07362 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:20:57 compute-0 NetworkManager[44949]: <info>  [1759846857.6125] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00820|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00821|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00822|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.633 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.634 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.635 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9e8195-c180-49bc-9186-ff89bebd62d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.637 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:20:57 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000042.scope: Consumed 5.401s CPU time.
Oct 07 14:20:57 compute-0 systemd-machined[214580]: Machine qemu-100-instance-00000042 terminated.
Oct 07 14:20:57 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:20:57 compute-0 NetworkManager[44949]: <info>  [1759846857.7310] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Oct 07 14:20:57 compute-0 systemd-udevd[342724]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00823|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00824|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.745 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.751 2 DEBUG nova.compute.manager [None req-a74ad941-b58a-4e00-bc2b-5cb8fa78ccf2 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00825|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00826|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00827|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=1)
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00828|if_status|INFO|Dropped 4 log messages in last 142 seconds (most recently, 142 seconds ago) due to excessive rate
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00829|if_status|INFO|Not setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down as sb is readonly
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00830|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [NOTICE]   (342566) : haproxy version is 2.8.14-c23fe91
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [NOTICE]   (342566) : path to executable is /usr/sbin/haproxy
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [WARNING]  (342566) : Exiting Master process...
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [WARNING]  (342566) : Exiting Master process...
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [ALERT]    (342566) : Current worker (342568) exited with code 143 (Terminated)
Oct 07 14:20:57 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[342562]: [WARNING]  (342566) : All workers exited. Exiting... (0)
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00831|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:20:57 compute-0 ovn_controller[151684]: 2025-10-07T14:20:57Z|00832|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:20:57 compute-0 systemd[1]: libpod-494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576.scope: Deactivated successfully.
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.778 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:20:57 compute-0 podman[342744]: 2025-10-07 14:20:57.77996629 +0000 UTC m=+0.048696466 container died 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:57 compute-0 sudo[342765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:20:57 compute-0 sudo[342765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:57 compute-0 sudo[342765]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576-userdata-shm.mount: Deactivated successfully.
Oct 07 14:20:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 07 14:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42f759479779f73b135b42398a5a9df28c65a0bcd261cf343c432eee771fab5-merged.mount: Deactivated successfully.
Oct 07 14:20:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 07 14:20:57 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 07 14:20:57 compute-0 podman[342744]: 2025-10-07 14:20:57.866808005 +0000 UTC m=+0.135538171 container cleanup 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:20:57 compute-0 systemd[1]: libpod-conmon-494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576.scope: Deactivated successfully.
Oct 07 14:20:57 compute-0 sudo[342798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:20:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 491 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.2 MiB/s wr, 208 op/s
Oct 07 14:20:57 compute-0 sudo[342798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:57 compute-0 sudo[342798]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.927 2 DEBUG nova.compute.manager [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.928 2 DEBUG oslo_concurrency.lockutils [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.928 2 DEBUG oslo_concurrency.lockutils [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.928 2 DEBUG oslo_concurrency.lockutils [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.929 2 DEBUG nova.compute.manager [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:20:57 compute-0 nova_compute[259550]: 2025-10-07 14:20:57.929 2 WARNING nova.compute.manager [req-cc1a313d-5ae4-408b-ad66-f4af30c27819 req-2940a9a6-02cc-4fc6-aff8-6d915a403d9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state None.
Oct 07 14:20:57 compute-0 sudo[342835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:20:57 compute-0 sudo[342835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:57 compute-0 sudo[342835]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:57 compute-0 podman[342812]: 2025-10-07 14:20:57.988169891 +0000 UTC m=+0.088957881 container remove 494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.996 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4e730d-a1c9-49d5-8f16-fbd17816a6b5]: (4, ('Tue Oct  7 02:20:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576)\n494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576\nTue Oct  7 02:20:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576)\n494baeea714739e631a7a60966f83c14bdfd20b01005f18d9be781f70a783576\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:57.999 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6db1c6-cb3a-4737-89ab-98039f8184cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.000 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:20:58 compute-0 nova_compute[259550]: 2025-10-07 14:20:58.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:58 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:20:58 compute-0 nova_compute[259550]: 2025-10-07 14:20:58.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.032 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbd1405-8a10-4662-93fc-ab955d0f2f12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 sudo[342862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:20:58 compute-0 sudo[342862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.058 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afbcff2c-888d-476e-83f6-8c62d49e5e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17b4a700-5a8a-4769-869b-56a10a600088]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c47c9b6-8811-41a3-be39-f202c8cad62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743535, 'reachable_time': 24434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342892, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.085 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.085 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7d1e22-443a-43ea-aafa-f447d8ccd2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.086 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.087 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22af7873-8f8f-4010-bf4a-a9388d181aa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.093 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.095 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:20:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:20:58.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09f7607f-a4ab-4988-8629-e21150ff01b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:20:58 compute-0 sudo[342862]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:20:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 28cbd7a5-7081-4bd4-b1d6-8fb12823f0bc does not exist
Oct 07 14:20:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c3c6cf7f-f135-4fe6-b167-6b37e539b78f does not exist
Oct 07 14:20:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5ef46fa6-1967-4229-8dbb-fa74fbb47811 does not exist
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:20:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:20:58 compute-0 sudo[342922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:20:58 compute-0 sudo[342922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:58 compute-0 sudo[342922]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:58 compute-0 sudo[342947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:20:58 compute-0 sudo[342947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:58 compute-0 sudo[342947]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:58 compute-0 sudo[342972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:20:58 compute-0 sudo[342972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:58 compute-0 sudo[342972]: pam_unix(sudo:session): session closed for user root
Oct 07 14:20:58 compute-0 ceph-mon[74295]: osdmap e225: 3 total, 3 up, 3 in
Oct 07 14:20:58 compute-0 ceph-mon[74295]: pgmap v1703: 305 pgs: 305 active+clean; 491 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.2 MiB/s wr, 208 op/s
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:20:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:20:58 compute-0 sudo[342997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:20:58 compute-0 sudo[342997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.271439671 +0000 UTC m=+0.045787711 container create 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:20:59 compute-0 systemd[1]: Started libpod-conmon-3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19.scope.
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.251651848 +0000 UTC m=+0.025999868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:20:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.364836568 +0000 UTC m=+0.139184588 container init 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.372753207 +0000 UTC m=+0.147101207 container start 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.376798554 +0000 UTC m=+0.151146574 container attach 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:20:59 compute-0 systemd[1]: libpod-3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19.scope: Deactivated successfully.
Oct 07 14:20:59 compute-0 compassionate_zhukovsky[343079]: 167 167
Oct 07 14:20:59 compute-0 conmon[343079]: conmon 3224b01d4127728f4292 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19.scope/container/memory.events
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.381242912 +0000 UTC m=+0.155590922 container died 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-87950c5c67a3992f04b67a36f07154bc5bbf1ac6753c7ed188c6ea48d9ee15a1-merged.mount: Deactivated successfully.
Oct 07 14:20:59 compute-0 podman[343062]: 2025-10-07 14:20:59.427864853 +0000 UTC m=+0.202212853 container remove 3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:20:59 compute-0 systemd[1]: libpod-conmon-3224b01d4127728f4292e3b29790e14a7112dcc531b7b0349e25adf91be3fc19.scope: Deactivated successfully.
Oct 07 14:20:59 compute-0 podman[343102]: 2025-10-07 14:20:59.618366405 +0000 UTC m=+0.040871700 container create fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:20:59 compute-0 systemd[1]: Started libpod-conmon-fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d.scope.
Oct 07 14:20:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:20:59 compute-0 podman[343102]: 2025-10-07 14:20:59.601435678 +0000 UTC m=+0.023940993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:20:59 compute-0 podman[343102]: 2025-10-07 14:20:59.707046569 +0000 UTC m=+0.129551864 container init fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:20:59 compute-0 podman[343102]: 2025-10-07 14:20:59.714739081 +0000 UTC m=+0.137244376 container start fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:20:59 compute-0 podman[343102]: 2025-10-07 14:20:59.717856563 +0000 UTC m=+0.140361848 container attach fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.754 2 INFO nova.compute.manager [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Resuming
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.756 2 DEBUG nova.objects.instance [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'flavor' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.795 2 DEBUG oslo_concurrency.lockutils [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.796 2 DEBUG oslo_concurrency.lockutils [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquired lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.796 2 DEBUG nova.network.neutron [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:20:59 compute-0 nova_compute[259550]: 2025-10-07 14:20:59.883 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:20:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 520 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 16 MiB/s wr, 555 op/s
Oct 07 14:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:00.052 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:00.053 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.156 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.156 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 WARNING nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state resuming.
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.157 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 WARNING nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state resuming.
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.158 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 WARNING nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state resuming.
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.159 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.160 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.160 2 DEBUG oslo_concurrency.lockutils [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.160 2 DEBUG nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.160 2 WARNING nova.compute.manager [req-d4c8b4a3-5abc-4185-a5eb-74d5059188b1 req-e61e024e-f760-4882-8878-7d460349d97d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-unplugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state resuming.
Oct 07 14:21:00 compute-0 sad_knuth[343120]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:21:00 compute-0 sad_knuth[343120]: --> relative data size: 1.0
Oct 07 14:21:00 compute-0 sad_knuth[343120]: --> All data devices are unavailable
Oct 07 14:21:00 compute-0 systemd[1]: libpod-fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d.scope: Deactivated successfully.
Oct 07 14:21:00 compute-0 podman[343102]: 2025-10-07 14:21:00.819864545 +0000 UTC m=+1.242369920 container died fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:21:00 compute-0 systemd[1]: libpod-fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d.scope: Consumed 1.036s CPU time.
Oct 07 14:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f24d8b98245b6f457d77e08f6efba35d730d3d3b9d23bdccfff224b29f0fc416-merged.mount: Deactivated successfully.
Oct 07 14:21:00 compute-0 podman[343102]: 2025-10-07 14:21:00.871485269 +0000 UTC m=+1.293990564 container remove fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_knuth, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.873 2 DEBUG nova.network.neutron [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [{"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:00 compute-0 systemd[1]: libpod-conmon-fc68064ce287f5dbcf9d124d518a512bb4483697dc34355ba076b5059c7f0d7d.scope: Deactivated successfully.
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.890 2 DEBUG oslo_concurrency.lockutils [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Releasing lock "refresh_cache-1d580bbb-a6fd-442c-8524-409ba5c344d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.894 2 DEBUG nova.virt.libvirt.vif [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:20:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.894 2 DEBUG nova.network.os_vif_util [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.895 2 DEBUG nova.network.os_vif_util [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.895 2 DEBUG os_vif [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:00 compute-0 sudo[342997]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb8904b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb8904b-22, col_values=(('external_ids', {'iface-id': '5fb8904b-227a-4dac-8c3a-82a23ba9832c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:c7:50', 'vm-uuid': '1d580bbb-a6fd-442c-8524-409ba5c344d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.900 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.900 2 INFO os_vif [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:21:00 compute-0 nova_compute[259550]: 2025-10-07 14:21:00.932 2 DEBUG nova.objects.instance [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:00 compute-0 sudo[343161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:21:00 compute-0 sudo[343161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:00 compute-0 sudo[343161]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 07 14:21:00 compute-0 ceph-mon[74295]: pgmap v1704: 305 pgs: 305 active+clean; 520 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 16 MiB/s wr, 555 op/s
Oct 07 14:21:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 07 14:21:00 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 07 14:21:01 compute-0 kernel: tap5fb8904b-22: entered promiscuous mode
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.0030] manager: (tap5fb8904b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 ovn_controller[151684]: 2025-10-07T14:21:01Z|00833|binding|INFO|Claiming lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c for this chassis.
Oct 07 14:21:01 compute-0 ovn_controller[151684]: 2025-10-07T14:21:01Z|00834|binding|INFO|5fb8904b-227a-4dac-8c3a-82a23ba9832c: Claiming fa:16:3e:af:c7:50 10.100.0.3
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.013 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '15', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.014 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 bound to our chassis
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.016 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:21:01 compute-0 ovn_controller[151684]: 2025-10-07T14:21:01Z|00835|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c ovn-installed in OVS
Oct 07 14:21:01 compute-0 ovn_controller[151684]: 2025-10-07T14:21:01Z|00836|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c up in Southbound
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.027 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2a3799-8894-4f83-b7da-dceacd0e9eaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.028 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc21c541a-01 in ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.030 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc21c541a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9962b8-2ecf-4226-9d51-9d8909c1abae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.033 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76688adf-7ef6-4e13-8c81-18f943721c2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 sudo[343190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:21:01 compute-0 sudo[343190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:01 compute-0 systemd-udevd[343225]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:01 compute-0 sudo[343190]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.044 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdf54e9-74b2-4200-83b9-d8140ac084c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.0567] device (tap5fb8904b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:01 compute-0 systemd-machined[214580]: New machine qemu-101-instance-00000042.
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.0583] device (tap5fb8904b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:01 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000042.
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.070 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf220744-304b-4b37-9736-920b1a69dad8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.100 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e128a692-2485-481e-a1e4-92f654c2e17c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d92496b2-554c-4c13-ba9a-6686f9561c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 sudo[343230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.1080] manager: (tapc21c541a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct 07 14:21:01 compute-0 sudo[343230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:01 compute-0 sudo[343230]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.140 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b7728752-0b81-49dc-a3e3-f1a0743ac6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.144 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b65d306d-07fd-4e22-a4f7-314d5c24043d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.1662] device (tapc21c541a-00): carrier: link connected
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.171 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2ef8af-2301-4e60-ac8f-3f0e0309a1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 sudo[343266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:21:01 compute-0 sudo[343266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.191 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6bdd6b-ee46-4602-a04d-44ad4dbd6a5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744473, 'reachable_time': 30831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343308, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.210 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d27a651-266f-40cd-8f14-e37cabeab468]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:8b48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 744473, 'tstamp': 744473}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343309, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 07 14:21:01 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.232 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81703f96-3fc9-44a0-bd57-8eb1eafddda6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc21c541a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:8b:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744473, 'reachable_time': 30831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343310, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02455072-f531-4f0c-9062-58b3f434fce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1440b89-8d7f-4e05-8194-01cf32b9b7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.354 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.355 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.355 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc21c541a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 NetworkManager[44949]: <info>  [1759846861.3582] manager: (tapc21c541a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct 07 14:21:01 compute-0 kernel: tapc21c541a-00: entered promiscuous mode
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.360 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc21c541a-00, col_values=(('external_ids', {'iface-id': '5989e5ed-c89e-446a-960e-503196fd3680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 ovn_controller[151684]: 2025-10-07T14:21:01Z|00837|binding|INFO|Releasing lport 5989e5ed-c89e-446a-960e-503196fd3680 from this chassis (sb_readonly=0)
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.383 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc97de-418c-466e-8ae0-555f89ec0130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.385 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c21c541a-0d39-4ceb-ba44-53a9c1280779.pid.haproxy
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c21c541a-0d39-4ceb-ba44-53a9c1280779
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:21:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:01.387 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'env', 'PROCESS_TAG=haproxy-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c21c541a-0d39-4ceb-ba44-53a9c1280779.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 podman[343357]: 2025-10-07 14:21:01.56049387 +0000 UTC m=+0.053217737 container create b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:21:01 compute-0 systemd[1]: Started libpod-conmon-b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465.scope.
Oct 07 14:21:01 compute-0 podman[343357]: 2025-10-07 14:21:01.538341066 +0000 UTC m=+0.031064963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:21:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:01 compute-0 podman[343357]: 2025-10-07 14:21:01.654020871 +0000 UTC m=+0.146744768 container init b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:21:01 compute-0 podman[343357]: 2025-10-07 14:21:01.667406845 +0000 UTC m=+0.160130722 container start b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:21:01 compute-0 podman[343357]: 2025-10-07 14:21:01.671987246 +0000 UTC m=+0.164711153 container attach b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:21:01 compute-0 upbeat_curie[343408]: 167 167
Oct 07 14:21:01 compute-0 systemd[1]: libpod-b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465.scope: Deactivated successfully.
Oct 07 14:21:01 compute-0 conmon[343408]: conmon b3e57b45478e60e20553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465.scope/container/memory.events
Oct 07 14:21:01 compute-0 nova_compute[259550]: 2025-10-07 14:21:01.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:01 compute-0 podman[343424]: 2025-10-07 14:21:01.723691552 +0000 UTC m=+0.030046095 container died b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c074802da1393ee2c0f7f26639cd8981b08499d30cb4a5cac7d05afbf1b29222-merged.mount: Deactivated successfully.
Oct 07 14:21:01 compute-0 podman[343424]: 2025-10-07 14:21:01.788442742 +0000 UTC m=+0.094797265 container remove b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:21:01 compute-0 systemd[1]: libpod-conmon-b3e57b45478e60e20553835b4ef468f76d004c289a94fb66bf57ed87958d4465.scope: Deactivated successfully.
Oct 07 14:21:01 compute-0 podman[343452]: 2025-10-07 14:21:01.843562018 +0000 UTC m=+0.055073126 container create 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:21:01 compute-0 systemd[1]: Started libpod-conmon-32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50.scope.
Oct 07 14:21:01 compute-0 podman[343452]: 2025-10-07 14:21:01.815644891 +0000 UTC m=+0.027156079 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:21:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 519 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 13 MiB/s wr, 460 op/s
Oct 07 14:21:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f7bfc737bb29f3b15499b8892f2760c43b4863d41aec4df4519e830f646ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:01 compute-0 podman[343452]: 2025-10-07 14:21:01.950952075 +0000 UTC m=+0.162463203 container init 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:21:01 compute-0 podman[343452]: 2025-10-07 14:21:01.957683593 +0000 UTC m=+0.169194701 container start 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:21:01 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [NOTICE]   (343488) : New worker (343492) forked
Oct 07 14:21:01 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [NOTICE]   (343488) : Loading success.
Oct 07 14:21:01 compute-0 ceph-mon[74295]: osdmap e226: 3 total, 3 up, 3 in
Oct 07 14:21:01 compute-0 ceph-mon[74295]: osdmap e227: 3 total, 3 up, 3 in
Oct 07 14:21:01 compute-0 podman[343474]: 2025-10-07 14:21:01.994501805 +0000 UTC m=+0.048021259 container create 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:21:02 compute-0 systemd[1]: Started libpod-conmon-0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5.scope.
Oct 07 14:21:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:02 compute-0 podman[343474]: 2025-10-07 14:21:01.973296355 +0000 UTC m=+0.026815749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53832f7217c2cd0fca9f1615e852ceb6506c31ede409ebba3afcb6c1a4912cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53832f7217c2cd0fca9f1615e852ceb6506c31ede409ebba3afcb6c1a4912cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53832f7217c2cd0fca9f1615e852ceb6506c31ede409ebba3afcb6c1a4912cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53832f7217c2cd0fca9f1615e852ceb6506c31ede409ebba3afcb6c1a4912cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:02 compute-0 podman[343474]: 2025-10-07 14:21:02.082653754 +0000 UTC m=+0.136173138 container init 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:21:02 compute-0 podman[343474]: 2025-10-07 14:21:02.089556116 +0000 UTC m=+0.143075480 container start 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:21:02 compute-0 podman[343474]: 2025-10-07 14:21:02.09346738 +0000 UTC m=+0.146986874 container attach 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:21:02 compute-0 kernel: tap90cfcf08-ae (unregistering): left promiscuous mode
Oct 07 14:21:02 compute-0 NetworkManager[44949]: <info>  [1759846862.1705] device (tap90cfcf08-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:02 compute-0 ovn_controller[151684]: 2025-10-07T14:21:02Z|00838|binding|INFO|Releasing lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c from this chassis (sb_readonly=0)
Oct 07 14:21:02 compute-0 ovn_controller[151684]: 2025-10-07T14:21:02Z|00839|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c down in Southbound
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:02 compute-0 ovn_controller[151684]: 2025-10-07T14:21:02Z|00840|binding|INFO|Removing iface tap90cfcf08-ae ovn-installed in OVS
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.194 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:49:a3 10.100.0.7'], port_security=['fa:16:3e:0e:49:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'db51ce2e-5a2e-4329-a629-6f5fcee5c673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=90cfcf08-ae06-44fb-8402-0d70909f2e5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.195 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfcf08-ae06-44fb-8402-0d70909f2e5c in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c unbound from our chassis
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.197 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 07 14:21:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 07 14:21:02 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.229 2 DEBUG nova.compute.manager [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.229 2 DEBUG oslo_concurrency.lockutils [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.230 2 DEBUG oslo_concurrency.lockutils [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.230 2 DEBUG oslo_concurrency.lockutils [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.230 2 DEBUG nova.compute.manager [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.230 2 WARNING nova.compute.manager [req-d1307886-d94c-49d9-893e-7b2cf64f42d5 req-1f3ad2b7-2c98-452a-bbdc-f7c3806b2af1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state suspended and task_state resuming.
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.234 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d8ab6c-735c-4a4b-8dbf-3f310ad5362b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 07 14:21:02 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Consumed 14.615s CPU time.
Oct 07 14:21:02 compute-0 systemd-machined[214580]: Machine qemu-99-instance-00000051 terminated.
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.270 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 1d580bbb-a6fd-442c-8524-409ba5c344d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.270 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846862.2695487, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.270 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Started (Lifecycle Event)
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.272 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ca0472-0f14-47c6-b1a1-475fcabfe3ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.275 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec248d4-dce1-45a9-9678-b4074875b9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.289 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.297 2 DEBUG nova.compute.manager [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.298 2 DEBUG nova.objects.instance [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.299 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.321 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9d742af6-a38b-4c8e-a7b1-4e003e36845f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.328 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846862.27753, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Resumed (Lifecycle Event)
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.331 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance running successfully.
Oct 07 14:21:02 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.334 2 DEBUG nova.virt.libvirt.guest [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.334 2 DEBUG nova.compute.manager [None req-cb9e6129-66f8-4a67-a276-ade8ee5737f1 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.345 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.347 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.374 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:21:02 compute-0 NetworkManager[44949]: <info>  [1759846862.4267] manager: (tap90cfcf08-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fb014782-bcc2-497b-8bff-0c98f304357e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343517, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb975f2-5c42-4d5b-aed5-08b78b2ebea7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742523, 'tstamp': 742523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343528, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742527, 'tstamp': 742527}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343528, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.493 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.499 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0d357a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.499 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.499 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f0d357a-c0, col_values=(('external_ids', {'iface-id': '5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:02.499 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.908 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance shutdown successfully after 13 seconds.
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.912 2 INFO nova.virt.libvirt.driver [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance destroyed successfully.
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.912 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'numa_topology' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.928 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Attempting rescue
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.929 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.932 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.933 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Creating image(s)
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]: {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     "0": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "devices": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "/dev/loop3"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             ],
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_name": "ceph_lv0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_size": "21470642176",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "name": "ceph_lv0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "tags": {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_name": "ceph",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.crush_device_class": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.encrypted": "0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_id": "0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.vdo": "0"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             },
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "vg_name": "ceph_vg0"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         }
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     ],
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     "1": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "devices": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "/dev/loop4"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             ],
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_name": "ceph_lv1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_size": "21470642176",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "name": "ceph_lv1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "tags": {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_name": "ceph",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.crush_device_class": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.encrypted": "0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_id": "1",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.vdo": "0"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             },
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "vg_name": "ceph_vg1"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         }
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     ],
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     "2": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "devices": [
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "/dev/loop5"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             ],
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_name": "ceph_lv2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_size": "21470642176",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "name": "ceph_lv2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "tags": {
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.cluster_name": "ceph",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.crush_device_class": "",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.encrypted": "0",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osd_id": "2",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:                 "ceph.vdo": "0"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             },
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "type": "block",
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:             "vg_name": "ceph_vg2"
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:         }
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]:     ]
Oct 07 14:21:02 compute-0 dreamy_goldstine[343504]: }
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.967 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:02 compute-0 systemd[1]: libpod-0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5.scope: Deactivated successfully.
Oct 07 14:21:02 compute-0 nova_compute[259550]: 2025-10-07 14:21:02.972 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'trusted_certs' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:02 compute-0 podman[343474]: 2025-10-07 14:21:02.974379861 +0000 UTC m=+1.027899245 container died 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:21:02 compute-0 ceph-mon[74295]: pgmap v1707: 305 pgs: 305 active+clean; 519 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 13 MiB/s wr, 460 op/s
Oct 07 14:21:02 compute-0 ceph-mon[74295]: osdmap e228: 3 total, 3 up, 3 in
Oct 07 14:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-53832f7217c2cd0fca9f1615e852ceb6506c31ede409ebba3afcb6c1a4912cf7-merged.mount: Deactivated successfully.
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.029 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:03 compute-0 podman[343474]: 2025-10-07 14:21:03.040800506 +0000 UTC m=+1.094319870 container remove 0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_goldstine, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.052 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.054 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:03 compute-0 systemd[1]: libpod-conmon-0788f1e422ffa9b5e78017ed8c559699720f0ce0816cf5362a1fc4c963141ac5.scope: Deactivated successfully.
Oct 07 14:21:03 compute-0 sudo[343266]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.131 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.132 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.133 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.133 2 DEBUG oslo_concurrency.lockutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:03 compute-0 sudo[343600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:21:03 compute-0 sudo[343600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:03 compute-0 sudo[343600]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.158 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.169 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:03 compute-0 sudo[343642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:21:03 compute-0 sudo[343642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:03 compute-0 sudo[343642]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:03 compute-0 sudo[343671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:21:03 compute-0 sudo[343671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:03 compute-0 sudo[343671]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:03 compute-0 sudo[343711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:21:03 compute-0 sudo[343711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.517 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.518 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'migration_context' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.530 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.531 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start _get_guest_xml network_info=[{"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "vif_mac": "fa:16:3e:0e:49:a3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.532 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'resources' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.548 2 WARNING nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.555 2 DEBUG nova.virt.libvirt.host [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.557 2 DEBUG nova.virt.libvirt.host [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.560 2 DEBUG nova.virt.libvirt.host [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.561 2 DEBUG nova.virt.libvirt.host [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.562 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.562 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.562 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.562 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.563 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.563 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.563 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.563 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.564 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.564 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.564 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.564 2 DEBUG nova.virt.hardware [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.565 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'vcpu_model' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:03 compute-0 nova_compute[259550]: 2025-10-07 14:21:03.582 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.70228895 +0000 UTC m=+0.047653590 container create 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:21:03 compute-0 systemd[1]: Started libpod-conmon-0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e.scope.
Oct 07 14:21:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.773907092 +0000 UTC m=+0.119271752 container init 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.680644898 +0000 UTC m=+0.026009578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.782674034 +0000 UTC m=+0.128038664 container start 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.786077963 +0000 UTC m=+0.131442593 container attach 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:21:03 compute-0 eager_mclean[343813]: 167 167
Oct 07 14:21:03 compute-0 systemd[1]: libpod-0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e.scope: Deactivated successfully.
Oct 07 14:21:03 compute-0 conmon[343813]: conmon 0fc42b8596082053b155 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e.scope/container/memory.events
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.788631311 +0000 UTC m=+0.133995941 container died 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-defd5e13204e8ad2dd357d03fe74bc9c16024472b152a936c50b09c85440b34f-merged.mount: Deactivated successfully.
Oct 07 14:21:03 compute-0 podman[343778]: 2025-10-07 14:21:03.83100978 +0000 UTC m=+0.176374410 container remove 0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:21:03 compute-0 systemd[1]: libpod-conmon-0fc42b8596082053b155ecaf85ec5ffa76562e28b014d71fe7c29213e7ee897e.scope: Deactivated successfully.
Oct 07 14:21:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 444 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 14 MiB/s wr, 564 op/s
Oct 07 14:21:04 compute-0 podman[343836]: 2025-10-07 14:21:04.008241932 +0000 UTC m=+0.041268921 container create 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 07 14:21:04 compute-0 systemd[1]: Started libpod-conmon-26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898.scope.
Oct 07 14:21:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802449095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.080 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.082 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:04 compute-0 podman[343836]: 2025-10-07 14:21:03.98810801 +0000 UTC m=+0.021135009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:21:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0aaf0a24b21b1bede5a67ef280e020180f5f90ecbf45b4473a2ddbf01dd54f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0aaf0a24b21b1bede5a67ef280e020180f5f90ecbf45b4473a2ddbf01dd54f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0aaf0a24b21b1bede5a67ef280e020180f5f90ecbf45b4473a2ddbf01dd54f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0aaf0a24b21b1bede5a67ef280e020180f5f90ecbf45b4473a2ddbf01dd54f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:04 compute-0 podman[343836]: 2025-10-07 14:21:04.107130564 +0000 UTC m=+0.140157593 container init 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:21:04 compute-0 podman[343836]: 2025-10-07 14:21:04.117010645 +0000 UTC m=+0.150037634 container start 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:04 compute-0 podman[343836]: 2025-10-07 14:21:04.120869017 +0000 UTC m=+0.153896026 container attach 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.467 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.467 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.468 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.470 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.470 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.471 2 INFO nova.compute.manager [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Terminating instance
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.475 2 DEBUG nova.compute.manager [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.485 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.485 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:04 compute-0 kernel: tap5fb8904b-22 (unregistering): left promiscuous mode
Oct 07 14:21:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163652467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:04 compute-0 NetworkManager[44949]: <info>  [1759846864.5256] device (tap5fb8904b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.528 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:04 compute-0 ovn_controller[151684]: 2025-10-07T14:21:04Z|00841|binding|INFO|Releasing lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c from this chassis (sb_readonly=0)
Oct 07 14:21:04 compute-0 ovn_controller[151684]: 2025-10-07T14:21:04Z|00842|binding|INFO|Setting lport 5fb8904b-227a-4dac-8c3a-82a23ba9832c down in Southbound
Oct 07 14:21:04 compute-0 ovn_controller[151684]: 2025-10-07T14:21:04Z|00843|binding|INFO|Removing iface tap5fb8904b-22 ovn-installed in OVS
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.552 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.552 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 07 14:21:04 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000042.scope: Consumed 3.186s CPU time.
Oct 07 14:21:04 compute-0 systemd-machined[214580]: Machine qemu-101-instance-00000042 terminated.
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.603 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:c7:50 10.100.0.3'], port_security=['fa:16:3e:af:c7:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1d580bbb-a6fd-442c-8524-409ba5c344d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8379283f8a594c2ab94773d2b49cbb30', 'neutron:revision_number': '15', 'neutron:security_group_ids': 'c8e218c0-ab29-4b01-8bdb-1da00e3ea9f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44f26794-be65-4c90-a6ef-3a0e5efa6810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb8904b-227a-4dac-8c3a-82a23ba9832c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.604 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb8904b-227a-4dac-8c3a-82a23ba9832c in datapath c21c541a-0d39-4ceb-ba44-53a9c1280779 unbound from our chassis
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.605 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c21c541a-0d39-4ceb-ba44-53a9c1280779, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.606 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b5c846-c21a-4038-b395-63ea48a789c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.606 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 namespace which is not needed anymore
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.724 2 INFO nova.virt.libvirt.driver [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Instance destroyed successfully.
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.725 2 DEBUG nova.objects.instance [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lazy-loading 'resources' on Instance uuid 1d580bbb-a6fd-442c-8524-409ba5c344d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.742 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.743 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.744 2 DEBUG nova.virt.libvirt.vif [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:17:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1718645090',display_name='tempest-ServerActionsTestJSON-server-1718645090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1718645090',id=66,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD46Fy5CoN+Ml3EBoxKcZ2Dc+Ex8Fs/j0JXzrdEiunFq6ivVsrIblCZq3tN14fyHQcfewP1+4i7OCcMUFM6dTRwhbKPS7W3sI5N9qFb8Gfb+awrk0XppQafXmqkzDHedqQ==',key_name='tempest-keypair-607419023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8379283f8a594c2ab94773d2b49cbb30',ramdisk_id='',reservation_id='r-0mdnuhhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-263209083',owner_user_name='tempest-ServerActionsTestJSON-263209083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51afbbb19e4a4e2184c89302ccf45428',uuid=1d580bbb-a6fd-442c-8524-409ba5c344d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.745 2 DEBUG nova.network.os_vif_util [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converting VIF {"id": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "address": "fa:16:3e:af:c7:50", "network": {"id": "c21c541a-0d39-4ceb-ba44-53a9c1280779", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-433175884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8379283f8a594c2ab94773d2b49cbb30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb8904b-22", "ovs_interfaceid": "5fb8904b-227a-4dac-8c3a-82a23ba9832c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.746 2 DEBUG nova.network.os_vif_util [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.746 2 DEBUG os_vif [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb8904b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.758 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.758 2 INFO nova.compute.claims [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.765 2 INFO os_vif [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:c7:50,bridge_name='br-int',has_traffic_filtering=True,id=5fb8904b-227a-4dac-8c3a-82a23ba9832c,network=Network(c21c541a-0d39-4ceb-ba44-53a9c1280779),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb8904b-22')
Oct 07 14:21:04 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [NOTICE]   (343488) : haproxy version is 2.8.14-c23fe91
Oct 07 14:21:04 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [NOTICE]   (343488) : path to executable is /usr/sbin/haproxy
Oct 07 14:21:04 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [ALERT]    (343488) : Current worker (343492) exited with code 143 (Terminated)
Oct 07 14:21:04 compute-0 neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779[343466]: [WARNING]  (343488) : All workers exited. Exiting... (0)
Oct 07 14:21:04 compute-0 systemd[1]: libpod-32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50.scope: Deactivated successfully.
Oct 07 14:21:04 compute-0 podman[343919]: 2025-10-07 14:21:04.777872063 +0000 UTC m=+0.062183163 container died 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50-userdata-shm.mount: Deactivated successfully.
Oct 07 14:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4f7bfc737bb29f3b15499b8892f2760c43b4863d41aec4df4519e830f646ca-merged.mount: Deactivated successfully.
Oct 07 14:21:04 compute-0 podman[343919]: 2025-10-07 14:21:04.852491635 +0000 UTC m=+0.136802705 container cleanup 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:21:04 compute-0 systemd[1]: libpod-conmon-32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50.scope: Deactivated successfully.
Oct 07 14:21:04 compute-0 podman[343988]: 2025-10-07 14:21:04.926464438 +0000 UTC m=+0.048895331 container remove 32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.932 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38505c04-c047-4783-8408-f2e2ae123700]: (4, ('Tue Oct  7 02:21:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50)\n32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50\nTue Oct  7 02:21:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 (32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50)\n32f75f1463f8e278e960246a323b3677443e8f8f1ecdf9f150ed4b8aa467bb50\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.934 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97a97a26-2b0e-4aae-b098-ccb02672d918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.935 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc21c541a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 kernel: tapc21c541a-00: left promiscuous mode
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.963 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[50585cc0-4f62-4a99-a75d-cdc390ef914b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:04 compute-0 nova_compute[259550]: 2025-10-07 14:21:04.974 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.997 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[267b38a8-8dd9-4338-a5ba-d08379cd2766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:04.999 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59281be4-669b-4813-9b1f-ca4ec60cc0da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:05 compute-0 ceph-mon[74295]: pgmap v1709: 305 pgs: 305 active+clean; 444 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 14 MiB/s wr, 564 op/s
Oct 07 14:21:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3802449095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/163652467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:05.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4da51d86-63a8-41b2-927d-a765d14018fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744465, 'reachable_time': 44023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344018, 'error': None, 'target': 'ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:05.029 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c21c541a-0d39-4ceb-ba44-53a9c1280779 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:21:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:05.030 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[70879e6a-73ff-4459-af57-053b2c48727b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:05 compute-0 systemd[1]: run-netns-ovnmeta\x2dc21c541a\x2d0d39\x2d4ceb\x2dba44\x2d53a9c1280779.mount: Deactivated successfully.
Oct 07 14:21:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543718082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.092 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.094 2 DEBUG nova.virt.libvirt.vif [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-520844186',display_name='tempest-ServerRescueNegativeTestJSON-server-520844186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-520844186',id=81,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-h377dr90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:20:44Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=db51ce2e-5a2e-4329-a629-6f5fcee5c673,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "vif_mac": "fa:16:3e:0e:49:a3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.094 2 DEBUG nova.network.os_vif_util [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "vif_mac": "fa:16:3e:0e:49:a3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.095 2 DEBUG nova.network.os_vif_util [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.096 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'pci_devices' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.154 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <uuid>db51ce2e-5a2e-4329-a629-6f5fcee5c673</uuid>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <name>instance-00000051</name>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-520844186</nova:name>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:03</nova:creationTime>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:user uuid="76c41b2458aa4c37a8bdf5b4970f70e7">tempest-ServerRescueNegativeTestJSON-2134412460-project-member</nova:user>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:project uuid="b4a4526130e0488b963baba67ee5d8db">tempest-ServerRescueNegativeTestJSON-2134412460</nova:project>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <nova:port uuid="90cfcf08-ae06-44fb-8402-0d70909f2e5c">
Oct 07 14:21:05 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="serial">db51ce2e-5a2e-4329-a629-6f5fcee5c673</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="uuid">db51ce2e-5a2e-4329-a629-6f5fcee5c673</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.rescue">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <target dev="vdb" bus="virtio"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config.rescue">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:0e:49:a3"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <target dev="tap90cfcf08-ae"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/console.log" append="off"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:05 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:05 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:05 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:05 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:05 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.165 2 INFO nova.virt.libvirt.driver [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance destroyed successfully.
Oct 07 14:21:05 compute-0 trusting_burnell[343853]: {
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_id": 2,
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "type": "bluestore"
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     },
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_id": 1,
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "type": "bluestore"
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     },
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_id": 0,
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:         "type": "bluestore"
Oct 07 14:21:05 compute-0 trusting_burnell[343853]:     }
Oct 07 14:21:05 compute-0 trusting_burnell[343853]: }
Oct 07 14:21:05 compute-0 systemd[1]: libpod-26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898.scope: Deactivated successfully.
Oct 07 14:21:05 compute-0 systemd[1]: libpod-26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898.scope: Consumed 1.072s CPU time.
Oct 07 14:21:05 compute-0 podman[343836]: 2025-10-07 14:21:05.249449821 +0000 UTC m=+1.282476800 container died 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.269 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.270 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.270 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.270 2 DEBUG nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] No VIF found with MAC fa:16:3e:0e:49:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.271 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Using config drive
Oct 07 14:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc0aaf0a24b21b1bede5a67ef280e020180f5f90ecbf45b4473a2ddbf01dd54f-merged.mount: Deactivated successfully.
Oct 07 14:21:05 compute-0 podman[344057]: 2025-10-07 14:21:05.294542872 +0000 UTC m=+0.097514166 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:21:05 compute-0 podman[344058]: 2025-10-07 14:21:05.294627714 +0000 UTC m=+0.095249197 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.311 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:05 compute-0 podman[343836]: 2025-10-07 14:21:05.321187296 +0000 UTC m=+1.354214275 container remove 26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.324 2 INFO nova.virt.libvirt.driver [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Deleting instance files /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0_del
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.325 2 INFO nova.virt.libvirt.driver [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Deletion of /var/lib/nova/instances/1d580bbb-a6fd-442c-8524-409ba5c344d0_del complete
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.330 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'ec2_ids' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:05 compute-0 systemd[1]: libpod-conmon-26f0b64e5647ee0b24dfe474d02b5ccb796111ba0ac4f3353aa8b1d959c94898.scope: Deactivated successfully.
Oct 07 14:21:05 compute-0 sudo[343711]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.376 2 DEBUG nova.objects.instance [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'keypairs' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:21:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:21:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:21:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 2595fe3b-bdb0-4bfc-82ac-279bf7ee7c4a does not exist
Oct 07 14:21:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 82f5ece1-3a6a-4a85-828d-583696aa857f does not exist
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.408 2 INFO nova.compute.manager [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.408 2 DEBUG oslo.service.loopingcall [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.409 2 DEBUG nova.compute.manager [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.409 2 DEBUG nova.network.neutron [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:05 compute-0 sudo[344127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:21:05 compute-0 sudo[344127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:05 compute-0 sudo[344127]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615998100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.511 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.519 2 DEBUG nova.compute.provider_tree [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.534 2 DEBUG nova.scheduler.client.report [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:05 compute-0 sudo[344152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:21:05 compute-0 sudo[344152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:21:05 compute-0 sudo[344152]: pam_unix(sudo:session): session closed for user root
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.558 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.559 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.606 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.607 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.628 2 INFO nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.645 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.710 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Creating config drive at /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.716 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputw9ag0v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.800 2 DEBUG nova.policy [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e40754d770434e4787d389c3bddd8803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '708e4a28049e4f2d94f64a6656d970d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.860 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputw9ag0v" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.888 2 DEBUG nova.storage.rbd_utils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] rbd image db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.891 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 357 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 6.0 MiB/s wr, 231 op/s
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.926 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.928 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.929 2 INFO nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Creating image(s)
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.955 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:05 compute-0 nova_compute[259550]: 2025-10-07 14:21:05.983 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.007 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.010 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2543718082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:21:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:21:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1615998100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.041 2 DEBUG oslo_concurrency.processutils [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue db51ce2e-5a2e-4329-a629-6f5fcee5c673_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.042 2 INFO nova.virt.libvirt.driver [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Deleting local config drive /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673/disk.config.rescue because it was imported into RBD.
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.081 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.081 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.082 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.082 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:06 compute-0 kernel: tap90cfcf08-ae: entered promiscuous mode
Oct 07 14:21:06 compute-0 NetworkManager[44949]: <info>  [1759846866.0953] manager: (tap90cfcf08-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Oct 07 14:21:06 compute-0 systemd-udevd[343887]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:06 compute-0 ovn_controller[151684]: 2025-10-07T14:21:06Z|00844|binding|INFO|Claiming lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c for this chassis.
Oct 07 14:21:06 compute-0 ovn_controller[151684]: 2025-10-07T14:21:06Z|00845|binding|INFO|90cfcf08-ae06-44fb-8402-0d70909f2e5c: Claiming fa:16:3e:0e:49:a3 10.100.0.7
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.102 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:49:a3 10.100.0.7'], port_security=['fa:16:3e:0e:49:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'db51ce2e-5a2e-4329-a629-6f5fcee5c673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=90cfcf08-ae06-44fb-8402-0d70909f2e5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.103 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfcf08-ae06-44fb-8402-0d70909f2e5c in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c bound to our chassis
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.104 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:21:06 compute-0 NetworkManager[44949]: <info>  [1759846866.1084] device (tap90cfcf08-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:06 compute-0 NetworkManager[44949]: <info>  [1759846866.1098] device (tap90cfcf08-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:06 compute-0 ovn_controller[151684]: 2025-10-07T14:21:06Z|00846|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c ovn-installed in OVS
Oct 07 14:21:06 compute-0 ovn_controller[151684]: 2025-10-07T14:21:06Z|00847|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c up in Southbound
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.124 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[da6bdc05-cac5-447b-bfe3-f98f0267f8bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.126 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.133 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:06 compute-0 systemd-machined[214580]: New machine qemu-102-instance-00000051.
Oct 07 14:21:06 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000051.
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.154 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[68b8bb91-27b3-4277-b5dd-3ca480e60a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.158 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[82fe3bc9-4fe4-4eca-b223-fe5391e102be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.186 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c32e6a9b-e8ab-4bc1-bdec-19dd32ed8ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.194 2 DEBUG nova.compute.manager [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.195 2 DEBUG oslo_concurrency.lockutils [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.196 2 DEBUG oslo_concurrency.lockutils [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.196 2 DEBUG oslo_concurrency.lockutils [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.196 2 DEBUG nova.compute.manager [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.197 2 WARNING nova.compute.manager [req-21747d02-1934-48ed-837d-e26942915d9c req-52604187-9b99-470c-a78c-5412252f81c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state active and task_state deleting.
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.203 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[04862aac-7b7b-4d03-bba2-d8801812cff4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344319, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 07 14:21:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 07 14:21:06 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.229 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71891030-30de-45d4-b299-8862ed86225f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742523, 'tstamp': 742523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344336, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742527, 'tstamp': 742527}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344336, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.231 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.234 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0d357a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.234 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.235 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f0d357a-c0, col_values=(('external_ids', {'iface-id': '5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:06.235 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.489 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.552 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] resizing rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.604 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Successfully created port: c628defc-21d8-49f9-bb89-cd77be34b0d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.644 2 DEBUG nova.objects.instance [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d136ab7-c186-4909-bb2f-371bd5f68c90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.663 2 DEBUG nova.network.neutron [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.665 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.665 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Ensure instance console log exists: /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.666 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.666 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.667 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.684 2 INFO nova.compute.manager [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Took 1.28 seconds to deallocate network for instance.
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.740 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.741 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:06 compute-0 nova_compute[259550]: 2025-10-07 14:21:06.882 2 DEBUG oslo_concurrency.processutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:07 compute-0 ceph-mon[74295]: pgmap v1710: 305 pgs: 305 active+clean; 357 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 6.0 MiB/s wr, 231 op/s
Oct 07 14:21:07 compute-0 ceph-mon[74295]: osdmap e229: 3 total, 3 up, 3 in
Oct 07 14:21:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606665885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.357 2 DEBUG oslo_concurrency.processutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.364 2 DEBUG nova.compute.provider_tree [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.400 2 DEBUG nova.scheduler.client.report [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.426 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.448 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for db51ce2e-5a2e-4329-a629-6f5fcee5c673 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.449 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846867.4480581, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.449 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Resumed (Lifecycle Event)
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.453 2 DEBUG nova.compute.manager [None req-0571215a-7629-42a5-8693-5b5c8ba265e4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.456 2 INFO nova.scheduler.client.report [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Deleted allocations for instance 1d580bbb-a6fd-442c-8524-409ba5c344d0
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.472 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.475 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.499 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.500 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846867.448319, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.500 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Started (Lifecycle Event)
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.536 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.539 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.542 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Successfully updated port: c628defc-21d8-49f9-bb89-cd77be34b0d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.572 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.572 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquired lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.572 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.576 2 DEBUG oslo_concurrency.lockutils [None req-92127d9f-bce1-4c58-b8aa-6c3c95619c24 51afbbb19e4a4e2184c89302ccf45428 8379283f8a594c2ab94773d2b49cbb30 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.646 2 DEBUG nova.compute.manager [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-changed-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.647 2 DEBUG nova.compute.manager [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Refreshing instance network info cache due to event network-changed-c628defc-21d8-49f9-bb89-cd77be34b0d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.647 2 DEBUG oslo_concurrency.lockutils [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:07 compute-0 nova_compute[259550]: 2025-10-07 14:21:07.739 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 357 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 3.2 MiB/s wr, 132 op/s
Oct 07 14:21:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3606665885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.272 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.272 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.272 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1d580bbb-a6fd-442c-8524-409ba5c344d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] No waiting events found dispatching network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 WARNING nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received unexpected event network-vif-plugged-5fb8904b-227a-4dac-8c3a-82a23ba9832c for instance with vm_state deleted and task_state None.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Received event network-vif-deleted-5fb8904b-227a-4dac-8c3a-82a23ba9832c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.273 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 WARNING nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state rescued and task_state None.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.274 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 WARNING nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state rescued and task_state None.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.275 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 WARNING nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state rescued and task_state None.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.276 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.277 2 DEBUG oslo_concurrency.lockutils [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.277 2 DEBUG nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.277 2 WARNING nova.compute.manager [req-ca75f9b7-f84b-44f1-ad14-cdcf6d79060e req-375811d3-ec52-4bf3-bb91-cc373dfe47fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state rescued and task_state None.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.697 2 DEBUG nova.network.neutron [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Updating instance_info_cache with network_info: [{"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.724 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Releasing lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.724 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Instance network_info: |[{"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.725 2 DEBUG oslo_concurrency.lockutils [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.725 2 DEBUG nova.network.neutron [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Refreshing network info cache for port c628defc-21d8-49f9-bb89-cd77be34b0d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.728 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Start _get_guest_xml network_info=[{"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.731 2 WARNING nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.738 2 DEBUG nova.virt.libvirt.host [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.739 2 DEBUG nova.virt.libvirt.host [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.746 2 DEBUG nova.virt.libvirt.host [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.747 2 DEBUG nova.virt.libvirt.host [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.747 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.747 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.748 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.748 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.748 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.749 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.749 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.749 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.749 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.750 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.750 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.750 2 DEBUG nova.virt.hardware [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:08 compute-0 nova_compute[259550]: 2025-10-07 14:21:08.753 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:09 compute-0 ceph-mon[74295]: pgmap v1712: 305 pgs: 305 active+clean; 357 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 3.2 MiB/s wr, 132 op/s
Oct 07 14:21:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729322987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.177 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.203 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.208 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828387877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.650 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.652 2 DEBUG nova.virt.libvirt.vif [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-949551492',display_name='tempest-ServerMetadataTestJSON-server-949551492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-949551492',id=82,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='708e4a28049e4f2d94f64a6656d970d9',ramdisk_id='',reservation_id='r-zbangfdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-982286352',owner_user_name='tempest-ServerMetadataTestJSON-982286352-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:05Z,user_data=None,user_id='e40754d770434e4787d389c3bddd8803',uuid=0d136ab7-c186-4909-bb2f-371bd5f68c90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.652 2 DEBUG nova.network.os_vif_util [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converting VIF {"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.653 2 DEBUG nova.network.os_vif_util [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.654 2 DEBUG nova.objects.instance [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d136ab7-c186-4909-bb2f-371bd5f68c90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.673 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <uuid>0d136ab7-c186-4909-bb2f-371bd5f68c90</uuid>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <name>instance-00000052</name>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerMetadataTestJSON-server-949551492</nova:name>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:08</nova:creationTime>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:user uuid="e40754d770434e4787d389c3bddd8803">tempest-ServerMetadataTestJSON-982286352-project-member</nova:user>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:project uuid="708e4a28049e4f2d94f64a6656d970d9">tempest-ServerMetadataTestJSON-982286352</nova:project>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <nova:port uuid="c628defc-21d8-49f9-bb89-cd77be34b0d1">
Oct 07 14:21:09 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="serial">0d136ab7-c186-4909-bb2f-371bd5f68c90</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="uuid">0d136ab7-c186-4909-bb2f-371bd5f68c90</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0d136ab7-c186-4909-bb2f-371bd5f68c90_disk">
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config">
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:09 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b4:ec:de"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <target dev="tapc628defc-21"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/console.log" append="off"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:09 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:09 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:09 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:09 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:09 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.678 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Preparing to wait for external event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.679 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.679 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.679 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.680 2 DEBUG nova.virt.libvirt.vif [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-949551492',display_name='tempest-ServerMetadataTestJSON-server-949551492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-949551492',id=82,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='708e4a28049e4f2d94f64a6656d970d9',ramdisk_id='',reservation_id='r-zbangfdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-982286352',owner_user_name='tempest-ServerMetadataTestJSON-982286352-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:05Z,user_data=None,user_id='e40754d770434e4787d389c3bddd8803',uuid=0d136ab7-c186-4909-bb2f-371bd5f68c90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.681 2 DEBUG nova.network.os_vif_util [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converting VIF {"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.681 2 DEBUG nova.network.os_vif_util [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.682 2 DEBUG os_vif [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc628defc-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc628defc-21, col_values=(('external_ids', {'iface-id': 'c628defc-21d8-49f9-bb89-cd77be34b0d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:ec:de', 'vm-uuid': '0d136ab7-c186-4909-bb2f-371bd5f68c90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:09 compute-0 NetworkManager[44949]: <info>  [1759846869.6899] manager: (tapc628defc-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.695 2 INFO os_vif [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21')
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.715 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.716 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.745 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.786 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.786 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.787 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] No VIF found with MAC fa:16:3e:b4:ec:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.788 2 INFO nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Using config drive
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.810 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.834 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.834 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.845 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:09 compute-0 nova_compute[259550]: 2025-10-07 14:21:09.846 2 INFO nova.compute.claims [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 346 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 250 op/s
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.030 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3729322987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/828387877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.064 2 INFO nova.compute.manager [None req-25f21d14-d06e-49ec-8e50-b591529a7bb2 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Pausing
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.065 2 DEBUG nova.objects.instance [None req-25f21d14-d06e-49ec-8e50-b591529a7bb2 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'flavor' on Instance uuid 520ba82e-9633-4722-aa98-526012a7e0fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.094 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846870.0941033, 520ba82e-9633-4722-aa98-526012a7e0fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.095 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Paused (Lifecycle Event)
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.098 2 DEBUG nova.compute.manager [None req-25f21d14-d06e-49ec-8e50-b591529a7bb2 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.126 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.130 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.162 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.186 2 INFO nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Creating config drive at /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.190 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkft0b8iy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.330 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkft0b8iy" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.357 2 DEBUG nova.storage.rbd_utils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] rbd image 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.360 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.404 2 DEBUG nova.network.neutron [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Updated VIF entry in instance network info cache for port c628defc-21d8-49f9-bb89-cd77be34b0d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.405 2 DEBUG nova.network.neutron [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Updating instance_info_cache with network_info: [{"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.419 2 DEBUG oslo_concurrency.lockutils [req-1bf51246-311b-4f63-a436-1339ae9efd0f req-f5d0bb53-280f-432f-ba78-5b9c43b61d4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0d136ab7-c186-4909-bb2f-371bd5f68c90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285489297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.482 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.488 2 DEBUG nova.compute.provider_tree [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.510 2 DEBUG nova.scheduler.client.report [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.514 2 DEBUG oslo_concurrency.processutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config 0d136ab7-c186-4909-bb2f-371bd5f68c90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.515 2 INFO nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Deleting local config drive /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90/disk.config because it was imported into RBD.
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.537 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.538 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:10 compute-0 kernel: tapc628defc-21: entered promiscuous mode
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.580 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.580 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.5842] manager: (tapc628defc-21): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:10 compute-0 ovn_controller[151684]: 2025-10-07T14:21:10Z|00848|binding|INFO|Claiming lport c628defc-21d8-49f9-bb89-cd77be34b0d1 for this chassis.
Oct 07 14:21:10 compute-0 ovn_controller[151684]: 2025-10-07T14:21:10Z|00849|binding|INFO|c628defc-21d8-49f9-bb89-cd77be34b0d1: Claiming fa:16:3e:b4:ec:de 10.100.0.14
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.593 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:ec:de 10.100.0.14'], port_security=['fa:16:3e:b4:ec:de 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0d136ab7-c186-4909-bb2f-371bd5f68c90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '708e4a28049e4f2d94f64a6656d970d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bcc59a24-54bc-4c70-a7bb-4b6692bab887', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75641a37-b0ec-4ea0-abf8-1b5576ae74cc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c628defc-21d8-49f9-bb89-cd77be34b0d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.595 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c628defc-21d8-49f9-bb89-cd77be34b0d1 in datapath f3fe872a-f1f5-4d4b-ab03-c8171ac96319 bound to our chassis
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.596 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3fe872a-f1f5-4d4b-ab03-c8171ac96319
Oct 07 14:21:10 compute-0 ovn_controller[151684]: 2025-10-07T14:21:10Z|00850|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 ovn-installed in OVS
Oct 07 14:21:10 compute-0 ovn_controller[151684]: 2025-10-07T14:21:10Z|00851|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 up in Southbound
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.613 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28c785ed-99c8-410f-859c-1f1da1132eee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.614 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3fe872a-f1 in ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:21:10 compute-0 systemd-udevd[344651]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.617 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3fe872a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.617 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8f6ffd-830b-431f-b9a1-c123332c14ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c2a331-5226-44ce-8ba8-7426a62abdf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.630 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[59add98b-c295-41f7-a48d-468750275f41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.632 2 INFO nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.6387] device (tapc628defc-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:10 compute-0 systemd-machined[214580]: New machine qemu-103-instance-00000052.
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.6394] device (tapc628defc-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.648 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ba84f5-8317-45d1-9eb3-dd91a056c208]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000052.
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.685 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[056e6e20-4f13-4020-b7bc-a2ed482d6379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.6911] manager: (tapf3fe872a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae7d2ae-8267-44cb-b6a4-645e8a2c5046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.725 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3d5aa5-ee42-4a21-99ab-2e0b561eb4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.727 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[40798d0f-3e95-4f0f-84e2-f87940488a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.7588] device (tapf3fe872a-f0): carrier: link connected
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.765 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.769 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0ed159-f412-4782-8bdc-b083b41609cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.772 2 DEBUG nova.policy [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b99e8c19767d42aa96c7d646cacc3772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a266c4b5f8164bceb621e0e23116c515', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.794 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46c36476-57b4-44e0-9b40-a6943a037a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3fe872a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:46:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745432, 'reachable_time': 32964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344684, 'error': None, 'target': 'ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.811 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7db586-7bbe-4b10-8511-dabee6f11eae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:46cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745432, 'tstamp': 745432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344685, 'error': None, 'target': 'ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.830 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2b6f07-bf33-4761-bda1-724bbd777883]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3fe872a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:46:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745432, 'reachable_time': 32964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344686, 'error': None, 'target': 'ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5fc2b3-f53a-471d-9f42-576c23d84964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.939 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0603a1a-dcae-492d-b3c6-0c527ed38ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3fe872a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.941 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3fe872a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:10 compute-0 kernel: tapf3fe872a-f0: entered promiscuous mode
Oct 07 14:21:10 compute-0 NetworkManager[44949]: <info>  [1759846870.9431] manager: (tapf3fe872a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.947 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3fe872a-f0, col_values=(('external_ids', {'iface-id': '7cb3e32f-9fd4-4950-87b3-0763fc25eb31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:10 compute-0 ovn_controller[151684]: 2025-10-07T14:21:10Z|00852|binding|INFO|Releasing lport 7cb3e32f-9fd4-4950-87b3-0763fc25eb31 from this chassis (sb_readonly=0)
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.950 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3fe872a-f1f5-4d4b-ab03-c8171ac96319.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3fe872a-f1f5-4d4b-ab03-c8171ac96319.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.951 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c30dc5-966f-43a8-bcc2-7f7b3b6e9857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.952 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-f3fe872a-f1f5-4d4b-ab03-c8171ac96319
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/f3fe872a-f1f5-4d4b-ab03-c8171ac96319.pid.haproxy
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID f3fe872a-f1f5-4d4b-ab03-c8171ac96319
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:21:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:10.953 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'env', 'PROCESS_TAG=haproxy-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3fe872a-f1f5-4d4b-ab03-c8171ac96319.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:21:10 compute-0 nova_compute[259550]: 2025-10-07 14:21:10.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.043 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.044 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.045 2 INFO nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Creating image(s)
Oct 07 14:21:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:11.045 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:11 compute-0 ceph-mon[74295]: pgmap v1713: 305 pgs: 305 active+clean; 346 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 250 op/s
Oct 07 14:21:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/285489297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.071 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.094 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.122 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.126 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.182 2 DEBUG nova.compute.manager [req-f72f7aff-a3e8-421f-820f-ea004a185d46 req-0d402970-6880-4319-95dc-619c5b0fe8cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.182 2 DEBUG oslo_concurrency.lockutils [req-f72f7aff-a3e8-421f-820f-ea004a185d46 req-0d402970-6880-4319-95dc-619c5b0fe8cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.182 2 DEBUG oslo_concurrency.lockutils [req-f72f7aff-a3e8-421f-820f-ea004a185d46 req-0d402970-6880-4319-95dc-619c5b0fe8cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.183 2 DEBUG oslo_concurrency.lockutils [req-f72f7aff-a3e8-421f-820f-ea004a185d46 req-0d402970-6880-4319-95dc-619c5b0fe8cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.183 2 DEBUG nova.compute.manager [req-f72f7aff-a3e8-421f-820f-ea004a185d46 req-0d402970-6880-4319-95dc-619c5b0fe8cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Processing event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.207 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.208 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.209 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.209 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 07 14:21:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 07 14:21:11 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.241 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.249 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 152070e7-9c74-429d-b9d8-c09cbcba121e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:11 compute-0 podman[344833]: 2025-10-07 14:21:11.328292996 +0000 UTC m=+0.047799124 container create d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:21:11 compute-0 systemd[1]: Started libpod-conmon-d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102.scope.
Oct 07 14:21:11 compute-0 podman[344833]: 2025-10-07 14:21:11.302697089 +0000 UTC m=+0.022203217 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:21:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ed431caf3099cbf6dac34fde300da934d798f19b38368f9bf0d2d91b934477/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:11 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:21:11 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:21:11 compute-0 podman[344833]: 2025-10-07 14:21:11.4310602 +0000 UTC m=+0.150566348 container init d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:11 compute-0 podman[344833]: 2025-10-07 14:21:11.437357717 +0000 UTC m=+0.156863845 container start d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:21:11 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [NOTICE]   (344871) : New worker (344873) forked
Oct 07 14:21:11 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [NOTICE]   (344871) : Loading success.
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:11.535 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.574 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 152070e7-9c74-429d-b9d8-c09cbcba121e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.620 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846871.6133285, 0d136ab7-c186-4909-bb2f-371bd5f68c90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.620 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] VM Started (Lifecycle Event)
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.622 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.639 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] resizing rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.665 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.669 2 INFO nova.virt.libvirt.driver [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Instance spawned successfully.
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.669 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.714 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.718 2 DEBUG nova.objects.instance [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'migration_context' on Instance uuid 152070e7-9c74-429d-b9d8-c09cbcba121e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.720 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.746 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.746 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.747 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.747 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.748 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.748 2 DEBUG nova.virt.libvirt.driver [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.783 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.784 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Ensure instance console log exists: /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.784 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.785 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.785 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.786 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.786 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846871.613444, 0d136ab7-c186-4909-bb2f-371bd5f68c90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:11 compute-0 nova_compute[259550]: 2025-10-07 14:21:11.786 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] VM Paused (Lifecycle Event)
Oct 07 14:21:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 372 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 200 op/s
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.000 2 INFO nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Took 6.07 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.000 2 DEBUG nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.002 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.013 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846871.626041, 0d136ab7-c186-4909-bb2f-371bd5f68c90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.014 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] VM Resumed (Lifecycle Event)
Oct 07 14:21:12 compute-0 ovn_controller[151684]: 2025-10-07T14:21:12Z|00853|binding|INFO|Releasing lport 7cb3e32f-9fd4-4950-87b3-0763fc25eb31 from this chassis (sb_readonly=0)
Oct 07 14:21:12 compute-0 ovn_controller[151684]: 2025-10-07T14:21:12Z|00854|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:21:12 compute-0 ovn_controller[151684]: 2025-10-07T14:21:12Z|00855|binding|INFO|Releasing lport 5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc from this chassis (sb_readonly=0)
Oct 07 14:21:12 compute-0 ceph-mon[74295]: osdmap e230: 3 total, 3 up, 3 in
Oct 07 14:21:12 compute-0 ceph-mon[74295]: pgmap v1715: 305 pgs: 305 active+clean; 372 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 200 op/s
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.230 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.234 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.242 2 INFO nova.compute.manager [None req-674cc693-b434-413f-aa19-9a0e1ed877aa 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Unpausing
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.243 2 DEBUG nova.objects.instance [None req-674cc693-b434-413f-aa19-9a0e1ed877aa 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'flavor' on Instance uuid 520ba82e-9633-4722-aa98-526012a7e0fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.273 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.297 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846872.2912989, 520ba82e-9633-4722-aa98-526012a7e0fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.298 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Resumed (Lifecycle Event)
Oct 07 14:21:12 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.303 2 INFO nova.compute.manager [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Took 7.59 seconds to build instance.
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.307 2 DEBUG nova.virt.libvirt.guest [None req-674cc693-b434-413f-aa19-9a0e1ed877aa 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.308 2 DEBUG nova.compute.manager [None req-674cc693-b434-413f-aa19-9a0e1ed877aa 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.321 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.326 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.344 2 DEBUG oslo_concurrency.lockutils [None req-90b5bdad-3345-4ea9-9937-1ae415e140c9 e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.351 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 07 14:21:12 compute-0 nova_compute[259550]: 2025-10-07 14:21:12.508 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Successfully created port: f4f9776d-fbc4-4c99-a076-d6391636ed53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.285 2 DEBUG nova.compute.manager [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.285 2 DEBUG oslo_concurrency.lockutils [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.285 2 DEBUG oslo_concurrency.lockutils [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.285 2 DEBUG oslo_concurrency.lockutils [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.286 2 DEBUG nova.compute.manager [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] No waiting events found dispatching network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.286 2 WARNING nova.compute.manager [req-de8a0f89-fae2-4612-b706-eed48fd09051 req-01b9910d-b600-4d69-bb78-96fd7b4ebb2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received unexpected event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 for instance with vm_state active and task_state None.
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.545 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Successfully updated port: f4f9776d-fbc4-4c99-a076-d6391636ed53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.560 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.560 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.560 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:13 compute-0 nova_compute[259550]: 2025-10-07 14:21:13.762 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 408 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 284 op/s
Oct 07 14:21:14 compute-0 nova_compute[259550]: 2025-10-07 14:21:14.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:14 compute-0 ceph-mon[74295]: pgmap v1716: 305 pgs: 305 active+clean; 408 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 284 op/s
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.328 2 DEBUG nova.network.neutron [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Updating instance_info_cache with network_info: [{"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.352 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.352 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance network_info: |[{"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.356 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start _get_guest_xml network_info=[{"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.360 2 WARNING nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.366 2 DEBUG nova.virt.libvirt.host [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.366 2 DEBUG nova.virt.libvirt.host [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.369 2 DEBUG nova.virt.libvirt.host [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.369 2 DEBUG nova.virt.libvirt.host [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.369 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.369 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.370 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.370 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.370 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.370 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.371 2 DEBUG nova.virt.hardware [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.373 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.541 2 DEBUG nova.compute.manager [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-changed-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.542 2 DEBUG nova.compute.manager [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Refreshing instance network info cache due to event network-changed-f4f9776d-fbc4-4c99-a076-d6391636ed53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.542 2 DEBUG oslo_concurrency.lockutils [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.542 2 DEBUG oslo_concurrency.lockutils [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.542 2 DEBUG nova.network.neutron [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Refreshing network info cache for port f4f9776d-fbc4-4c99-a076-d6391636ed53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.711 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.712 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.712 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.712 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.713 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.714 2 INFO nova.compute.manager [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Terminating instance
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.715 2 DEBUG nova.compute.manager [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:15 compute-0 kernel: tap90cfcf08-ae (unregistering): left promiscuous mode
Oct 07 14:21:15 compute-0 NetworkManager[44949]: <info>  [1759846875.7686] device (tap90cfcf08-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:15 compute-0 ovn_controller[151684]: 2025-10-07T14:21:15Z|00856|binding|INFO|Releasing lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c from this chassis (sb_readonly=0)
Oct 07 14:21:15 compute-0 ovn_controller[151684]: 2025-10-07T14:21:15Z|00857|binding|INFO|Setting lport 90cfcf08-ae06-44fb-8402-0d70909f2e5c down in Southbound
Oct 07 14:21:15 compute-0 ovn_controller[151684]: 2025-10-07T14:21:15Z|00858|binding|INFO|Removing iface tap90cfcf08-ae ovn-installed in OVS
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.790 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:49:a3 10.100.0.7'], port_security=['fa:16:3e:0e:49:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'db51ce2e-5a2e-4329-a629-6f5fcee5c673', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '6', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=90cfcf08-ae06-44fb-8402-0d70909f2e5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.791 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfcf08-ae06-44fb-8402-0d70909f2e5c in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c unbound from our chassis
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.793 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.811 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4ad3ae-7445-44e0-9922-78168c40da89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744379447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:15 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 07 14:21:15 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000051.scope: Consumed 9.554s CPU time.
Oct 07 14:21:15 compute-0 systemd-machined[214580]: Machine qemu-102-instance-00000051 terminated.
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.841 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[22e705ad-51d9-4545-bc03-69c1639c66ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.844 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0747a9-bf59-4c4e-8696-6acc07183624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.850 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.874 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6af0e0c8-724d-4b9e-81f9-4de520f4f119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.880 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.887 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8122bb-43da-4f0d-a730-b53927766ef8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f0d357a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:24:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742510, 'reachable_time': 35013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345005, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.908 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5603a6-ad77-4b68-a698-4b4a49023835]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742523, 'tstamp': 742523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345006, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f0d357a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742527, 'tstamp': 742527}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345006, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0d357a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f0d357a-c0, col_values=(('external_ids', {'iface-id': '5d4e6d5f-90cf-4d0a-9a33-cd00a81ef0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:15.918 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 418 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.4 MiB/s wr, 282 op/s
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.948 2 INFO nova.virt.libvirt.driver [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Instance destroyed successfully.
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.951 2 DEBUG nova.objects.instance [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'resources' on Instance uuid db51ce2e-5a2e-4329-a629-6f5fcee5c673 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.972 2 DEBUG nova.virt.libvirt.vif [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-520844186',display_name='tempest-ServerRescueNegativeTestJSON-server-520844186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-520844186',id=81,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-h377dr90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:07Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=db51ce2e-5a2e-4329-a629-6f5fcee5c673,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.973 2 DEBUG nova.network.os_vif_util [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "address": "fa:16:3e:0e:49:a3", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfcf08-ae", "ovs_interfaceid": "90cfcf08-ae06-44fb-8402-0d70909f2e5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.975 2 DEBUG nova.network.os_vif_util [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.975 2 DEBUG os_vif [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90cfcf08-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:15 compute-0 nova_compute[259550]: 2025-10-07 14:21:15.987 2 INFO os_vif [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:49:a3,bridge_name='br-int',has_traffic_filtering=True,id=90cfcf08-ae06-44fb-8402-0d70909f2e5c,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfcf08-ae')
Oct 07 14:21:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3744379447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201429951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.396 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.397 2 DEBUG nova.virt.libvirt.vif [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1872291949',display_name='tempest-ServerActionsTestOtherB-server-1872291949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1872291949',id=83,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-3sm7005j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOt
herB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:10Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=152070e7-9c74-429d-b9d8-c09cbcba121e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.398 2 DEBUG nova.network.os_vif_util [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.398 2 DEBUG nova.network.os_vif_util [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.400 2 DEBUG nova.objects.instance [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_devices' on Instance uuid 152070e7-9c74-429d-b9d8-c09cbcba121e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.452 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <uuid>152070e7-9c74-429d-b9d8-c09cbcba121e</uuid>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <name>instance-00000053</name>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherB-server-1872291949</nova:name>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:15</nova:creationTime>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:user uuid="b99e8c19767d42aa96c7d646cacc3772">tempest-ServerActionsTestOtherB-1033607333-project-member</nova:user>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:project uuid="a266c4b5f8164bceb621e0e23116c515">tempest-ServerActionsTestOtherB-1033607333</nova:project>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <nova:port uuid="f4f9776d-fbc4-4c99-a076-d6391636ed53">
Oct 07 14:21:16 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="serial">152070e7-9c74-429d-b9d8-c09cbcba121e</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="uuid">152070e7-9c74-429d-b9d8-c09cbcba121e</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/152070e7-9c74-429d-b9d8-c09cbcba121e_disk">
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config">
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:16 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:5d:02:42"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <target dev="tapf4f9776d-fb"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/console.log" append="off"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:16 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:16 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:16 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:16 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:16 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.461 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Preparing to wait for external event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.462 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.463 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.463 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.465 2 DEBUG nova.virt.libvirt.vif [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1872291949',display_name='tempest-ServerActionsTestOtherB-server-1872291949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1872291949',id=83,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-3sm7005j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:10Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=152070e7-9c74-429d-b9d8-c09cbcba121e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.465 2 DEBUG nova.network.os_vif_util [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.466 2 DEBUG nova.network.os_vif_util [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.467 2 DEBUG os_vif [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4f9776d-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.475 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4f9776d-fb, col_values=(('external_ids', {'iface-id': 'f4f9776d-fbc4-4c99-a076-d6391636ed53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:02:42', 'vm-uuid': '152070e7-9c74-429d-b9d8-c09cbcba121e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:16 compute-0 NetworkManager[44949]: <info>  [1759846876.4800] manager: (tapf4f9776d-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.484 2 INFO os_vif [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb')
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.552 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.553 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.553 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No VIF found with MAC fa:16:3e:5d:02:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.554 2 INFO nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Using config drive
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.573 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.776 2 INFO nova.virt.libvirt.driver [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Deleting instance files /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673_del
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.780 2 INFO nova.virt.libvirt.driver [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Deletion of /var/lib/nova/instances/db51ce2e-5a2e-4329-a629-6f5fcee5c673_del complete
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.847 2 INFO nova.compute.manager [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.848 2 DEBUG oslo.service.loopingcall [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.848 2 DEBUG nova.compute.manager [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.848 2 DEBUG nova.network.neutron [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.906 2 INFO nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Creating config drive at /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config
Oct 07 14:21:16 compute-0 nova_compute[259550]: 2025-10-07 14:21:16.911 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaldfo_tx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:17 compute-0 ceph-mon[74295]: pgmap v1717: 305 pgs: 305 active+clean; 418 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.4 MiB/s wr, 282 op/s
Oct 07 14:21:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3201429951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.046 2 DEBUG nova.network.neutron [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Updated VIF entry in instance network info cache for port f4f9776d-fbc4-4c99-a076-d6391636ed53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.047 2 DEBUG nova.network.neutron [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Updating instance_info_cache with network_info: [{"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.060 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaldfo_tx" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.089 2 DEBUG nova.storage.rbd_utils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.094 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config 152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.157 2 DEBUG oslo_concurrency.lockutils [req-8b261590-a01d-4c03-835b-32945fbc32d5 req-dde3a07a-d925-40fd-a5e1-a8d670cf8cb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-152070e7-9c74-429d-b9d8-c09cbcba121e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.271 2 DEBUG oslo_concurrency.processutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config 152070e7-9c74-429d-b9d8-c09cbcba121e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.272 2 INFO nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deleting local config drive /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e/disk.config because it was imported into RBD.
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.302 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.302 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.318 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:17 compute-0 kernel: tapf4f9776d-fb: entered promiscuous mode
Oct 07 14:21:17 compute-0 systemd-udevd[344978]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:17 compute-0 NetworkManager[44949]: <info>  [1759846877.3247] manager: (tapf4f9776d-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00859|binding|INFO|Claiming lport f4f9776d-fbc4-4c99-a076-d6391636ed53 for this chassis.
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00860|binding|INFO|f4f9776d-fbc4-4c99-a076-d6391636ed53: Claiming fa:16:3e:5d:02:42 10.100.0.7
Oct 07 14:21:17 compute-0 NetworkManager[44949]: <info>  [1759846877.3342] device (tapf4f9776d-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:17 compute-0 NetworkManager[44949]: <info>  [1759846877.3360] device (tapf4f9776d-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.338 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:02:42 10.100.0.7'], port_security=['fa:16:3e:5d:02:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '152070e7-9c74-429d-b9d8-c09cbcba121e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38933899-ccd3-4eef-a7a6-a99092e32db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f4f9776d-fbc4-4c99-a076-d6391636ed53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.339 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f4f9776d-fbc4-4c99-a076-d6391636ed53 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 bound to our chassis
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.340 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.360 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c2af0b6d-8f42-436c-81ad-552a4e54eb95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 systemd-machined[214580]: New machine qemu-104-instance-00000053.
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00861|binding|INFO|Setting lport f4f9776d-fbc4-4c99-a076-d6391636ed53 ovn-installed in OVS
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00862|binding|INFO|Setting lport f4f9776d-fbc4-4c99-a076-d6391636ed53 up in Southbound
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000053.
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.393 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3c26dbf7-d895-4245-a0a2-44fdf410d1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.396 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5074eba5-19c6-4e5b-9441-e1fda03df4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.403 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.404 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.413 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.414 2 INFO nova.compute.claims [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.426 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[43862cae-fc0d-4a34-a491-7f7e27c94fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.442 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[48e57f05-d632-4139-ae7b-4d023baffa7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345143, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.460 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb4a404-546e-40a7-bfba-fe8afbfd7c51]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345145, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345145, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.461 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.464 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.464 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.465 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.465 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.601 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.693 2 DEBUG nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.694 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.694 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.694 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.695 2 DEBUG nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.695 2 DEBUG nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-unplugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.695 2 DEBUG nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.696 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.696 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.696 2 DEBUG oslo_concurrency.lockutils [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.697 2 DEBUG nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] No waiting events found dispatching network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.697 2 WARNING nova.compute.manager [req-9045d04d-cb84-41d7-95ea-15285df1396d req-f86f5b5f-2f78-4387-852a-c398517f3ce2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received unexpected event network-vif-plugged-90cfcf08-ae06-44fb-8402-0d70909f2e5c for instance with vm_state rescued and task_state deleting.
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.883 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.883 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.884 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.884 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.885 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.886 2 INFO nova.compute.manager [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Terminating instance
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.887 2 DEBUG nova.compute.manager [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:17 compute-0 kernel: tapc628defc-21 (unregistering): left promiscuous mode
Oct 07 14:21:17 compute-0 NetworkManager[44949]: <info>  [1759846877.9254] device (tapc628defc-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 418 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 274 op/s
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00863|binding|INFO|Releasing lport c628defc-21d8-49f9-bb89-cd77be34b0d1 from this chassis (sb_readonly=0)
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00864|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 down in Southbound
Oct 07 14:21:17 compute-0 ovn_controller[151684]: 2025-10-07T14:21:17Z|00865|binding|INFO|Removing iface tapc628defc-21 ovn-installed in OVS
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.942 2 DEBUG nova.network.neutron [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.942 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:ec:de 10.100.0.14'], port_security=['fa:16:3e:b4:ec:de 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0d136ab7-c186-4909-bb2f-371bd5f68c90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '708e4a28049e4f2d94f64a6656d970d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bcc59a24-54bc-4c70-a7bb-4b6692bab887', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75641a37-b0ec-4ea0-abf8-1b5576ae74cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c628defc-21d8-49f9-bb89-cd77be34b0d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.942 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c628defc-21d8-49f9-bb89-cd77be34b0d1 in datapath f3fe872a-f1f5-4d4b-ab03-c8171ac96319 unbound from our chassis
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.944 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3fe872a-f1f5-4d4b-ab03-c8171ac96319, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c74ad4-e807-4070-9799-05d410a1970e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:17.945 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319 namespace which is not needed anymore
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:17 compute-0 nova_compute[259550]: 2025-10-07 14:21:17.961 2 INFO nova.compute.manager [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Took 1.11 seconds to deallocate network for instance.
Oct 07 14:21:17 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 07 14:21:17 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000052.scope: Consumed 7.141s CPU time.
Oct 07 14:21:17 compute-0 systemd-machined[214580]: Machine qemu-103-instance-00000052 terminated.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.013 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523018793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.055 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.061 2 DEBUG nova.compute.provider_tree [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [NOTICE]   (344871) : haproxy version is 2.8.14-c23fe91
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [NOTICE]   (344871) : path to executable is /usr/sbin/haproxy
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [WARNING]  (344871) : Exiting Master process...
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [WARNING]  (344871) : Exiting Master process...
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [ALERT]    (344871) : Current worker (344873) exited with code 143 (Terminated)
Oct 07 14:21:18 compute-0 neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319[344866]: [WARNING]  (344871) : All workers exited. Exiting... (0)
Oct 07 14:21:18 compute-0 systemd[1]: libpod-d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102.scope: Deactivated successfully.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.069 2 DEBUG nova.compute.manager [req-4d7d3271-b62c-46e0-b5ae-8db950ac8dea req-283761e7-aba7-4a5c-a07f-39ec18ce1f32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Received event network-vif-deleted-90cfcf08-ae06-44fb-8402-0d70909f2e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:18 compute-0 podman[345186]: 2025-10-07 14:21:18.076549303 +0000 UTC m=+0.047164187 container died d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.082 2 DEBUG nova.scheduler.client.report [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:18 compute-0 kernel: tapc628defc-21: entered promiscuous mode
Oct 07 14:21:18 compute-0 NetworkManager[44949]: <info>  [1759846878.1073] manager: (tapc628defc-21): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.108 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.110 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:18 compute-0 kernel: tapc628defc-21 (unregistering): left promiscuous mode
Oct 07 14:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102-userdata-shm.mount: Deactivated successfully.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.113 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00866|binding|INFO|Claiming lport c628defc-21d8-49f9-bb89-cd77be34b0d1 for this chassis.
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00867|binding|INFO|c628defc-21d8-49f9-bb89-cd77be34b0d1: Claiming fa:16:3e:b4:ec:de 10.100.0.14
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.132 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:ec:de 10.100.0.14'], port_security=['fa:16:3e:b4:ec:de 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0d136ab7-c186-4909-bb2f-371bd5f68c90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '708e4a28049e4f2d94f64a6656d970d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bcc59a24-54bc-4c70-a7bb-4b6692bab887', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75641a37-b0ec-4ea0-abf8-1b5576ae74cc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c628defc-21d8-49f9-bb89-cd77be34b0d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-46ed431caf3099cbf6dac34fde300da934d798f19b38368f9bf0d2d91b934477-merged.mount: Deactivated successfully.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.144 2 INFO nova.virt.libvirt.driver [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Instance destroyed successfully.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.144 2 DEBUG nova.objects.instance [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lazy-loading 'resources' on Instance uuid 0d136ab7-c186-4909-bb2f-371bd5f68c90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:18 compute-0 podman[345186]: 2025-10-07 14:21:18.148993267 +0000 UTC m=+0.119608151 container cleanup d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.153 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00868|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 ovn-installed in OVS
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00869|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 up in Southbound
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00870|binding|INFO|Releasing lport c628defc-21d8-49f9-bb89-cd77be34b0d1 from this chassis (sb_readonly=1)
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00871|if_status|INFO|Dropped 2 log messages in last 20 seconds (most recently, 20 seconds ago) due to excessive rate
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00872|if_status|INFO|Not setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 down as sb is readonly
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00873|binding|INFO|Removing iface tapc628defc-21 ovn-installed in OVS
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00874|binding|INFO|Releasing lport c628defc-21d8-49f9-bb89-cd77be34b0d1 from this chassis (sb_readonly=0)
Oct 07 14:21:18 compute-0 ovn_controller[151684]: 2025-10-07T14:21:18Z|00875|binding|INFO|Setting lport c628defc-21d8-49f9-bb89-cd77be34b0d1 down in Southbound
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.164 2 DEBUG nova.virt.libvirt.vif [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-949551492',display_name='tempest-ServerMetadataTestJSON-server-949551492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-949551492',id=82,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='708e4a28049e4f2d94f64a6656d970d9',ramdisk_id='',reservation_id='r-zbangfdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-982286352',owner_user_name='tempest-ServerMetadataTestJSON-982286352-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:17Z,user_data=None,user_id='e40754d770434e4787d389c3bddd8803',uuid=0d136ab7-c186-4909-bb2f-371bd5f68c90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.165 2 DEBUG nova.network.os_vif_util [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converting VIF {"id": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "address": "fa:16:3e:b4:ec:de", "network": {"id": "f3fe872a-f1f5-4d4b-ab03-c8171ac96319", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1256199359-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "708e4a28049e4f2d94f64a6656d970d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc628defc-21", "ovs_interfaceid": "c628defc-21d8-49f9-bb89-cd77be34b0d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.165 2 DEBUG nova.network.os_vif_util [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.166 2 DEBUG os_vif [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:18 compute-0 systemd[1]: libpod-conmon-d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102.scope: Deactivated successfully.
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.165 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:ec:de 10.100.0.14'], port_security=['fa:16:3e:b4:ec:de 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0d136ab7-c186-4909-bb2f-371bd5f68c90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '708e4a28049e4f2d94f64a6656d970d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bcc59a24-54bc-4c70-a7bb-4b6692bab887', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75641a37-b0ec-4ea0-abf8-1b5576ae74cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c628defc-21d8-49f9-bb89-cd77be34b0d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc628defc-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.173 2 INFO nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.183 2 INFO os_vif [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:ec:de,bridge_name='br-int',has_traffic_filtering=True,id=c628defc-21d8-49f9-bb89-cd77be34b0d1,network=Network(f3fe872a-f1f5-4d4b-ab03-c8171ac96319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc628defc-21')
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.198 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:18 compute-0 podman[345219]: 2025-10-07 14:21:18.243948045 +0000 UTC m=+0.060225692 container remove d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.250 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8573f6ea-0f51-4621-944c-8555c6369f1c]: (4, ('Tue Oct  7 02:21:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319 (d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102)\nd482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102\nTue Oct  7 02:21:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319 (d482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102)\nd482ebdf1d916ffab7068f4a7a6d8be2164032f8f9a7469c190b08baeb1ca102\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1037644d-6481-4365-aba1-36071a257413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.254 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3fe872a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 kernel: tapf3fe872a-f0: left promiscuous mode
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3508fdfe-a968-4f5c-8fe3-7362fa0cdf74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.284 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.286 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.286 2 INFO nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Creating image(s)
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.303 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b127a70-aacb-453b-94ec-f19e9e35d677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.305 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9135c592-81bb-44e6-8d29-52a2cf112e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.313 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.321 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bdd8cb-5562-4070-84dc-60f9986f2959]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745424, 'reachable_time': 15904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345268, 'error': None, 'target': 'ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.324 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3fe872a-f1f5-4d4b-ab03-c8171ac96319 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.324 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f2412586-a849-4c06-8865-d587f19748d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 systemd[1]: run-netns-ovnmeta\x2df3fe872a\x2df1f5\x2d4d4b\x2dab03\x2dc8171ac96319.mount: Deactivated successfully.
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.325 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c628defc-21d8-49f9-bb89-cd77be34b0d1 in datapath f3fe872a-f1f5-4d4b-ab03-c8171ac96319 unbound from our chassis
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.327 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3fe872a-f1f5-4d4b-ab03-c8171ac96319, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.328 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bde41fe-980d-49e5-bc39-7bcf4f4ea2a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.329 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c628defc-21d8-49f9-bb89-cd77be34b0d1 in datapath f3fe872a-f1f5-4d4b-ab03-c8171ac96319 unbound from our chassis
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.330 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3fe872a-f1f5-4d4b-ab03-c8171ac96319, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:18 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:18.331 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[526025fa-a03a-4852-8813-f3957577d2ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.349 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.379 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.384 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.422 2 DEBUG oslo_concurrency.processutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.460 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.461 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.462 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.462 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.485 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.490 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.659 2 INFO nova.virt.libvirt.driver [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Deleting instance files /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90_del
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.662 2 INFO nova.virt.libvirt.driver [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Deletion of /var/lib/nova/instances/0d136ab7-c186-4909-bb2f-371bd5f68c90_del complete
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.718 2 INFO nova.compute.manager [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.719 2 DEBUG oslo.service.loopingcall [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.719 2 DEBUG nova.compute.manager [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.719 2 DEBUG nova.network.neutron [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279729273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.878 2 DEBUG oslo_concurrency.processutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.880 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.948 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] resizing rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:18 compute-0 nova_compute[259550]: 2025-10-07 14:21:18.983 2 DEBUG nova.compute.provider_tree [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:19 compute-0 ceph-mon[74295]: pgmap v1718: 305 pgs: 305 active+clean; 418 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 274 op/s
Oct 07 14:21:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2523018793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4279729273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.073 2 DEBUG nova.objects.instance [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.106 2 DEBUG nova.scheduler.client.report [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.123 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.124 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Ensure instance console log exists: /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.124 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.125 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.125 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.126 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.132 2 WARNING nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.138 2 DEBUG nova.virt.libvirt.host [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.138 2 DEBUG nova.virt.libvirt.host [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.141 2 DEBUG nova.virt.libvirt.host [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.141 2 DEBUG nova.virt.libvirt.host [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.141 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.141 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.142 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.143 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.143 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.143 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.143 2 DEBUG nova.virt.hardware [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.146 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.345 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.508 2 INFO nova.scheduler.client.report [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Deleted allocations for instance db51ce2e-5a2e-4329-a629-6f5fcee5c673
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.520 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846879.5196347, 152070e7-9c74-429d-b9d8-c09cbcba121e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.520 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] VM Started (Lifecycle Event)
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.570 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.570 2 DEBUG nova.network.neutron [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.574 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846879.5222745, 152070e7-9c74-429d-b9d8-c09cbcba121e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.574 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] VM Paused (Lifecycle Event)
Oct 07 14:21:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903386605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.611 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.612 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.631 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.635 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.667 2 INFO nova.compute.manager [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Took 0.95 seconds to deallocate network for instance.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.670 2 DEBUG oslo_concurrency.lockutils [None req-c4a64b40-6e24-4899-bf57-a8c96e5926a4 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "db51ce2e-5a2e-4329-a629-6f5fcee5c673" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.677 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.709 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.719 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846864.717739, 1d580bbb-a6fd-442c-8524-409ba5c344d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.720 2 INFO nova.compute.manager [-] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] VM Stopped (Lifecycle Event)
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.728 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.728 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.744 2 DEBUG nova.compute.manager [None req-25a2e517-2cf6-436a-80ac-16e266de05b1 - - - - - -] [instance: 1d580bbb-a6fd-442c-8524-409ba5c344d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.779 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-unplugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.779 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.779 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.780 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.780 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] No waiting events found dispatching network-vif-unplugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.780 2 WARNING nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received unexpected event network-vif-unplugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 for instance with vm_state deleted and task_state None.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.780 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.780 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.781 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.781 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.781 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] No waiting events found dispatching network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.781 2 WARNING nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received unexpected event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 for instance with vm_state deleted and task_state None.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.781 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.782 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.782 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.782 2 DEBUG oslo_concurrency.lockutils [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.782 2 DEBUG nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] No waiting events found dispatching network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.782 2 WARNING nova.compute.manager [req-b0b53b4b-c683-4cc3-9bac-06fa3274af02 req-60463ace-07a1-4400-abff-0cb37dffb32e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received unexpected event network-vif-plugged-c628defc-21d8-49f9-bb89-cd77be34b0d1 for instance with vm_state deleted and task_state None.
Oct 07 14:21:19 compute-0 nova_compute[259550]: 2025-10-07 14:21:19.835 2 DEBUG oslo_concurrency.processutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 318 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 237 op/s
Oct 07 14:21:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2903386605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:20 compute-0 podman[345565]: 2025-10-07 14:21:20.062823024 +0000 UTC m=+0.051744028 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:21:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068962034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:20 compute-0 podman[345566]: 2025-10-07 14:21:20.10205368 +0000 UTC m=+0.088746815 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.101 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.103 2 DEBUG nova.objects.instance [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.120 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <uuid>6c2a17d6-c4c3-4b0b-868b-651a3d5df64f</uuid>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <name>instance-00000054</name>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersAaction247Test-server-1097005560</nova:name>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:19</nova:creationTime>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:user uuid="8a54f1ad97434a5da961252a38ffaf20">tempest-ServersAaction247Test-117810211-project-member</nova:user>
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <nova:project uuid="40f8a7de74204535b5eb4c0f48aab527">tempest-ServersAaction247Test-117810211</nova:project>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="serial">6c2a17d6-c4c3-4b0b-868b-651a3d5df64f</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="uuid">6c2a17d6-c4c3-4b0b-868b-651a3d5df64f</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk">
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config">
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/console.log" append="off"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:20 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:20 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:20 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:20 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:20 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.148 2 DEBUG nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.148 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.149 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.149 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.149 2 DEBUG nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Processing event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.149 2 DEBUG nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.150 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.150 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.150 2 DEBUG oslo_concurrency.lockutils [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.150 2 DEBUG nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] No waiting events found dispatching network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.150 2 WARNING nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received unexpected event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 for instance with vm_state building and task_state spawning.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.151 2 DEBUG nova.compute.manager [req-28395757-e463-4f0e-b696-64ef1c4d5964 req-238d3046-df89-41d9-bd99-d86696bb016f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Received event network-vif-deleted-c628defc-21d8-49f9-bb89-cd77be34b0d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.152 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.161 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.170 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846880.1704903, 152070e7-9c74-429d-b9d8-c09cbcba121e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.171 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] VM Resumed (Lifecycle Event)
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.178 2 INFO nova.virt.libvirt.driver [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance spawned successfully.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.179 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.188 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.188 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.189 2 INFO nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Using config drive
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.210 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.216 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.234 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.236 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.237 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.237 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.238 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.238 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.238 2 DEBUG nova.virt.libvirt.driver [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.260 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489364226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.318 2 INFO nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 9.27 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.318 2 DEBUG nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.318 2 DEBUG oslo_concurrency.processutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.325 2 DEBUG nova.compute.provider_tree [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.340 2 INFO nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Creating config drive at /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.345 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpidixx2bz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.376 2 DEBUG nova.scheduler.client.report [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.388 2 INFO nova.compute.manager [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 10.57 seconds to build instance.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.415 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.418 2 DEBUG oslo_concurrency.lockutils [None req-2161c93a-ec7c-4b97-b1c9-cb17308c914c b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.436 2 INFO nova.scheduler.client.report [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Deleted allocations for instance 0d136ab7-c186-4909-bb2f-371bd5f68c90
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.482 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpidixx2bz" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.505 2 DEBUG nova.storage.rbd_utils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] rbd image 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.510 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.536 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.548 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.549 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.549 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.549 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.549 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.551 2 INFO nova.compute.manager [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Terminating instance
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.552 2 DEBUG nova.compute.manager [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.554 2 DEBUG oslo_concurrency.lockutils [None req-19b94b87-9c07-4151-92c7-642793b148de e40754d770434e4787d389c3bddd8803 708e4a28049e4f2d94f64a6656d970d9 - - default default] Lock "0d136ab7-c186-4909-bb2f-371bd5f68c90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:20 compute-0 kernel: tap741587b4-ed (unregistering): left promiscuous mode
Oct 07 14:21:20 compute-0 NetworkManager[44949]: <info>  [1759846880.6196] device (tap741587b4-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00876|binding|INFO|Releasing lport 741587b4-ed2b-4895-9352-480eb38c0c50 from this chassis (sb_readonly=0)
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00877|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 down in Southbound
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00878|binding|INFO|Removing iface tap741587b4-ed ovn-installed in OVS
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct 07 14:21:20 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Consumed 14.647s CPU time.
Oct 07 14:21:20 compute-0 systemd-machined[214580]: Machine qemu-98-instance-00000050 terminated.
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.680 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:04:6e 10.100.0.12'], port_security=['fa:16:3e:c0:04:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '520ba82e-9633-4722-aa98-526012a7e0fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=741587b4-ed2b-4895-9352-480eb38c0c50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.682 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 741587b4-ed2b-4895-9352-480eb38c0c50 in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c unbound from our chassis
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.684 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f50342c1-d827-49de-9ac2-06a534d9cdb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.685 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c namespace which is not needed anymore
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.709 2 DEBUG oslo_concurrency.processutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.710 2 INFO nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Deleting local config drive /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f/disk.config because it was imported into RBD.
Oct 07 14:21:20 compute-0 kernel: tap741587b4-ed: entered promiscuous mode
Oct 07 14:21:20 compute-0 systemd-udevd[345483]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:20 compute-0 NetworkManager[44949]: <info>  [1759846880.7822] manager: (tap741587b4-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00879|binding|INFO|Claiming lport 741587b4-ed2b-4895-9352-480eb38c0c50 for this chassis.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00880|binding|INFO|741587b4-ed2b-4895-9352-480eb38c0c50: Claiming fa:16:3e:c0:04:6e 10.100.0.12
Oct 07 14:21:20 compute-0 kernel: tap741587b4-ed (unregistering): left promiscuous mode
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.791 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:04:6e 10.100.0.12'], port_security=['fa:16:3e:c0:04:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '520ba82e-9633-4722-aa98-526012a7e0fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=741587b4-ed2b-4895-9352-480eb38c0c50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00881|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 ovn-installed in OVS
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00882|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 up in Southbound
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 systemd-machined[214580]: New machine qemu-105-instance-00000054.
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00883|binding|INFO|Releasing lport 741587b4-ed2b-4895-9352-480eb38c0c50 from this chassis (sb_readonly=0)
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00884|binding|INFO|Setting lport 741587b4-ed2b-4895-9352-480eb38c0c50 down in Southbound
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 ovn_controller[151684]: 2025-10-07T14:21:20Z|00885|binding|INFO|Removing iface tap741587b4-ed ovn-installed in OVS
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000054.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.824 2 INFO nova.virt.libvirt.driver [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Instance destroyed successfully.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.824 2 DEBUG nova.objects.instance [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lazy-loading 'resources' on Instance uuid 520ba82e-9633-4722-aa98-526012a7e0fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.827 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:04:6e 10.100.0.12'], port_security=['fa:16:3e:c0:04:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '520ba82e-9633-4722-aa98-526012a7e0fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4a4526130e0488b963baba67ee5d8db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952416dd-2ec0-44ba-9a64-1fedfbe98e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1d90b9-2025-4f75-89e1-be6f053043dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=741587b4-ed2b-4895-9352-480eb38c0c50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.836 2 DEBUG nova.virt.libvirt.vif [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-750095100',display_name='tempest-ServerRescueNegativeTestJSON-server-750095100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-750095100',id=80,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4a4526130e0488b963baba67ee5d8db',ramdisk_id='',reservation_id='r-m87humlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2134412460',owner_user_name='tempest-ServerRescueNegativeTestJSON-2134412460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:12Z,user_data=None,user_id='76c41b2458aa4c37a8bdf5b4970f70e7',uuid=520ba82e-9633-4722-aa98-526012a7e0fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.837 2 DEBUG nova.network.os_vif_util [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converting VIF {"id": "741587b4-ed2b-4895-9352-480eb38c0c50", "address": "fa:16:3e:c0:04:6e", "network": {"id": "6f0d357a-c6e3-4f85-ade7-a04fa945b92c", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1203758481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4a4526130e0488b963baba67ee5d8db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap741587b4-ed", "ovs_interfaceid": "741587b4-ed2b-4895-9352-480eb38c0c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.837 2 DEBUG nova.network.os_vif_util [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.838 2 DEBUG os_vif [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.839 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap741587b4-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:20 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [NOTICE]   (341950) : haproxy version is 2.8.14-c23fe91
Oct 07 14:21:20 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [NOTICE]   (341950) : path to executable is /usr/sbin/haproxy
Oct 07 14:21:20 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [WARNING]  (341950) : Exiting Master process...
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [ALERT]    (341950) : Current worker (341952) exited with code 143 (Terminated)
Oct 07 14:21:20 compute-0 neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c[341946]: [WARNING]  (341950) : All workers exited. Exiting... (0)
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:20 compute-0 systemd[1]: libpod-a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a.scope: Deactivated successfully.
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.849 2 INFO os_vif [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:04:6e,bridge_name='br-int',has_traffic_filtering=True,id=741587b4-ed2b-4895-9352-480eb38c0c50,network=Network(6f0d357a-c6e3-4f85-ade7-a04fa945b92c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap741587b4-ed')
Oct 07 14:21:20 compute-0 podman[345701]: 2025-10-07 14:21:20.854588141 +0000 UTC m=+0.072434715 container died a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a-userdata-shm.mount: Deactivated successfully.
Oct 07 14:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4546a3a171e1c741dd9ac510627f2daee2c6bc3e01f38d2cc6f5b0c614382fa-merged.mount: Deactivated successfully.
Oct 07 14:21:20 compute-0 podman[345701]: 2025-10-07 14:21:20.897201526 +0000 UTC m=+0.115048100 container cleanup a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:21:20 compute-0 systemd[1]: libpod-conmon-a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a.scope: Deactivated successfully.
Oct 07 14:21:20 compute-0 podman[345757]: 2025-10-07 14:21:20.970355818 +0000 UTC m=+0.049629642 container remove a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2628fef9-b356-4f5e-abd1-e02a71ed437a]: (4, ('Tue Oct  7 02:21:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c (a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a)\na96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a\nTue Oct  7 02:21:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c (a96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a)\na96d1328ace994924dc376c24545f38c9a4ee8becb5474edc6e1d9d527a4422a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.981 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3adb36ac-5dc8-4915-99f0-69fd7ef303b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0d357a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:20 compute-0 kernel: tap6f0d357a-c0: left promiscuous mode
Oct 07 14:21:20 compute-0 nova_compute[259550]: 2025-10-07 14:21:20.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:20.987 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[225087c4-803b-4c2d-a1d9-4aa32b31dee4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.005 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[392e2c4d-9206-4bbb-a7ae-d77c09bde9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52ecf041-e671-4821-9ddc-7a2d49ed2123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70fce096-6f30-42ed-a871-fc559d0fd5c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742502, 'reachable_time': 43467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345772, 'error': None, 'target': 'ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.029 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f0d357a-c6e3-4f85-ade7-a04fa945b92c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.029 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2945a2-0e3d-4493-a294-d117662f5304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.030 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 741587b4-ed2b-4895-9352-480eb38c0c50 in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c unbound from our chassis
Oct 07 14:21:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d6f0d357a\x2dc6e3\x2d4f85\x2dade7\x2da04fa945b92c.mount: Deactivated successfully.
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.031 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.032 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1c4a5a-88e3-4719-8d13-0b8da04d4bfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.033 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 741587b4-ed2b-4895-9352-480eb38c0c50 in datapath 6f0d357a-c6e3-4f85-ade7-a04fa945b92c unbound from our chassis
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.034 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f0d357a-c6e3-4f85-ade7-a04fa945b92c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:21.035 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[daf60060-900a-403b-889a-b4d02f97a5a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:21 compute-0 ceph-mon[74295]: pgmap v1719: 305 pgs: 305 active+clean; 318 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 237 op/s
Oct 07 14:21:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4068962034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3489364226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.278 2 INFO nova.virt.libvirt.driver [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Deleting instance files /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb_del
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.278 2 INFO nova.virt.libvirt.driver [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Deletion of /var/lib/nova/instances/520ba82e-9633-4722-aa98-526012a7e0fb_del complete
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.340 2 INFO nova.compute.manager [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.340 2 DEBUG oslo.service.loopingcall [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.341 2 DEBUG nova.compute.manager [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.341 2 DEBUG nova.network.neutron [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.661 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846881.6615705, 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.662 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] VM Resumed (Lifecycle Event)
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.664 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.664 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.667 2 INFO nova.virt.libvirt.driver [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance spawned successfully.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.667 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.698 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.702 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.703 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.703 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.703 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.704 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.704 2 DEBUG nova.virt.libvirt.driver [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.707 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.740 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.740 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846881.661787, 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.740 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] VM Started (Lifecycle Event)
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.770 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.775 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.780 2 INFO nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Took 3.49 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.780 2 DEBUG nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.792 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.832 2 INFO nova.compute.manager [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Took 4.47 seconds to build instance.
Oct 07 14:21:21 compute-0 nova_compute[259550]: 2025-10-07 14:21:21.846 2 DEBUG oslo_concurrency.lockutils [None req-786ed41e-c3fe-4522-9302-7dc100fc430a 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 263 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 238 op/s
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.451 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.451 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.452 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.452 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.452 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.452 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.452 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.453 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.453 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.453 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.453 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.454 2 WARNING nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received unexpected event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with vm_state active and task_state deleting.
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.454 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.454 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.454 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.454 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.455 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.455 2 WARNING nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received unexpected event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with vm_state active and task_state deleting.
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.455 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.455 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.455 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.456 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.456 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.456 2 WARNING nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received unexpected event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with vm_state active and task_state deleting.
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.456 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.457 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.457 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.457 2 DEBUG oslo_concurrency.lockutils [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.457 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.458 2 DEBUG nova.compute.manager [req-2510bf4e-b8af-4949-8da5-d8216d955cb2 req-32798fdc-999f-4932-a1a7-1215ae28563c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-unplugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.602 2 INFO nova.compute.manager [None req-67242c65-14fb-4b30-bd2c-2dfd43f06442 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Get console output
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:21:22
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.836 2 DEBUG nova.network.neutron [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.852 2 INFO nova.compute.manager [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Took 1.51 seconds to deallocate network for instance.
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.886 2 DEBUG nova.compute.manager [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.897 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.897 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.923 2 INFO nova.compute.manager [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] instance snapshotting
Oct 07 14:21:22 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.923 2 DEBUG nova.objects.instance [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lazy-loading 'flavor' on Instance uuid 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:21:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:22.999 2 DEBUG oslo_concurrency.processutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.037 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.037 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.038 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.038 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.038 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.039 2 INFO nova.compute.manager [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Terminating instance
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.040 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "refresh_cache-6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.040 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquired lock "refresh_cache-6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.041 2 DEBUG nova.network.neutron [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:23 compute-0 ceph-mon[74295]: pgmap v1720: 305 pgs: 305 active+clean; 263 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 238 op/s
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.193 2 DEBUG nova.network.neutron [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.207 2 INFO nova.virt.libvirt.driver [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Beginning live snapshot process
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.250 2 DEBUG nova.compute.manager [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 07 14:21:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100033805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.468 2 DEBUG oslo_concurrency.processutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.475 2 DEBUG nova.compute.provider_tree [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.489 2 DEBUG nova.scheduler.client.report [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.526 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.565 2 INFO nova.scheduler.client.report [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Deleted allocations for instance 520ba82e-9633-4722-aa98-526012a7e0fb
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.568 2 DEBUG nova.network.neutron [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.586 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Releasing lock "refresh_cache-6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.587 2 DEBUG nova.compute.manager [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:23 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 07 14:21:23 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Consumed 2.720s CPU time.
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.652 2 DEBUG oslo_concurrency.lockutils [None req-d8f9c781-4c5f-4688-bcb8-e82fb87cfcae 76c41b2458aa4c37a8bdf5b4970f70e7 b4a4526130e0488b963baba67ee5d8db - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:23 compute-0 systemd-machined[214580]: Machine qemu-105-instance-00000054 terminated.
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.751 2 DEBUG nova.compute.manager [None req-df8f886d-abfd-4906-a5ab-13a39417aed4 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.818 2 INFO nova.virt.libvirt.driver [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance destroyed successfully.
Oct 07 14:21:23 compute-0 nova_compute[259550]: 2025-10-07 14:21:23.819 2 DEBUG nova.objects.instance [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lazy-loading 'resources' on Instance uuid 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 242 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Oct 07 14:21:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1100033805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:24 compute-0 ceph-mon[74295]: pgmap v1721: 305 pgs: 305 active+clean; 242 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.228 2 INFO nova.virt.libvirt.driver [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Deleting instance files /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_del
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.229 2 INFO nova.virt.libvirt.driver [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Deletion of /var/lib/nova/instances/6c2a17d6-c4c3-4b0b-868b-651a3d5df64f_del complete
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.289 2 INFO nova.compute.manager [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.290 2 DEBUG oslo.service.loopingcall [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.290 2 DEBUG nova.compute.manager [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.291 2 DEBUG nova.network.neutron [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.433 2 DEBUG nova.network.neutron [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.452 2 DEBUG nova.network.neutron [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.466 2 INFO nova.compute.manager [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Took 0.18 seconds to deallocate network for instance.
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.507 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.508 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.585 2 DEBUG oslo_concurrency.processutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.673 2 DEBUG nova.compute.manager [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.675 2 DEBUG oslo_concurrency.lockutils [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.676 2 DEBUG oslo_concurrency.lockutils [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.676 2 DEBUG oslo_concurrency.lockutils [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "520ba82e-9633-4722-aa98-526012a7e0fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.677 2 DEBUG nova.compute.manager [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] No waiting events found dispatching network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.678 2 WARNING nova.compute.manager [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received unexpected event network-vif-plugged-741587b4-ed2b-4895-9352-480eb38c0c50 for instance with vm_state deleted and task_state None.
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.678 2 DEBUG nova.compute.manager [req-21b75e56-98dd-41d5-bdf9-55e4977f50f5 req-64e6668a-70f7-4d0b-952e-c2e58d946c1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Received event network-vif-deleted-741587b4-ed2b-4895-9352-480eb38c0c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:24 compute-0 ovn_controller[151684]: 2025-10-07T14:21:24Z|00886|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:21:24 compute-0 nova_compute[259550]: 2025-10-07 14:21:24.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879842289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3879842289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.075 2 DEBUG oslo_concurrency.processutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.085 2 DEBUG nova.compute.provider_tree [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.136 2 DEBUG nova.scheduler.client.report [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.156 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.178 2 INFO nova.scheduler.client.report [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Deleted allocations for instance 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.242 2 DEBUG oslo_concurrency.lockutils [None req-256969c7-0983-4232-89ff-0a15684a7a57 8a54f1ad97434a5da961252a38ffaf20 40f8a7de74204535b5eb4c0f48aab527 - - default default] Lock "6c2a17d6-c4c3-4b0b-868b-651a3d5df64f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:25 compute-0 nova_compute[259550]: 2025-10-07 14:21:25.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 197 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 345 op/s
Oct 07 14:21:26 compute-0 ceph-mon[74295]: pgmap v1722: 305 pgs: 305 active+clean; 197 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 345 op/s
Oct 07 14:21:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:26 compute-0 nova_compute[259550]: 2025-10-07 14:21:26.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:26 compute-0 nova_compute[259550]: 2025-10-07 14:21:26.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:26 compute-0 nova_compute[259550]: 2025-10-07 14:21:26.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:21:26 compute-0 nova_compute[259550]: 2025-10-07 14:21:26.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:21:27 compute-0 ovn_controller[151684]: 2025-10-07T14:21:27Z|00887|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.254 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.255 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.274 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.377 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.378 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.386 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.387 2 INFO nova.compute.claims [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.540 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 197 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 306 op/s
Oct 07 14:21:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897495191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.978 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:27 compute-0 nova_compute[259550]: 2025-10-07 14:21:27.986 2 DEBUG nova.compute.provider_tree [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.004 2 DEBUG nova.scheduler.client.report [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1897495191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.147 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.148 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.241 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.242 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.281 2 INFO nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.330 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.608 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.610 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.611 2 INFO nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Creating image(s)
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.675 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.700 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.728 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.732 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.777 2 DEBUG nova.policy [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b99e8c19767d42aa96c7d646cacc3772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a266c4b5f8164bceb621e0e23116c515', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.816 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.817 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.818 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.818 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.839 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.843 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:28 compute-0 nova_compute[259550]: 2025-10-07 14:21:28.954 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:29 compute-0 ceph-mon[74295]: pgmap v1723: 305 pgs: 305 active+clean; 197 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 306 op/s
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.165 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.227 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] resizing rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.319 2 DEBUG nova.objects.instance [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'migration_context' on Instance uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.332 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.332 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Ensure instance console log exists: /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.333 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.334 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.334 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:29 compute-0 nova_compute[259550]: 2025-10-07 14:21:29.824 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Successfully created port: 5465af99-7afb-409c-9889-3320da6f749f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 167 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 309 op/s
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.713 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Successfully updated port: 5465af99-7afb-409c-9889-3320da6f749f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.733 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.733 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.734 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.837 2 DEBUG nova.compute.manager [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Received event network-changed-5465af99-7afb-409c-9889-3320da6f749f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.838 2 DEBUG nova.compute.manager [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Refreshing instance network info cache due to event network-changed-5465af99-7afb-409c-9889-3320da6f749f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.838 2 DEBUG oslo_concurrency.lockutils [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.946 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846875.9451947, db51ce2e-5a2e-4329-a629-6f5fcee5c673 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.947 2 INFO nova.compute.manager [-] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] VM Stopped (Lifecycle Event)
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.956 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:30 compute-0 nova_compute[259550]: 2025-10-07 14:21:30.968 2 DEBUG nova.compute.manager [None req-ba2b4d7e-f87e-4a3d-9982-6b9eecf9d73b - - - - - -] [instance: db51ce2e-5a2e-4329-a629-6f5fcee5c673] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:31 compute-0 ceph-mon[74295]: pgmap v1724: 305 pgs: 305 active+clean; 167 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 309 op/s
Oct 07 14:21:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.695 2 DEBUG nova.network.neutron [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Updating instance_info_cache with network_info: [{"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.715 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.716 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance network_info: |[{"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.719 2 DEBUG oslo_concurrency.lockutils [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.719 2 DEBUG nova.network.neutron [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Refreshing network info cache for port 5465af99-7afb-409c-9889-3320da6f749f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.726 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Start _get_guest_xml network_info=[{"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.734 2 WARNING nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.745 2 DEBUG nova.virt.libvirt.host [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.746 2 DEBUG nova.virt.libvirt.host [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.751 2 DEBUG nova.virt.libvirt.host [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.752 2 DEBUG nova.virt.libvirt.host [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.752 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.752 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.753 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.754 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.754 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.754 2 DEBUG nova.virt.hardware [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:31 compute-0 nova_compute[259550]: 2025-10-07 14:21:31.756 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 187 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 259 op/s
Oct 07 14:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793417328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.203 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.233 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.237 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012778552736537085 of space, bias 1.0, pg target 0.38335658209611256 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:21:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:21:32 compute-0 ovn_controller[151684]: 2025-10-07T14:21:32Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:02:42 10.100.0.7
Oct 07 14:21:32 compute-0 ovn_controller[151684]: 2025-10-07T14:21:32Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:02:42 10.100.0.7
Oct 07 14:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86522812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.677 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.679 2 DEBUG nova.virt.libvirt.vif [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1261181308',display_name='tempest-ServerActionsTestOtherB-server-1261181308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1261181308',id=85,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-abm0lrmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOt
herB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:28Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=5fb1bd6b-903f-4d62-bc97-25ecaa45deac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717376261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.680 2 DEBUG nova.network.os_vif_util [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.680 2 DEBUG nova.network.os_vif_util [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.681 2 DEBUG nova.objects.instance [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717376261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.698 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <uuid>5fb1bd6b-903f-4d62-bc97-25ecaa45deac</uuid>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <name>instance-00000055</name>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherB-server-1261181308</nova:name>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:31</nova:creationTime>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:user uuid="b99e8c19767d42aa96c7d646cacc3772">tempest-ServerActionsTestOtherB-1033607333-project-member</nova:user>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:project uuid="a266c4b5f8164bceb621e0e23116c515">tempest-ServerActionsTestOtherB-1033607333</nova:project>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <nova:port uuid="5465af99-7afb-409c-9889-3320da6f749f">
Oct 07 14:21:32 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="serial">5fb1bd6b-903f-4d62-bc97-25ecaa45deac</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="uuid">5fb1bd6b-903f-4d62-bc97-25ecaa45deac</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk">
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config">
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:cd:df:4f"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <target dev="tap5465af99-7a"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/console.log" append="off"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:32 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:32 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:32 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:32 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:32 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.698 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Preparing to wait for external event network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.699 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.699 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.699 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.700 2 DEBUG nova.virt.libvirt.vif [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1261181308',display_name='tempest-ServerActionsTestOtherB-server-1261181308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1261181308',id=85,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-abm0lrmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:28Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=5fb1bd6b-903f-4d62-bc97-25ecaa45deac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.700 2 DEBUG nova.network.os_vif_util [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.700 2 DEBUG nova.network.os_vif_util [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.701 2 DEBUG os_vif [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5465af99-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5465af99-7a, col_values=(('external_ids', {'iface-id': '5465af99-7afb-409c-9889-3320da6f749f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:df:4f', 'vm-uuid': '5fb1bd6b-903f-4d62-bc97-25ecaa45deac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:32 compute-0 NetworkManager[44949]: <info>  [1759846892.7087] manager: (tap5465af99-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.714 2 INFO os_vif [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a')
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.773 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.774 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.774 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No VIF found with MAC fa:16:3e:cd:df:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.775 2 INFO nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Using config drive
Oct 07 14:21:32 compute-0 nova_compute[259550]: 2025-10-07 14:21:32.793 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.031 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:33 compute-0 ceph-mon[74295]: pgmap v1725: 305 pgs: 305 active+clean; 187 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 259 op/s
Oct 07 14:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1793417328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/86522812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/717376261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/717376261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.142 2 DEBUG nova.network.neutron [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Updated VIF entry in instance network info cache for port 5465af99-7afb-409c-9889-3320da6f749f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.143 2 DEBUG nova.network.neutron [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Updating instance_info_cache with network_info: [{"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.145 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846878.1391144, 0d136ab7-c186-4909-bb2f-371bd5f68c90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.145 2 INFO nova.compute.manager [-] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] VM Stopped (Lifecycle Event)
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.165 2 DEBUG oslo_concurrency.lockutils [req-e060c8c9-7fcf-4f78-9335-791867b33837 req-b9fd1a73-dfa3-44fc-a66d-574fdcdc6a4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.167 2 DEBUG nova.compute.manager [None req-a7f987a7-8f7b-4773-a1e5-1c611b5a7da8 - - - - - -] [instance: 0d136ab7-c186-4909-bb2f-371bd5f68c90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.573 2 INFO nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Creating config drive at /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.578 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlr3eobk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.732 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlr3eobk" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.763 2 DEBUG nova.storage.rbd_utils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.768 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 234 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.0 MiB/s wr, 269 op/s
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.969 2 DEBUG oslo_concurrency.processutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config 5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.970 2 INFO nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Deleting local config drive /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac/disk.config because it was imported into RBD.
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:33 compute-0 nova_compute[259550]: 2025-10-07 14:21:33.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 kernel: tap5465af99-7a: entered promiscuous mode
Oct 07 14:21:34 compute-0 NetworkManager[44949]: <info>  [1759846894.0363] manager: (tap5465af99-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Oct 07 14:21:34 compute-0 ovn_controller[151684]: 2025-10-07T14:21:34Z|00888|binding|INFO|Claiming lport 5465af99-7afb-409c-9889-3320da6f749f for this chassis.
Oct 07 14:21:34 compute-0 ovn_controller[151684]: 2025-10-07T14:21:34Z|00889|binding|INFO|5465af99-7afb-409c-9889-3320da6f749f: Claiming fa:16:3e:cd:df:4f 10.100.0.6
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.054 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:df:4f 10.100.0.6'], port_security=['fa:16:3e:cd:df:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5fb1bd6b-903f-4d62-bc97-25ecaa45deac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38933899-ccd3-4eef-a7a6-a99092e32db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5465af99-7afb-409c-9889-3320da6f749f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.056 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5465af99-7afb-409c-9889-3320da6f749f in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 bound to our chassis
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.057 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 ovn_controller[151684]: 2025-10-07T14:21:34Z|00890|binding|INFO|Setting lport 5465af99-7afb-409c-9889-3320da6f749f ovn-installed in OVS
Oct 07 14:21:34 compute-0 ovn_controller[151684]: 2025-10-07T14:21:34Z|00891|binding|INFO|Setting lport 5465af99-7afb-409c-9889-3320da6f749f up in Southbound
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 systemd-udevd[346207]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.078 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7faf8b-fba5-4738-9478-4672f293f577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 systemd-machined[214580]: New machine qemu-106-instance-00000055.
Oct 07 14:21:34 compute-0 NetworkManager[44949]: <info>  [1759846894.0888] device (tap5465af99-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:34 compute-0 NetworkManager[44949]: <info>  [1759846894.0899] device (tap5465af99-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:34 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000055.
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.108 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[463ba721-741c-4696-8005-c6c76394b1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.112 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8122fb-efc8-4e9d-91d3-d69b8d01b09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.143 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2beed1d2-d864-4785-bd95-2abafb51789a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.162 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3431e9f-ac6b-4bb3-97fc-93c472b316b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346219, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb408ebd-7856-4a5c-aab2-764446f41344]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346220, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346220, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.181 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.184 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.184 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.185 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:34.185 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.962 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846894.9614353, 5fb1bd6b-903f-4d62-bc97-25ecaa45deac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.962 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] VM Started (Lifecycle Event)
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.991 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.997 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846894.9616091, 5fb1bd6b-903f-4d62-bc97-25ecaa45deac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:34 compute-0 nova_compute[259550]: 2025-10-07 14:21:34.997 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] VM Paused (Lifecycle Event)
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.022 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.027 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:35 compute-0 ceph-mon[74295]: pgmap v1726: 305 pgs: 305 active+clean; 234 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.0 MiB/s wr, 269 op/s
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.059 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.819 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846880.8180964, 520ba82e-9633-4722-aa98-526012a7e0fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.820 2 INFO nova.compute.manager [-] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] VM Stopped (Lifecycle Event)
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.845 2 DEBUG nova.compute.manager [None req-ab1ea392-1c58-4139-9a77-c6b3bd29cbaa - - - - - -] [instance: 520ba82e-9633-4722-aa98-526012a7e0fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.925 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid d932a7ab-839c-48b9-804f-90cc8634e93b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.926 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 152070e7-9c74-429d-b9d8-c09cbcba121e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.926 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.926 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.927 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.927 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.927 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.928 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.957 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.960 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:35 compute-0 nova_compute[259550]: 2025-10-07 14:21:35.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:21:36 compute-0 ceph-mon[74295]: pgmap v1727: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Oct 07 14:21:36 compute-0 podman[346265]: 2025-10-07 14:21:36.109225908 +0000 UTC m=+0.087621696 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:21:36 compute-0 podman[346264]: 2025-10-07 14:21:36.112764211 +0000 UTC m=+0.102049227 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:21:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.842 2 DEBUG nova.compute.manager [req-4cc69dcb-5444-434e-8e65-21e0ea08db47 req-9eb8dbbc-acdb-46dd-85af-cd816aa82524 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Received event network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.842 2 DEBUG oslo_concurrency.lockutils [req-4cc69dcb-5444-434e-8e65-21e0ea08db47 req-9eb8dbbc-acdb-46dd-85af-cd816aa82524 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.843 2 DEBUG oslo_concurrency.lockutils [req-4cc69dcb-5444-434e-8e65-21e0ea08db47 req-9eb8dbbc-acdb-46dd-85af-cd816aa82524 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.843 2 DEBUG oslo_concurrency.lockutils [req-4cc69dcb-5444-434e-8e65-21e0ea08db47 req-9eb8dbbc-acdb-46dd-85af-cd816aa82524 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.844 2 DEBUG nova.compute.manager [req-4cc69dcb-5444-434e-8e65-21e0ea08db47 req-9eb8dbbc-acdb-46dd-85af-cd816aa82524 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Processing event network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.844 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.847 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846896.8477511, 5fb1bd6b-903f-4d62-bc97-25ecaa45deac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.848 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] VM Resumed (Lifecycle Event)
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.849 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.853 2 INFO nova.virt.libvirt.driver [-] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance spawned successfully.
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.853 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.870 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.876 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.882 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.883 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.883 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.884 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.884 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.885 2 DEBUG nova.virt.libvirt.driver [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.911 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.949 2 INFO nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Took 8.34 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.950 2 DEBUG nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:36 compute-0 nova_compute[259550]: 2025-10-07 14:21:36.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.007 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.063 2 INFO nova.compute.manager [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Took 9.72 seconds to build instance.
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.094 2 DEBUG oslo_concurrency.lockutils [None req-4023f7d2-7795-4796-afd7-9f0cd0b53af0 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.095 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.095 2 INFO nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.095 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223447491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.494 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2223447491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.746 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.746 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.751 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.751 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.755 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.755 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:21:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.964 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.965 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.87657928466797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.966 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:37 compute-0 nova_compute[259550]: 2025-10-07 14:21:37.966 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.125 2 INFO nova.compute.manager [None req-74d9d661-3d36-45da-843d-05248f9768a4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Pausing
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.126 2 DEBUG nova.objects.instance [None req-74d9d661-3d36-45da-843d-05248f9768a4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'flavor' on Instance uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.153 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846898.1537068, 5fb1bd6b-903f-4d62-bc97-25ecaa45deac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.154 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] VM Paused (Lifecycle Event)
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.156 2 DEBUG nova.compute.manager [None req-74d9d661-3d36-45da-843d-05248f9768a4 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.178 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.181 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.205 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.350 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d932a7ab-839c-48b9-804f-90cc8634e93b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 152070e7-9c74-429d-b9d8-c09cbcba121e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5fb1bd6b-903f-4d62-bc97-25ecaa45deac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.516 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:38 compute-0 ceph-mon[74295]: pgmap v1728: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.816 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846883.814649, 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.817 2 INFO nova.compute.manager [-] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] VM Stopped (Lifecycle Event)
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.835 2 DEBUG nova.compute.manager [None req-ff00e4b0-310c-4cd2-b841-21ed91b91309 - - - - - -] [instance: 6c2a17d6-c4c3-4b0b-868b-651a3d5df64f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.928 2 DEBUG nova.compute.manager [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Received event network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.929 2 DEBUG oslo_concurrency.lockutils [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.929 2 DEBUG oslo_concurrency.lockutils [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.929 2 DEBUG oslo_concurrency.lockutils [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.930 2 DEBUG nova.compute.manager [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] No waiting events found dispatching network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.930 2 WARNING nova.compute.manager [req-849e101e-273a-4d6d-b139-21fdc300c104 req-27645530-3da1-4bcf-a0b5-fb714dec81b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Received unexpected event network-vif-plugged-5465af99-7afb-409c-9889-3320da6f749f for instance with vm_state paused and task_state None.
Oct 07 14:21:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979225825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.976 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.981 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:38 compute-0 nova_compute[259550]: 2025-10-07 14:21:38.995 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:39 compute-0 nova_compute[259550]: 2025-10-07 14:21:39.018 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:21:39 compute-0 nova_compute[259550]: 2025-10-07 14:21:39.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3979225825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:39 compute-0 nova_compute[259550]: 2025-10-07 14:21:39.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.446 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.447 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.447 2 INFO nova.compute.manager [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Shelving
Oct 07 14:21:40 compute-0 kernel: tap5465af99-7a (unregistering): left promiscuous mode
Oct 07 14:21:40 compute-0 NetworkManager[44949]: <info>  [1759846900.5152] device (tap5465af99-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:40 compute-0 ovn_controller[151684]: 2025-10-07T14:21:40Z|00892|binding|INFO|Releasing lport 5465af99-7afb-409c-9889-3320da6f749f from this chassis (sb_readonly=0)
Oct 07 14:21:40 compute-0 ovn_controller[151684]: 2025-10-07T14:21:40Z|00893|binding|INFO|Setting lport 5465af99-7afb-409c-9889-3320da6f749f down in Southbound
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 ovn_controller[151684]: 2025-10-07T14:21:40Z|00894|binding|INFO|Removing iface tap5465af99-7a ovn-installed in OVS
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.535 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:df:4f 10.100.0.6'], port_security=['fa:16:3e:cd:df:4f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5fb1bd6b-903f-4d62-bc97-25ecaa45deac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38933899-ccd3-4eef-a7a6-a99092e32db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5465af99-7afb-409c-9889-3320da6f749f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.537 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5465af99-7afb-409c-9889-3320da6f749f in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.539 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 ceph-mon[74295]: pgmap v1729: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.559 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66fc2fd0-9d04-40e0-a454-cea5abd0ac18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 07 14:21:40 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Consumed 2.178s CPU time.
Oct 07 14:21:40 compute-0 systemd-machined[214580]: Machine qemu-106-instance-00000055 terminated.
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.593 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5d677461-1d4d-44a6-ad06-909bc3c167f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.596 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[37b27597-d322-4e0b-b326-9dd45638d6fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.629 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7b70dc-5424-4cac-8603-f150f581152f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af8717f8-b82e-4f37-919b-a714d497a461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346364, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf854d5-f644-4d24-80af-68501e891015]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346365, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346365, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.665 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.673 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:40.673 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.701 2 INFO nova.virt.libvirt.driver [-] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance destroyed successfully.
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.701 2 DEBUG nova.objects.instance [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.953 2 INFO nova.virt.libvirt.driver [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Beginning cold snapshot process
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:21:40 compute-0 nova_compute[259550]: 2025-10-07 14:21:40.994 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:41 compute-0 nova_compute[259550]: 2025-10-07 14:21:41.089 2 DEBUG nova.virt.libvirt.imagebackend [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:21:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:41 compute-0 nova_compute[259550]: 2025-10-07 14:21:41.340 2 DEBUG nova.storage.rbd_utils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(be8568c9dceb4b1cadd069a4045a61f1) on rbd image(5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:21:41 compute-0 nova_compute[259550]: 2025-10-07 14:21:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 07 14:21:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 07 14:21:41 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 07 14:21:41 compute-0 nova_compute[259550]: 2025-10-07 14:21:41.638 2 DEBUG nova.storage.rbd_utils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk@be8568c9dceb4b1cadd069a4045a61f1 to images/2820ec38-7fe7-413d-a083-eeb166b5f495 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:21:41 compute-0 nova_compute[259550]: 2025-10-07 14:21:41.751 2 DEBUG nova.storage.rbd_utils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/2820ec38-7fe7-413d-a083-eeb166b5f495 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:21:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 174 op/s
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.049 2 DEBUG nova.storage.rbd_utils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(be8568c9dceb4b1cadd069a4045a61f1) on rbd image(5fb1bd6b-903f-4d62-bc97-25ecaa45deac_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.485 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.486 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.518 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 07 14:21:42 compute-0 ceph-mon[74295]: osdmap e231: 3 total, 3 up, 3 in
Oct 07 14:21:42 compute-0 ceph-mon[74295]: pgmap v1731: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 174 op/s
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.579 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.580 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 07 14:21:42 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.586 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.587 2 INFO nova.compute.claims [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.643 2 DEBUG nova.storage.rbd_utils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(2820ec38-7fe7-413d-a083-eeb166b5f495) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:42 compute-0 nova_compute[259550]: 2025-10-07 14:21:42.760 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.052 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.052 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.070 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.123 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743158299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.251 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.258 2 DEBUG nova.compute.provider_tree [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.275 2 DEBUG nova.scheduler.client.report [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.294 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.296 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.303 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.303 2 INFO nova.compute.claims [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.354 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "270d5768-e098-4fc4-bd64-405cbf8bf3d8" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.355 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "270d5768-e098-4fc4-bd64-405cbf8bf3d8" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.366 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "270d5768-e098-4fc4-bd64-405cbf8bf3d8" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.367 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.453 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.454 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.471 2 INFO nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.489 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.530 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.582 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.584 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:43 compute-0 ceph-mon[74295]: osdmap e232: 3 total, 3 up, 3 in
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.585 2 INFO nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Creating image(s)
Oct 07 14:21:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1743158299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 07 14:21:43 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.631 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.660 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.689 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.692 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.730 2 DEBUG nova.policy [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60b5888940d84f1ba06f4db3c95083bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81f15d4c85484c8598dbfb6cc7690b09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.769 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.770 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.771 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.772 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.795 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:43 compute-0 nova_compute[259550]: 2025-10-07 14:21:43.799 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 73126140-a6e8-4630-a01d-3738d29c02b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 276 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 1.4 MiB/s wr, 165 op/s
Oct 07 14:21:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232739158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.054 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.061 2 DEBUG nova.compute.provider_tree [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.087 2 DEBUG nova.scheduler.client.report [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.112 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.113 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.125 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 73126140-a6e8-4630-a01d-3738d29c02b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.186 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.186 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.194 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] resizing rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.227 2 INFO nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.243 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.295 2 DEBUG nova.objects.instance [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 73126140-a6e8-4630-a01d-3738d29c02b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.312 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.313 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.314 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.314 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Ensure instance console log exists: /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.315 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.315 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.316 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.337 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.338 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.338 2 INFO nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Creating image(s)
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.359 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.375 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.392 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.395 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.424 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.432 2 DEBUG nova.policy [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f3a5238c8d6b406aa83ab9cfd1b31cf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0675ef8ab0b84423b35c16687980a886', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.464 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.465 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.466 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.466 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.487 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.490 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.540 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.541 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.546 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.547 2 INFO nova.compute.claims [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:21:44 compute-0 ceph-mon[74295]: osdmap e233: 3 total, 3 up, 3 in
Oct 07 14:21:44 compute-0 ceph-mon[74295]: pgmap v1734: 305 pgs: 305 active+clean; 276 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 1.4 MiB/s wr, 165 op/s
Oct 07 14:21:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1232739158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.729 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.830 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.897 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] resizing rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.982 2 DEBUG nova.objects.instance [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lazy-loading 'migration_context' on Instance uuid ad8af3d5-66d4-4db1-bd40-42d766f2fde7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.993 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.994 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Ensure instance console log exists: /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.994 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.995 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:44 compute-0 nova_compute[259550]: 2025-10-07 14:21:44.995 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.000 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.157 2 INFO nova.virt.libvirt.driver [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Snapshot image upload complete
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.158 2 DEBUG nova.compute.manager [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979691551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.192 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Successfully created port: e15b4d99-880b-4329-88b4-7851081f5ad7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.201 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.206 2 DEBUG nova.compute.provider_tree [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.210 2 INFO nova.compute.manager [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Shelve offloading
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.215 2 INFO nova.virt.libvirt.driver [-] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance destroyed successfully.
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.216 2 DEBUG nova.compute.manager [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.217 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.217 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.217 2 DEBUG nova.network.neutron [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.219 2 DEBUG nova.scheduler.client.report [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.246 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.247 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.298 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.298 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.317 2 INFO nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.335 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.414 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.415 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.416 2 INFO nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Creating image(s)
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.440 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.467 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.492 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.497 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.532 2 DEBUG nova.policy [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.567 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.567 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.568 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.568 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.589 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.593 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f0de516-cf33-49b6-b036-aee8c2f72943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/979691551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.866 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Successfully created port: f13bcd51-d70f-4e6d-9161-9020447812fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.904 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f0de516-cf33-49b6-b036-aee8c2f72943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 345 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.6 MiB/s wr, 213 op/s
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.966 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.998 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:21:45 compute-0 nova_compute[259550]: 2025-10-07 14:21:45.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.026 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.062 2 DEBUG nova.objects.instance [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.065 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Successfully created port: d1f34f39-0808-4a53-bf40-fff477dee819 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.080 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.081 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Ensure instance console log exists: /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.081 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.082 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.082 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.268 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.269 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.269 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.270 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:46 compute-0 ceph-mon[74295]: pgmap v1735: 305 pgs: 305 active+clean; 345 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.6 MiB/s wr, 213 op/s
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.837 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Successfully updated port: d1f34f39-0808-4a53-bf40-fff477dee819 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.857 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.858 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:46 compute-0 nova_compute[259550]: 2025-10-07 14:21:46.858 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.030 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.312 2 DEBUG nova.network.neutron [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Updating instance_info_cache with network_info: [{"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.318 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Successfully updated port: e15b4d99-880b-4329-88b4-7851081f5ad7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.342 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-5fb1bd6b-903f-4d62-bc97-25ecaa45deac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.345 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.345 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquired lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.345 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.426 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.446 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.447 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:21:47 compute-0 nova_compute[259550]: 2025-10-07 14:21:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 345 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.2 MiB/s wr, 150 op/s
Oct 07 14:21:48 compute-0 nova_compute[259550]: 2025-10-07 14:21:48.204 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:48 compute-0 nova_compute[259550]: 2025-10-07 14:21:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:21:48 compute-0 ceph-mon[74295]: pgmap v1736: 305 pgs: 305 active+clean; 345 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.2 MiB/s wr, 150 op/s
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.094 2 DEBUG nova.network.neutron [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.106 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Successfully updated port: f13bcd51-d70f-4e6d-9161-9020447812fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.258 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.258 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance network_info: |[{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.260 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start _get_guest_xml network_info=[{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.266 2 DEBUG nova.compute.manager [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-changed-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.266 2 DEBUG nova.compute.manager [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Refreshing instance network info cache due to event network-changed-e15b4d99-880b-4329-88b4-7851081f5ad7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.267 2 DEBUG oslo_concurrency.lockutils [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.268 2 DEBUG nova.compute.manager [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-changed-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.268 2 DEBUG nova.compute.manager [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Refreshing instance network info cache due to event network-changed-d1f34f39-0808-4a53-bf40-fff477dee819. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.269 2 DEBUG oslo_concurrency.lockutils [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.269 2 DEBUG oslo_concurrency.lockutils [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.269 2 DEBUG nova.network.neutron [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Refreshing network info cache for port d1f34f39-0808-4a53-bf40-fff477dee819 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.271 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.271 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquired lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.271 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.274 2 WARNING nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.277 2 DEBUG nova.virt.libvirt.host [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.278 2 DEBUG nova.virt.libvirt.host [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.280 2 DEBUG nova.virt.libvirt.host [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.281 2 DEBUG nova.virt.libvirt.host [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.281 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.281 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.282 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.282 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.282 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.283 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.283 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.283 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.283 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.283 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.284 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.284 2 DEBUG nova.virt.hardware [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.287 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.452 2 INFO nova.virt.libvirt.driver [-] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Instance destroyed successfully.
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.453 2 DEBUG nova.objects.instance [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid 5fb1bd6b-903f-4d62-bc97-25ecaa45deac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.488 2 DEBUG nova.virt.libvirt.vif [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1261181308',display_name='tempest-ServerActionsTestOtherB-server-1261181308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1261181308',id=85,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-abm0lrmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:21:45.158146',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='2820ec38-7fe7-413d-a083-eeb166b5f495'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:41Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=5fb1bd6b-903f-4d62-bc97-25ecaa45deac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.489 2 DEBUG nova.network.os_vif_util [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "5465af99-7afb-409c-9889-3320da6f749f", "address": "fa:16:3e:cd:df:4f", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5465af99-7a", "ovs_interfaceid": "5465af99-7afb-409c-9889-3320da6f749f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.490 2 DEBUG nova.network.os_vif_util [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.490 2 DEBUG os_vif [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5465af99-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.498 2 INFO os_vif [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:df:4f,bridge_name='br-int',has_traffic_filtering=True,id=5465af99-7afb-409c-9889-3320da6f749f,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5465af99-7a')
Oct 07 14:21:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1571761365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.775 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.796 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.801 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.907 2 INFO nova.virt.libvirt.driver [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Deleting instance files /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac_del
Oct 07 14:21:49 compute-0 nova_compute[259550]: 2025-10-07 14:21:49.908 2 INFO nova.virt.libvirt.driver [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Deletion of /var/lib/nova/instances/5fb1bd6b-903f-4d62-bc97-25ecaa45deac_del complete
Oct 07 14:21:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 423 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 10 MiB/s wr, 218 op/s
Oct 07 14:21:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1571761365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.060 2 INFO nova.scheduler.client.report [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance 5fb1bd6b-903f-4d62-bc97-25ecaa45deac
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.151 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.151 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.169 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:21:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265868039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.255 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.257 2 DEBUG nova.virt.libvirt.vif [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-500502864',display_name='tempest-₡-500502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--500502864',id=88,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-ann565o7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,upd
ated_at=2025-10-07T14:21:45Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2f0de516-cf33-49b6-b036-aee8c2f72943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.257 2 DEBUG nova.network.os_vif_util [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.258 2 DEBUG nova.network.os_vif_util [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.259 2 DEBUG nova.objects.instance [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.276 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <uuid>2f0de516-cf33-49b6-b036-aee8c2f72943</uuid>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <name>instance-00000058</name>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:name>tempest-₡-500502864</nova:name>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:49</nova:creationTime>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <nova:port uuid="d1f34f39-0808-4a53-bf40-fff477dee819">
Oct 07 14:21:50 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="serial">2f0de516-cf33-49b6-b036-aee8c2f72943</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="uuid">2f0de516-cf33-49b6-b036-aee8c2f72943</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2f0de516-cf33-49b6-b036-aee8c2f72943_disk">
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config">
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:50 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:f6:9a:2b"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <target dev="tapd1f34f39-08"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/console.log" append="off"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:50 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:50 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:50 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:50 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:50 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.278 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Preparing to wait for external event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.278 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.279 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.279 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.280 2 DEBUG nova.virt.libvirt.vif [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-500502864',display_name='tempest-₡-500502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--500502864',id=88,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-ann565o7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:45Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2f0de516-cf33-49b6-b036-aee8c2f72943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.280 2 DEBUG nova.network.os_vif_util [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.281 2 DEBUG nova.network.os_vif_util [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.281 2 DEBUG os_vif [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.286 2 DEBUG oslo_concurrency.processutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1f34f39-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1f34f39-08, col_values=(('external_ids', {'iface-id': 'd1f34f39-0808-4a53-bf40-fff477dee819', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:9a:2b', 'vm-uuid': '2f0de516-cf33-49b6-b036-aee8c2f72943'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:50 compute-0 NetworkManager[44949]: <info>  [1759846910.3223] manager: (tapd1f34f39-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.327 2 INFO os_vif [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08')
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.399 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.399 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.400 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:f6:9a:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.400 2 INFO nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Using config drive
Oct 07 14:21:50 compute-0 podman[347168]: 2025-10-07 14:21:50.419583441 +0000 UTC m=+0.050323730 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.423 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:50 compute-0 podman[347169]: 2025-10-07 14:21:50.452609584 +0000 UTC m=+0.083539738 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 07 14:21:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477185845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.730 2 DEBUG oslo_concurrency.processutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.737 2 DEBUG nova.compute.provider_tree [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.802 2 DEBUG nova.scheduler.client.report [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.873 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.975 2 DEBUG nova.network.neutron [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Updating instance_info_cache with network_info: [{"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:50 compute-0 nova_compute[259550]: 2025-10-07 14:21:50.981 2 DEBUG oslo_concurrency.lockutils [None req-754ff7d3-f30e-43d1-beb5-1cbfc894c56f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "5fb1bd6b-903f-4d62-bc97-25ecaa45deac" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:51 compute-0 ceph-mon[74295]: pgmap v1737: 305 pgs: 305 active+clean; 423 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 10 MiB/s wr, 218 op/s
Oct 07 14:21:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2265868039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/477185845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.060 2 INFO nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Creating config drive at /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.067 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f900wuf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.123 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Releasing lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.123 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Instance network_info: |[{"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.124 2 DEBUG oslo_concurrency.lockutils [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.125 2 DEBUG nova.network.neutron [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Refreshing network info cache for port e15b4d99-880b-4329-88b4-7851081f5ad7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.129 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Start _get_guest_xml network_info=[{"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.135 2 WARNING nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.142 2 DEBUG nova.virt.libvirt.host [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.143 2 DEBUG nova.virt.libvirt.host [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.146 2 DEBUG nova.virt.libvirt.host [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.146 2 DEBUG nova.virt.libvirt.host [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.147 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.147 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.147 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.148 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.148 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.148 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.148 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.148 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.149 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.149 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.149 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.149 2 DEBUG nova.virt.hardware [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.153 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.223 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f900wuf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 07 14:21:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 07 14:21:51 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.258 2 DEBUG nova.storage.rbd_utils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.263 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config 2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.428 2 DEBUG oslo_concurrency.processutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config 2f0de516-cf33-49b6-b036-aee8c2f72943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.429 2 INFO nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deleting local config drive /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943/disk.config because it was imported into RBD.
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.5056] manager: (tapd1f34f39-08): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Oct 07 14:21:51 compute-0 kernel: tapd1f34f39-08: entered promiscuous mode
Oct 07 14:21:51 compute-0 ovn_controller[151684]: 2025-10-07T14:21:51Z|00895|binding|INFO|Claiming lport d1f34f39-0808-4a53-bf40-fff477dee819 for this chassis.
Oct 07 14:21:51 compute-0 ovn_controller[151684]: 2025-10-07T14:21:51Z|00896|binding|INFO|d1f34f39-0808-4a53-bf40-fff477dee819: Claiming fa:16:3e:f6:9a:2b 10.100.0.4
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.518 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:9a:2b 10.100.0.4'], port_security=['fa:16:3e:f6:9a:2b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f0de516-cf33-49b6-b036-aee8c2f72943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d1f34f39-0808-4a53-bf40-fff477dee819) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.519 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d1f34f39-0808-4a53-bf40-fff477dee819 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.521 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:21:51 compute-0 ovn_controller[151684]: 2025-10-07T14:21:51Z|00897|binding|INFO|Setting lport d1f34f39-0808-4a53-bf40-fff477dee819 ovn-installed in OVS
Oct 07 14:21:51 compute-0 ovn_controller[151684]: 2025-10-07T14:21:51Z|00898|binding|INFO|Setting lport d1f34f39-0808-4a53-bf40-fff477dee819 up in Southbound
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.539 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fba59149-4f7d-4a10-9957-721a9a2b4677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.542 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55c52758-91 in ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.544 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55c52758-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0f719c-60fe-4a28-be4e-20d1a86412ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11d55399-ee3d-49b2-a6cf-442d43e164b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 systemd-udevd[347321]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:51 compute-0 systemd-machined[214580]: New machine qemu-107-instance-00000058.
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.5669] device (tapd1f34f39-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.5675] device (tapd1f34f39-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.570 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf2357b-0f30-44ef-9814-88c3c69c4bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000058.
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[147ed31d-4a95-459b-b5b9-99150a02a681]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.601 2 DEBUG nova.compute.manager [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-changed-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.601 2 DEBUG nova.compute.manager [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Refreshing instance network info cache due to event network-changed-f13bcd51-d70f-4e6d-9161-9020447812fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.602 2 DEBUG oslo_concurrency.lockutils [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:21:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438852194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.628 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e39fd0aa-dede-42f3-9d60-19727950c348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.634 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeb8313-d7ab-480f-b474-b24ecf3a5429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.6362] manager: (tap55c52758-90): new Veth device (/org/freedesktop/NetworkManager/Devices/373)
Oct 07 14:21:51 compute-0 systemd-udevd[347324]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.656 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.677 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccd8aef-1b2e-4937-be50-50e504e3345c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.681 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62e597f9-d254-448b-a6e4-c91942022cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.699 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.704 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.7107] device (tap55c52758-90): carrier: link connected
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.720 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2f55a4-6ad9-4fb5-9c4c-225bdfdc4ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.743 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bac56f4a-6d61-4403-8c02-bd8db5a151e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347376, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42f30965-f838-43a0-affe-9c3359820193]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:221a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749527, 'tstamp': 749527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347377, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebc581c-cae2-4ce3-876d-cc0ab5241444]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347378, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.823 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03c95486-ddf2-4e11-9d57-0ed30016cc28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc5002a-b9f7-4f61-92da-961661c6d7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.892 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.893 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.893 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:51 compute-0 NetworkManager[44949]: <info>  [1759846911.8958] manager: (tap55c52758-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Oct 07 14:21:51 compute-0 kernel: tap55c52758-90: entered promiscuous mode
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.898 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:51 compute-0 ovn_controller[151684]: 2025-10-07T14:21:51Z|00899|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.917 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55c52758-97c9-4a7e-b735-6c70d1ca75a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55c52758-97c9-4a7e-b735-6c70d1ca75a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.918 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8840c89f-2b65-4709-af93-e3c2aef45dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.919 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/55c52758-97c9-4a7e-b735-6c70d1ca75a7.pid.haproxy
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:21:51 compute-0 nova_compute[259550]: 2025-10-07 14:21:51.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:51.919 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'env', 'PROCESS_TAG=haproxy-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55c52758-97c9-4a7e-b735-6c70d1ca75a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:21:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 412 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.7 MiB/s wr, 221 op/s
Oct 07 14:21:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355159360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.212 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.214 2 DEBUG nova.virt.libvirt.vif [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-410070911',display_name='tempest-ServerGroupTestJSON-server-410070911',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-410070911',id=86,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81f15d4c85484c8598dbfb6cc7690b09',ramdisk_id='',reservation_id='r-di69khpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1239398362',owner_user_name='tempest-ServerGroupTestJSON-1239398362-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:43Z,user_data=None,user_id='60b5888940d84f1ba06f4db3c95083bd',uuid=73126140-a6e8-4630-a01d-3738d29c02b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.214 2 DEBUG nova.network.os_vif_util [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converting VIF {"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.215 2 DEBUG nova.network.os_vif_util [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.217 2 DEBUG nova.objects.instance [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73126140-a6e8-4630-a01d-3738d29c02b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.236 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <uuid>73126140-a6e8-4630-a01d-3738d29c02b8</uuid>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <name>instance-00000056</name>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerGroupTestJSON-server-410070911</nova:name>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:51</nova:creationTime>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:user uuid="60b5888940d84f1ba06f4db3c95083bd">tempest-ServerGroupTestJSON-1239398362-project-member</nova:user>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:project uuid="81f15d4c85484c8598dbfb6cc7690b09">tempest-ServerGroupTestJSON-1239398362</nova:project>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <nova:port uuid="e15b4d99-880b-4329-88b4-7851081f5ad7">
Oct 07 14:21:52 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="serial">73126140-a6e8-4630-a01d-3738d29c02b8</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="uuid">73126140-a6e8-4630-a01d-3738d29c02b8</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/73126140-a6e8-4630-a01d-3738d29c02b8_disk">
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/73126140-a6e8-4630-a01d-3738d29c02b8_disk.config">
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ec:ef:ce"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <target dev="tape15b4d99-88"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/console.log" append="off"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:52 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:52 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:52 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:52 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:52 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.237 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Preparing to wait for external event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.237 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.237 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.237 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.238 2 DEBUG nova.virt.libvirt.vif [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-410070911',display_name='tempest-ServerGroupTestJSON-server-410070911',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-410070911',id=86,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81f15d4c85484c8598dbfb6cc7690b09',ramdisk_id='',reservation_id='r-di69khpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1239398362',owner_user_name='tempest-ServerGroupTestJSON-12393983
62-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:43Z,user_data=None,user_id='60b5888940d84f1ba06f4db3c95083bd',uuid=73126140-a6e8-4630-a01d-3738d29c02b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.239 2 DEBUG nova.network.os_vif_util [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converting VIF {"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.240 2 DEBUG nova.network.os_vif_util [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.240 2 DEBUG os_vif [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:52 compute-0 ceph-mon[74295]: osdmap e234: 3 total, 3 up, 3 in
Oct 07 14:21:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/438852194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:52 compute-0 ceph-mon[74295]: pgmap v1739: 305 pgs: 305 active+clean; 412 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.7 MiB/s wr, 221 op/s
Oct 07 14:21:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1355159360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape15b4d99-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape15b4d99-88, col_values=(('external_ids', {'iface-id': 'e15b4d99-880b-4329-88b4-7851081f5ad7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:ef:ce', 'vm-uuid': '73126140-a6e8-4630-a01d-3738d29c02b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:52 compute-0 NetworkManager[44949]: <info>  [1759846912.2489] manager: (tape15b4d99-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.254 2 INFO os_vif [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88')
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.317 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.318 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.318 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] No VIF found with MAC fa:16:3e:ec:ef:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.318 2 INFO nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Using config drive
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.344 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:52 compute-0 podman[347473]: 2025-10-07 14:21:52.348838256 +0000 UTC m=+0.052077586 container create b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:21:52 compute-0 systemd[1]: Started libpod-conmon-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb.scope.
Oct 07 14:21:52 compute-0 podman[347473]: 2025-10-07 14:21:52.317846228 +0000 UTC m=+0.021085578 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:21:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a13a26d565d27b55a2c061c5f559046e2a2ef2ed17aeaf90edd70b5935596/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:52 compute-0 podman[347473]: 2025-10-07 14:21:52.475625326 +0000 UTC m=+0.178864676 container init b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 07 14:21:52 compute-0 podman[347473]: 2025-10-07 14:21:52.481613804 +0000 UTC m=+0.184853134 container start b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:21:52 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : New worker (347514) forked
Oct 07 14:21:52 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : Loading success.
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.569 2 DEBUG nova.network.neutron [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Updating instance_info_cache with network_info: [{"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.597 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Releasing lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.597 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Instance network_info: |[{"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.598 2 DEBUG oslo_concurrency.lockutils [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.598 2 DEBUG nova.network.neutron [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Refreshing network info cache for port f13bcd51-d70f-4e6d-9161-9020447812fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.602 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Start _get_guest_xml network_info=[{"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.607 2 WARNING nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.610 2 DEBUG nova.virt.libvirt.host [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.611 2 DEBUG nova.virt.libvirt.host [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.613 2 DEBUG nova.virt.libvirt.host [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.614 2 DEBUG nova.virt.libvirt.host [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.614 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.614 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.615 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.615 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.615 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.615 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.615 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.616 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.616 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.616 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.617 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.617 2 DEBUG nova.virt.hardware [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.619 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.809 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846912.8084953, 2f0de516-cf33-49b6-b036-aee8c2f72943 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.809 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] VM Started (Lifecycle Event)
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.933 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.940 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846912.8096724, 2f0de516-cf33-49b6-b036-aee8c2f72943 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.940 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] VM Paused (Lifecycle Event)
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.947 2 DEBUG nova.network.neutron [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Updated VIF entry in instance network info cache for port e15b4d99-880b-4329-88b4-7851081f5ad7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.947 2 DEBUG nova.network.neutron [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Updating instance_info_cache with network_info: [{"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.976 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.978 2 DEBUG oslo_concurrency.lockutils [req-98cc28a0-ea37-40f6-a007-71818c84c556 req-87122895-d1c3-440d-88fa-f7ffaee04d10 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-73126140-a6e8-4630-a01d-3738d29c02b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:52 compute-0 nova_compute[259550]: 2025-10-07 14:21:52.981 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.000 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170151708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.088 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.115 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.120 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.163 2 DEBUG nova.network.neutron [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updated VIF entry in instance network info cache for port d1f34f39-0808-4a53-bf40-fff477dee819. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.164 2 DEBUG nova.network.neutron [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.242 2 DEBUG oslo_concurrency.lockutils [req-8ea0127f-1bab-4edf-88e8-a5a4dbbdc526 req-bd163932-b1a9-429e-9804-7ec4b207ee82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4170151708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.269 2 INFO nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Creating config drive at /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.274 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6i8dnnh8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.414 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6i8dnnh8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.448 2 DEBUG nova.storage.rbd_utils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] rbd image 73126140-a6e8-4630-a01d-3738d29c02b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.453 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config 73126140-a6e8-4630-a01d-3738d29c02b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:21:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40085115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.612 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.615 2 DEBUG nova.virt.libvirt.vif [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-502915421',display_name='tempest-ServerMetadataNegativeTestJSON-server-502915421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-502915421',id=87,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0675ef8ab0b84423b35c16687980a886',ramdisk_id='',reservation_id='r-a57ir3jh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1962853244',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1962853244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:44Z,user_data=None,user_id='f3a5238c8d6b406aa83ab9cfd1b31cf1',uuid=ad8af3d5-66d4-4db1-bd40-42d766f2fde7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.616 2 DEBUG nova.network.os_vif_util [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converting VIF {"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.617 2 DEBUG nova.network.os_vif_util [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.619 2 DEBUG nova.objects.instance [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lazy-loading 'pci_devices' on Instance uuid ad8af3d5-66d4-4db1-bd40-42d766f2fde7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.634 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <uuid>ad8af3d5-66d4-4db1-bd40-42d766f2fde7</uuid>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <name>instance-00000057</name>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-502915421</nova:name>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:21:52</nova:creationTime>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:user uuid="f3a5238c8d6b406aa83ab9cfd1b31cf1">tempest-ServerMetadataNegativeTestJSON-1962853244-project-member</nova:user>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:project uuid="0675ef8ab0b84423b35c16687980a886">tempest-ServerMetadataNegativeTestJSON-1962853244</nova:project>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <nova:port uuid="f13bcd51-d70f-4e6d-9161-9020447812fc">
Oct 07 14:21:53 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <system>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="serial">ad8af3d5-66d4-4db1-bd40-42d766f2fde7</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="uuid">ad8af3d5-66d4-4db1-bd40-42d766f2fde7</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </system>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <os>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </os>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <features>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </features>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk">
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config">
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:21:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:cd:66:73"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <target dev="tapf13bcd51-d7"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/console.log" append="off"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <video>
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </video>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:21:53 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:21:53 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:21:53 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:21:53 compute-0 nova_compute[259550]: </domain>
Oct 07 14:21:53 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.636 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Preparing to wait for external event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.636 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.637 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.637 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.638 2 DEBUG nova.virt.libvirt.vif [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-502915421',display_name='tempest-ServerMetadataNegativeTestJSON-server-502915421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-502915421',id=87,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0675ef8ab0b84423b35c16687980a886',ramdisk_id='',reservation_id='r-a57ir3jh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1962853244',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1962853244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:21:44Z,user_data=None,user_id='f3a5238c8d6b406aa83ab9cfd1b31cf1',uuid=ad8af3d5-66d4-4db1-bd40-42d766f2fde7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.638 2 DEBUG nova.network.os_vif_util [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converting VIF {"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.638 2 DEBUG nova.network.os_vif_util [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.639 2 DEBUG os_vif [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.640 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.640 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf13bcd51-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf13bcd51-d7, col_values=(('external_ids', {'iface-id': 'f13bcd51-d70f-4e6d-9161-9020447812fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:66:73', 'vm-uuid': 'ad8af3d5-66d4-4db1-bd40-42d766f2fde7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.6462] manager: (tapf13bcd51-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.654 2 INFO os_vif [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7')
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.657 2 DEBUG oslo_concurrency.processutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config 73126140-a6e8-4630-a01d-3738d29c02b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.658 2 INFO nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Deleting local config drive /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8/disk.config because it was imported into RBD.
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.705 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.706 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.707 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] No VIF found with MAC fa:16:3e:cd:66:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.708 2 INFO nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Using config drive
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.7316] manager: (tape15b4d99-88): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct 07 14:21:53 compute-0 kernel: tape15b4d99-88: entered promiscuous mode
Oct 07 14:21:53 compute-0 systemd-udevd[347349]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:53 compute-0 ovn_controller[151684]: 2025-10-07T14:21:53Z|00900|binding|INFO|Claiming lport e15b4d99-880b-4329-88b4-7851081f5ad7 for this chassis.
Oct 07 14:21:53 compute-0 ovn_controller[151684]: 2025-10-07T14:21:53Z|00901|binding|INFO|e15b4d99-880b-4329-88b4-7851081f5ad7: Claiming fa:16:3e:ec:ef:ce 10.100.0.3
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.744 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:ef:ce 10.100.0.3'], port_security=['fa:16:3e:ec:ef:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73126140-a6e8-4630-a01d-3738d29c02b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81f15d4c85484c8598dbfb6cc7690b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8735795-c3aa-47d6-9757-7e07deb863b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74b3dbba-d952-49f6-881e-af607431f63c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e15b4d99-880b-4329-88b4-7851081f5ad7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.745 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.746 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e15b4d99-880b-4329-88b4-7851081f5ad7 in datapath c81f0e54-a629-4cc1-82e0-c91f2de9be43 bound to our chassis
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.748 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c81f0e54-a629-4cc1-82e0-c91f2de9be43
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.7542] device (tape15b4d99-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.7556] device (tape15b4d99-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.763 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b874d3b-567c-49ec-8da2-c0154ccf8e93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.764 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc81f0e54-a1 in ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.766 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc81f0e54-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.766 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f879737-4df5-4d83-a4f3-b30a478abefa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.767 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41c5d43d-1062-4977-ac8d-52f9d2dbd60b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 systemd-machined[214580]: New machine qemu-108-instance-00000056.
Oct 07 14:21:53 compute-0 ovn_controller[151684]: 2025-10-07T14:21:53Z|00902|binding|INFO|Setting lport e15b4d99-880b-4329-88b4-7851081f5ad7 ovn-installed in OVS
Oct 07 14:21:53 compute-0 ovn_controller[151684]: 2025-10-07T14:21:53Z|00903|binding|INFO|Setting lport e15b4d99-880b-4329-88b4-7851081f5ad7 up in Southbound
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000056.
Oct 07 14:21:53 compute-0 nova_compute[259550]: 2025-10-07 14:21:53.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.783 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9f853e-a0c4-4810-bd26-c149d8eb111e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.807 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1d5f1e-04ef-41cc-bc27-daa95a1ebd33]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.837 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[064791d0-563b-40a6-b25c-9c19d82102f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.844 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd64959-1899-4f69-a921-51bfe2c92f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.8458] manager: (tapc81f0e54-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.874 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[45ec2d46-0578-4efd-8770-e637925e7099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.878 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8740d924-99c3-4e08-8e75-74f1f4ff7da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 NetworkManager[44949]: <info>  [1759846913.9011] device (tapc81f0e54-a0): carrier: link connected
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.907 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebcbadf-8217-4a85-a42c-7502fa0c0588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.925 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75ad7e21-c6cb-4d64-8120-be21eaf208e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc81f0e54-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c5:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749746, 'reachable_time': 15903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347683, 'error': None, 'target': 'ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f274024-a62b-4d31-b141-8ebbfa969974]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:c55b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749746, 'tstamp': 749746}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347684, 'error': None, 'target': 'ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.7 MiB/s wr, 194 op/s
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.959 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5667254e-1019-497b-86c5-dee35cc09682]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc81f0e54-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c5:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749746, 'reachable_time': 15903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347685, 'error': None, 'target': 'ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:53.994 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[355641ae-4752-4031-9cf5-dddd350239fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0aed1320-aebf-4b9e-a43b-9a33a53a8fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.059 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc81f0e54-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.059 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.060 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc81f0e54-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:54 compute-0 NetworkManager[44949]: <info>  [1759846914.0626] manager: (tapc81f0e54-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 kernel: tapc81f0e54-a0: entered promiscuous mode
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.068 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc81f0e54-a0, col_values=(('external_ids', {'iface-id': '7b25df7d-7bdc-43be-9108-73b9b4074002'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 ovn_controller[151684]: 2025-10-07T14:21:54Z|00904|binding|INFO|Releasing lport 7b25df7d-7bdc-43be-9108-73b9b4074002 from this chassis (sb_readonly=0)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.096 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c81f0e54-a629-4cc1-82e0-c91f2de9be43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c81f0e54-a629-4cc1-82e0-c91f2de9be43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.098 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d545387-8c6d-4f86-ba96-f1d67b2f06b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.098 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c81f0e54-a629-4cc1-82e0-c91f2de9be43
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c81f0e54-a629-4cc1-82e0-c91f2de9be43.pid.haproxy
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c81f0e54-a629-4cc1-82e0-c91f2de9be43
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.099 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'env', 'PROCESS_TAG=haproxy-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c81f0e54-a629-4cc1-82e0-c91f2de9be43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:21:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/40085115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:21:54 compute-0 ceph-mon[74295]: pgmap v1740: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.7 MiB/s wr, 194 op/s
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.293 2 DEBUG nova.compute.manager [req-b8868d46-6a66-4432-b284-f2a40aa284b9 req-62a1c416-160d-469a-a7f9-ea2c92cf61a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.294 2 DEBUG oslo_concurrency.lockutils [req-b8868d46-6a66-4432-b284-f2a40aa284b9 req-62a1c416-160d-469a-a7f9-ea2c92cf61a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.294 2 DEBUG oslo_concurrency.lockutils [req-b8868d46-6a66-4432-b284-f2a40aa284b9 req-62a1c416-160d-469a-a7f9-ea2c92cf61a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.294 2 DEBUG oslo_concurrency.lockutils [req-b8868d46-6a66-4432-b284-f2a40aa284b9 req-62a1c416-160d-469a-a7f9-ea2c92cf61a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.295 2 DEBUG nova.compute.manager [req-b8868d46-6a66-4432-b284-f2a40aa284b9 req-62a1c416-160d-469a-a7f9-ea2c92cf61a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Processing event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.397 2 INFO nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Creating config drive at /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.403 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxax82xq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:54 compute-0 podman[347765]: 2025-10-07 14:21:54.485147261 +0000 UTC m=+0.048317788 container create eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:21:54 compute-0 systemd[1]: Started libpod-conmon-eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309.scope.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.542 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxax82xq" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:54 compute-0 podman[347765]: 2025-10-07 14:21:54.458773244 +0000 UTC m=+0.021943771 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:21:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be7bf7714a99ee89be5050702845427e856efd24b6a62b4a61c711a8dfa1a8e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.570 2 DEBUG nova.storage.rbd_utils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] rbd image ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.579 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:54 compute-0 podman[347765]: 2025-10-07 14:21:54.583713534 +0000 UTC m=+0.146884061 container init eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:21:54 compute-0 podman[347765]: 2025-10-07 14:21:54.590120894 +0000 UTC m=+0.153291401 container start eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:21:54 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [NOTICE]   (347805) : New worker (347807) forked
Oct 07 14:21:54 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [NOTICE]   (347805) : Loading success.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.626 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846914.6002982, 73126140-a6e8-4630-a01d-3738d29c02b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.627 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] VM Started (Lifecycle Event)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.629 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.634 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.639 2 INFO nova.virt.libvirt.driver [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Instance spawned successfully.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.639 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.665 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.670 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.670 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.671 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.671 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.672 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.672 2 DEBUG nova.virt.libvirt.driver [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.682 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.682 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846914.600479, 73126140-a6e8-4630-a01d-3738d29c02b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.683 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] VM Paused (Lifecycle Event)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.710 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.713 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846914.633223, 73126140-a6e8-4630-a01d-3738d29c02b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.713 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] VM Resumed (Lifecycle Event)
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.737 2 INFO nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Took 11.15 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.738 2 DEBUG nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.739 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.745 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.764 2 DEBUG oslo_concurrency.processutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config ad8af3d5-66d4-4db1-bd40-42d766f2fde7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.765 2 INFO nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deleting local config drive /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7/disk.config because it was imported into RBD.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.783 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.820 2 INFO nova.compute.manager [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Took 12.26 seconds to build instance.
Oct 07 14:21:54 compute-0 kernel: tapf13bcd51-d7: entered promiscuous mode
Oct 07 14:21:54 compute-0 NetworkManager[44949]: <info>  [1759846914.8264] manager: (tapf13bcd51-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Oct 07 14:21:54 compute-0 ovn_controller[151684]: 2025-10-07T14:21:54Z|00905|binding|INFO|Claiming lport f13bcd51-d70f-4e6d-9161-9020447812fc for this chassis.
Oct 07 14:21:54 compute-0 ovn_controller[151684]: 2025-10-07T14:21:54Z|00906|binding|INFO|f13bcd51-d70f-4e6d-9161-9020447812fc: Claiming fa:16:3e:cd:66:73 10.100.0.14
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.838 2 DEBUG oslo_concurrency.lockutils [None req-fdb4792c-9d22-4e80-af3d-a62b50ecb4da 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.840 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:66:73 10.100.0.14'], port_security=['fa:16:3e:cd:66:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ad8af3d5-66d4-4db1-bd40-42d766f2fde7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0675ef8ab0b84423b35c16687980a886', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8f5a877-2f83-48ed-ba2f-9e5e11b5344f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575b0a6-4039-4056-90b1-7c4d395195d8, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f13bcd51-d70f-4e6d-9161-9020447812fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.841 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f13bcd51-d70f-4e6d-9161-9020447812fc in datapath 07bd0a98-4ed2-404c-b943-5ab56d6fbe70 bound to our chassis
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.843 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07bd0a98-4ed2-404c-b943-5ab56d6fbe70
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.857 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d471c0-5c99-4670-87b6-6b6d7cd5c13d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.858 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07bd0a98-41 in ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.861 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07bd0a98-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5987a798-afc7-4bfa-85f7-d847e98a8fee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_controller[151684]: 2025-10-07T14:21:54Z|00907|binding|INFO|Setting lport f13bcd51-d70f-4e6d-9161-9020447812fc ovn-installed in OVS
Oct 07 14:21:54 compute-0 ovn_controller[151684]: 2025-10-07T14:21:54Z|00908|binding|INFO|Setting lport f13bcd51-d70f-4e6d-9161-9020447812fc up in Southbound
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7378fd4e-2ecd-4e4c-ba3d-a0fdc6fa5b96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 nova_compute[259550]: 2025-10-07 14:21:54.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:54 compute-0 systemd-machined[214580]: New machine qemu-109-instance-00000057.
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.878 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[afcae354-17fc-4e69-8cdb-b8e6bafd029c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000057.
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.894 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25cfb3d2-043f-4dbb-8cc7-aa05c7b7fdcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 systemd-udevd[347851]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:54 compute-0 NetworkManager[44949]: <info>  [1759846914.9217] device (tapf13bcd51-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:21:54 compute-0 NetworkManager[44949]: <info>  [1759846914.9223] device (tapf13bcd51-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.931 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bdaad320-d4e6-4e9f-ac4d-92fe1fe5888a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 systemd-udevd[347854]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:21:54 compute-0 NetworkManager[44949]: <info>  [1759846914.9366] manager: (tap07bd0a98-40): new Veth device (/org/freedesktop/NetworkManager/Devices/381)
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.935 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[854e7cc9-bfef-4249-852d-684f7ad603d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.969 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c02a7e1b-f0a6-49f2-bd21-d9d92c644588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:54.975 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5398c2-acc5-40b1-acdd-001051b7a4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 NetworkManager[44949]: <info>  [1759846915.0023] device (tap07bd0a98-40): carrier: link connected
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.007 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ff81e2ef-bf56-46b0-8882-96614609f2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.025 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a1caf68a-c5fa-4620-b6aa-c7d53fdfe8cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07bd0a98-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:01:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749856, 'reachable_time': 30925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347880, 'error': None, 'target': 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.040 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc15fb6-45fb-4a76-8293-5c0cc2eda888]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:15b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749856, 'tstamp': 749856}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347881, 'error': None, 'target': 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d2e668-4656-494b-a290-141392dc1007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07bd0a98-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:01:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749856, 'reachable_time': 30925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347882, 'error': None, 'target': 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.086 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[174ebfbb-9e77-4b67-8173-5f433d2bbc92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37d231b4-9239-4212-95b8-f4e82500d811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07bd0a98-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.153 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07bd0a98-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:55 compute-0 NetworkManager[44949]: <info>  [1759846915.1565] manager: (tap07bd0a98-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct 07 14:21:55 compute-0 kernel: tap07bd0a98-40: entered promiscuous mode
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.161 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07bd0a98-40, col_values=(('external_ids', {'iface-id': '55cdac1c-5f11-4b25-a2c3-97af6bb70e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:55 compute-0 ovn_controller[151684]: 2025-10-07T14:21:55Z|00909|binding|INFO|Releasing lport 55cdac1c-5f11-4b25-a2c3-97af6bb70e45 from this chassis (sb_readonly=0)
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.190 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07bd0a98-4ed2-404c-b943-5ab56d6fbe70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07bd0a98-4ed2-404c-b943-5ab56d6fbe70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.191 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8fffde-93a5-430c-8047-b7081f760dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.192 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-07bd0a98-4ed2-404c-b943-5ab56d6fbe70
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/07bd0a98-4ed2-404c-b943-5ab56d6fbe70.pid.haproxy
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 07bd0a98-4ed2-404c-b943-5ab56d6fbe70
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:21:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:55.193 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'env', 'PROCESS_TAG=haproxy-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07bd0a98-4ed2-404c-b943-5ab56d6fbe70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.220 2 DEBUG nova.network.neutron [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Updated VIF entry in instance network info cache for port f13bcd51-d70f-4e6d-9161-9020447812fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.221 2 DEBUG nova.network.neutron [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Updating instance_info_cache with network_info: [{"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.261 2 DEBUG oslo_concurrency.lockutils [req-26c1a113-5d53-48d6-8926-b89e7a6da3e4 req-09d16173-9cbb-4594-9f9c-cec24f4c14be 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ad8af3d5-66d4-4db1-bd40-42d766f2fde7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:21:55 compute-0 podman[347914]: 2025-10-07 14:21:55.609251706 +0000 UTC m=+0.063572761 container create 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:21:55 compute-0 systemd[1]: Started libpod-conmon-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee.scope.
Oct 07 14:21:55 compute-0 podman[347914]: 2025-10-07 14:21:55.572796974 +0000 UTC m=+0.027118049 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:21:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7ffeed998343da5b6f9862a42b1f51bbf1206668c387a270d60de94496b3a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:21:55 compute-0 podman[347914]: 2025-10-07 14:21:55.684535785 +0000 UTC m=+0.138856860 container init 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:21:55 compute-0 podman[347914]: 2025-10-07 14:21:55.690015249 +0000 UTC m=+0.144336304 container start 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.699 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846900.6977396, 5fb1bd6b-903f-4d62-bc97-25ecaa45deac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.700 2 INFO nova.compute.manager [-] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] VM Stopped (Lifecycle Event)
Oct 07 14:21:55 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [NOTICE]   (347974) : New worker (347977) forked
Oct 07 14:21:55 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [NOTICE]   (347974) : Loading success.
Oct 07 14:21:55 compute-0 nova_compute[259550]: 2025-10-07 14:21:55.741 2 DEBUG nova.compute.manager [None req-bc1144b3-e1da-4132-ba50-c4d3e327c982 - - - - - -] [instance: 5fb1bd6b-903f-4d62-bc97-25ecaa45deac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.155 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846916.15507, ad8af3d5-66d4-4db1-bd40-42d766f2fde7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] VM Started (Lifecycle Event)
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.185 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.190 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846916.1553555, ad8af3d5-66d4-4db1-bd40-42d766f2fde7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.190 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] VM Paused (Lifecycle Event)
Oct 07 14:21:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.239 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.260 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.297 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.612 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.613 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.613 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.614 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.614 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] No waiting events found dispatching network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.614 2 WARNING nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received unexpected event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 for instance with vm_state active and task_state None.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.615 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.615 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.615 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.615 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.616 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Processing event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.616 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.616 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.617 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.617 2 DEBUG oslo_concurrency.lockutils [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.617 2 DEBUG nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] No waiting events found dispatching network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.617 2 WARNING nova.compute.manager [req-54448a87-de6e-4143-9a6e-120073486528 req-63853a31-b55d-454e-b731-19f13c601897 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received unexpected event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc for instance with vm_state building and task_state spawning.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.618 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.621 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846916.6210022, ad8af3d5-66d4-4db1-bd40-42d766f2fde7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.621 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] VM Resumed (Lifecycle Event)
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.623 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.626 2 INFO nova.virt.libvirt.driver [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Instance spawned successfully.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.626 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.684 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.689 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.690 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.690 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.690 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.691 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.691 2 DEBUG nova.virt.libvirt.driver [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.696 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.809 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.855 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.855 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.856 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.856 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.856 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.858 2 INFO nova.compute.manager [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Terminating instance
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.859 2 DEBUG nova.compute.manager [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.900 2 INFO nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 12.56 seconds to spawn the instance on the hypervisor.
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.901 2 DEBUG nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:21:56 compute-0 kernel: tape15b4d99-88 (unregistering): left promiscuous mode
Oct 07 14:21:56 compute-0 NetworkManager[44949]: <info>  [1759846916.9088] device (tape15b4d99-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:56 compute-0 ovn_controller[151684]: 2025-10-07T14:21:56Z|00910|binding|INFO|Releasing lport e15b4d99-880b-4329-88b4-7851081f5ad7 from this chassis (sb_readonly=0)
Oct 07 14:21:56 compute-0 ovn_controller[151684]: 2025-10-07T14:21:56Z|00911|binding|INFO|Setting lport e15b4d99-880b-4329-88b4-7851081f5ad7 down in Southbound
Oct 07 14:21:56 compute-0 ovn_controller[151684]: 2025-10-07T14:21:56Z|00912|binding|INFO|Removing iface tape15b4d99-88 ovn-installed in OVS
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:56 compute-0 nova_compute[259550]: 2025-10-07 14:21:56.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:56 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 07 14:21:56 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Consumed 2.952s CPU time.
Oct 07 14:21:56 compute-0 systemd-machined[214580]: Machine qemu-108-instance-00000056 terminated.
Oct 07 14:21:57 compute-0 ceph-mon[74295]: pgmap v1741: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.046 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:ef:ce 10.100.0.3'], port_security=['fa:16:3e:ec:ef:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73126140-a6e8-4630-a01d-3738d29c02b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81f15d4c85484c8598dbfb6cc7690b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8735795-c3aa-47d6-9757-7e07deb863b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74b3dbba-d952-49f6-881e-af607431f63c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e15b4d99-880b-4329-88b4-7851081f5ad7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.047 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e15b4d99-880b-4329-88b4-7851081f5ad7 in datapath c81f0e54-a629-4cc1-82e0-c91f2de9be43 unbound from our chassis
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.049 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c81f0e54-a629-4cc1-82e0-c91f2de9be43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.050 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbb6f51-5920-4753-a8a4-a7834f939e79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.051 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43 namespace which is not needed anymore
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.058 2 INFO nova.compute.manager [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 13.95 seconds to build instance.
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.090 2 INFO nova.virt.libvirt.driver [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Instance destroyed successfully.
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.090 2 DEBUG nova.objects.instance [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lazy-loading 'resources' on Instance uuid 73126140-a6e8-4630-a01d-3738d29c02b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.145 2 DEBUG oslo_concurrency.lockutils [None req-17fc4092-4aa2-44ef-9c13-f4594d10cfc5 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [NOTICE]   (347805) : haproxy version is 2.8.14-c23fe91
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [NOTICE]   (347805) : path to executable is /usr/sbin/haproxy
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [WARNING]  (347805) : Exiting Master process...
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [WARNING]  (347805) : Exiting Master process...
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [ALERT]    (347805) : Current worker (347807) exited with code 143 (Terminated)
Oct 07 14:21:57 compute-0 neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43[347782]: [WARNING]  (347805) : All workers exited. Exiting... (0)
Oct 07 14:21:57 compute-0 systemd[1]: libpod-eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309.scope: Deactivated successfully.
Oct 07 14:21:57 compute-0 podman[348018]: 2025-10-07 14:21:57.205222587 +0000 UTC m=+0.049178091 container died eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-be7bf7714a99ee89be5050702845427e856efd24b6a62b4a61c711a8dfa1a8e4-merged.mount: Deactivated successfully.
Oct 07 14:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309-userdata-shm.mount: Deactivated successfully.
Oct 07 14:21:57 compute-0 podman[348018]: 2025-10-07 14:21:57.258299049 +0000 UTC m=+0.102254553 container cleanup eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:21:57 compute-0 systemd[1]: libpod-conmon-eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309.scope: Deactivated successfully.
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.318 2 DEBUG nova.virt.libvirt.vif [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-410070911',display_name='tempest-ServerGroupTestJSON-server-410070911',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-410070911',id=86,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81f15d4c85484c8598dbfb6cc7690b09',ramdisk_id='',reservation_id='r-di69khpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1239398362',owner_user_name='tempest-ServerGroupTestJSON-1239398362-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:54Z,user_data=None,user_id='60b5888940d84f1ba06f4db3c95083bd',uuid=73126140-a6e8-4630-a01d-3738d29c02b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.319 2 DEBUG nova.network.os_vif_util [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converting VIF {"id": "e15b4d99-880b-4329-88b4-7851081f5ad7", "address": "fa:16:3e:ec:ef:ce", "network": {"id": "c81f0e54-a629-4cc1-82e0-c91f2de9be43", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1371127941-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81f15d4c85484c8598dbfb6cc7690b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15b4d99-88", "ovs_interfaceid": "e15b4d99-880b-4329-88b4-7851081f5ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.320 2 DEBUG nova.network.os_vif_util [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.320 2 DEBUG os_vif [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape15b4d99-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.329 2 INFO os_vif [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=e15b4d99-880b-4329-88b4-7851081f5ad7,network=Network(c81f0e54-a629-4cc1-82e0-c91f2de9be43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15b4d99-88')
Oct 07 14:21:57 compute-0 podman[348047]: 2025-10-07 14:21:57.342188145 +0000 UTC m=+0.063230472 container remove eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.350 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[567628ad-1bf9-4e9e-a602-7ca41b11bf86]: (4, ('Tue Oct  7 02:21:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43 (eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309)\neb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309\nTue Oct  7 02:21:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43 (eb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309)\neb7c8de26a0fba58dc48ba5d5a95bf52ce207185a3d832c5654338f882fab309\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[627b7aa1-5341-4955-95b5-d6a713d7a03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc81f0e54-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 kernel: tapc81f0e54-a0: left promiscuous mode
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.376 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[919fc3e8-9db1-473c-a75c-5b80e9b7efaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.404 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d29f2cbb-b127-45b6-b491-3c37de25e144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.405 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[326281d1-c757-4c8e-b435-7656a3397edf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.422 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7cadd96e-ecc5-4f94-aabe-b8eaccef5556]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749739, 'reachable_time': 26386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348080, 'error': None, 'target': 'ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.424 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c81f0e54-a629-4cc1-82e0-c91f2de9be43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:21:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:57.425 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dcce39a3-6b6c-4dba-a709-e6caa36120b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:57 compute-0 systemd[1]: run-netns-ovnmeta\x2dc81f0e54\x2da629\x2d4cc1\x2d82e0\x2dc91f2de9be43.mount: Deactivated successfully.
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.507 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.507 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.508 2 INFO nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Shelving
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.530 2 DEBUG nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:21:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.997 2 INFO nova.virt.libvirt.driver [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Deleting instance files /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8_del
Oct 07 14:21:57 compute-0 nova_compute[259550]: 2025-10-07 14:21:57.998 2 INFO nova.virt.libvirt.driver [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Deletion of /var/lib/nova/instances/73126140-a6e8-4630-a01d-3738d29c02b8_del complete
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.052 2 INFO nova.compute.manager [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Took 1.19 seconds to destroy the instance on the hypervisor.
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.053 2 DEBUG oslo.service.loopingcall [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.054 2 DEBUG nova.compute.manager [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.054 2 DEBUG nova.network.neutron [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.740 2 DEBUG nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-unplugged-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.741 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] No waiting events found dispatching network-vif-unplugged-e15b4d99-880b-4329-88b4-7851081f5ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-unplugged-e15b4d99-880b-4329-88b4-7851081f5ad7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.742 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.743 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.743 2 DEBUG oslo_concurrency.lockutils [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.743 2 DEBUG nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] No waiting events found dispatching network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.743 2 WARNING nova.compute.manager [req-20af2b58-81cb-4a81-a3b4-9e2122db5f62 req-32adc6a2-9165-47d1-9c2a-471393194e29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received unexpected event network-vif-plugged-e15b4d99-880b-4329-88b4-7851081f5ad7 for instance with vm_state active and task_state deleting.
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.860 2 DEBUG nova.network.neutron [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:21:58 compute-0 nova_compute[259550]: 2025-10-07 14:21:58.971 2 INFO nova.compute.manager [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Took 0.92 seconds to deallocate network for instance.
Oct 07 14:21:59 compute-0 ceph-mon[74295]: pgmap v1742: 305 pgs: 305 active+clean; 385 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.097 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.098 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.241 2 DEBUG oslo_concurrency.processutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:21:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:21:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141755829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.728 2 DEBUG oslo_concurrency.processutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.733 2 DEBUG nova.compute.provider_tree [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.755 2 DEBUG nova.scheduler.client.report [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.813 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:59 compute-0 kernel: tap6e1eba9e-f1 (unregistering): left promiscuous mode
Oct 07 14:21:59 compute-0 NetworkManager[44949]: <info>  [1759846919.8391] device (tap6e1eba9e-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:21:59 compute-0 ovn_controller[151684]: 2025-10-07T14:21:59Z|00913|binding|INFO|Releasing lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 from this chassis (sb_readonly=0)
Oct 07 14:21:59 compute-0 ovn_controller[151684]: 2025-10-07T14:21:59Z|00914|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 down in Southbound
Oct 07 14:21:59 compute-0 ovn_controller[151684]: 2025-10-07T14:21:59Z|00915|binding|INFO|Removing iface tap6e1eba9e-f1 ovn-installed in OVS
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.858 2 INFO nova.scheduler.client.report [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Deleted allocations for instance 73126140-a6e8-4630-a01d-3738d29c02b8
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.863 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.865 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.866 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.885 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e49156c-e1cd-4b0c-a93d-79c7261da5ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 07 14:21:59 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004e.scope: Consumed 17.420s CPU time.
Oct 07 14:21:59 compute-0 systemd-machined[214580]: Machine qemu-95-instance-0000004e terminated.
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.915 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc6b262-dfb0-4cda-98b1-d72003c1d264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.918 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[06086ef7-e583-4739-9f95-566c080785df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 364 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 521 KiB/s wr, 162 op/s
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.945 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[871260fa-92a9-4c5b-b011-724dc417c342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f0c5b9-9270-4659-9ca1-00f196e96f43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348115, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.980 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdbfa64-3f79-4ec6-9b0d-8f22e5bef00f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348116, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348116, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.982 2 DEBUG oslo_concurrency.lockutils [None req-3c7f309d-0f1f-41d4-9702-07e363ef83dd 60b5888940d84f1ba06f4db3c95083bd 81f15d4c85484c8598dbfb6cc7690b09 - - default default] Lock "73126140-a6e8-4630-a01d-3738d29c02b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:59 compute-0 nova_compute[259550]: 2025-10-07 14:21:59.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:21:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:21:59.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4141755829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:00.053 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:00.053 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:00.054 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.549 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance shutdown successfully after 3 seconds.
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.554 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.555 2 DEBUG nova.objects.instance [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'numa_topology' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.865 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Beginning cold snapshot process
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.958 2 DEBUG nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.958 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.959 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.959 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.959 2 DEBUG nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.960 2 WARNING nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state active and task_state shelving_image_pending_upload.
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.960 2 DEBUG nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.960 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.960 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.960 2 DEBUG oslo_concurrency.lockutils [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.961 2 DEBUG nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:00 compute-0 nova_compute[259550]: 2025-10-07 14:22:00.961 2 WARNING nova.compute.manager [req-06fef322-1fed-4c24-86ea-01374932115c req-baa261b6-b61b-42a4-b7a0-df7601c403ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state active and task_state shelving_image_pending_upload.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.022 2 DEBUG nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Received event network-vif-deleted-e15b4d99-880b-4329-88b4-7851081f5ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.023 2 DEBUG nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.023 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.023 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.023 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.023 2 DEBUG nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Processing event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.024 2 DEBUG nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.024 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.024 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.024 2 DEBUG oslo_concurrency.lockutils [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.025 2 DEBUG nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] No waiting events found dispatching network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.025 2 WARNING nova.compute.manager [req-8f8d55d0-1b73-4099-beab-2966a30b0af9 req-71b5d13d-97d4-47ba-b7dd-6050930b5a37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received unexpected event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 for instance with vm_state building and task_state spawning.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.026 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:01 compute-0 ceph-mon[74295]: pgmap v1743: 305 pgs: 305 active+clean; 364 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 521 KiB/s wr, 162 op/s
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.040 2 DEBUG nova.virt.libvirt.imagebackend [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.043 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846921.036484, 2f0de516-cf33-49b6-b036-aee8c2f72943 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.044 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] VM Resumed (Lifecycle Event)
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.048 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.052 2 INFO nova.virt.libvirt.driver [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance spawned successfully.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.053 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.072 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.077 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.082 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.082 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.083 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.083 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.083 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.084 2 DEBUG nova.virt.libvirt.driver [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.107 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.148 2 INFO nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 15.73 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.148 2 DEBUG nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.215 2 INFO nova.compute.manager [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 16.69 seconds to build instance.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.229 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(97b2f88dc1be43ee906215f7a98ce3b8) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:22:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.240987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921241034, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1432, "num_deletes": 258, "total_data_size": 1862575, "memory_usage": 1901864, "flush_reason": "Manual Compaction"}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921252865, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1260057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35225, "largest_seqno": 36656, "table_properties": {"data_size": 1254517, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14950, "raw_average_key_size": 21, "raw_value_size": 1242186, "raw_average_value_size": 1797, "num_data_blocks": 120, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846822, "oldest_key_time": 1759846822, "file_creation_time": 1759846921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 11924 microseconds, and 5801 cpu microseconds.
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.252913) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1260057 bytes OK
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.252936) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.255090) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.255109) EVENT_LOG_v1 {"time_micros": 1759846921255103, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.255128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1856025, prev total WAL file size 1856025, number of live WAL files 2.
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.255867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1230KB)], [77(9621KB)]
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921255908, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11112422, "oldest_snapshot_seqno": -1}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6167 keys, 8383403 bytes, temperature: kUnknown
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921308884, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8383403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8343264, "index_size": 23645, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 155462, "raw_average_key_size": 25, "raw_value_size": 8233714, "raw_average_value_size": 1335, "num_data_blocks": 958, "num_entries": 6167, "num_filter_entries": 6167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.309131) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8383403 bytes
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.310487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.3 rd, 157.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(15.5) write-amplify(6.7) OK, records in: 6652, records dropped: 485 output_compression: NoCompression
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.310511) EVENT_LOG_v1 {"time_micros": 1759846921310501, "job": 44, "event": "compaction_finished", "compaction_time_micros": 53086, "compaction_time_cpu_micros": 20695, "output_level": 6, "num_output_files": 1, "total_output_size": 8383403, "num_input_records": 6652, "num_output_records": 6167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921310852, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846921312605, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.255800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.312664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.312670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.312671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.312673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:22:01.312675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.392 2 DEBUG oslo_concurrency.lockutils [None req-4a07a1f1-7164-4440-b45a-329fc715ac4d 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.488 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.488 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.489 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.489 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.489 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.491 2 INFO nova.compute.manager [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Terminating instance
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.492 2 DEBUG nova.compute.manager [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:01 compute-0 kernel: tapf13bcd51-d7 (unregistering): left promiscuous mode
Oct 07 14:22:01 compute-0 NetworkManager[44949]: <info>  [1759846921.5308] device (tapf13bcd51-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:01 compute-0 ovn_controller[151684]: 2025-10-07T14:22:01Z|00916|binding|INFO|Releasing lport f13bcd51-d70f-4e6d-9161-9020447812fc from this chassis (sb_readonly=0)
Oct 07 14:22:01 compute-0 ovn_controller[151684]: 2025-10-07T14:22:01Z|00917|binding|INFO|Setting lport f13bcd51-d70f-4e6d-9161-9020447812fc down in Southbound
Oct 07 14:22:01 compute-0 ovn_controller[151684]: 2025-10-07T14:22:01Z|00918|binding|INFO|Removing iface tapf13bcd51-d7 ovn-installed in OVS
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.582 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:66:73 10.100.0.14'], port_security=['fa:16:3e:cd:66:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ad8af3d5-66d4-4db1-bd40-42d766f2fde7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0675ef8ab0b84423b35c16687980a886', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8f5a877-2f83-48ed-ba2f-9e5e11b5344f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575b0a6-4039-4056-90b1-7c4d395195d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f13bcd51-d70f-4e6d-9161-9020447812fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:01 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 07 14:22:01 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000057.scope: Consumed 5.865s CPU time.
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.584 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f13bcd51-d70f-4e6d-9161-9020447812fc in datapath 07bd0a98-4ed2-404c-b943-5ab56d6fbe70 unbound from our chassis
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.586 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07bd0a98-4ed2-404c-b943-5ab56d6fbe70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.587 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e79498-758d-428f-9325-c2f4040db18b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.587 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 namespace which is not needed anymore
Oct 07 14:22:01 compute-0 systemd-machined[214580]: Machine qemu-109-instance-00000057 terminated.
Oct 07 14:22:01 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [NOTICE]   (347974) : haproxy version is 2.8.14-c23fe91
Oct 07 14:22:01 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [NOTICE]   (347974) : path to executable is /usr/sbin/haproxy
Oct 07 14:22:01 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [WARNING]  (347974) : Exiting Master process...
Oct 07 14:22:01 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [ALERT]    (347974) : Current worker (347977) exited with code 143 (Terminated)
Oct 07 14:22:01 compute-0 neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70[347965]: [WARNING]  (347974) : All workers exited. Exiting... (0)
Oct 07 14:22:01 compute-0 systemd[1]: libpod-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee.scope: Deactivated successfully.
Oct 07 14:22:01 compute-0 conmon[347965]: conmon 8c4351ba767e7d11a717 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee.scope/container/memory.events
Oct 07 14:22:01 compute-0 podman[348203]: 2025-10-07 14:22:01.711755175 +0000 UTC m=+0.044377903 container died 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.721 2 INFO nova.virt.libvirt.driver [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Instance destroyed successfully.
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.721 2 DEBUG nova.objects.instance [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lazy-loading 'resources' on Instance uuid ad8af3d5-66d4-4db1-bd40-42d766f2fde7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a7ffeed998343da5b6f9862a42b1f51bbf1206668c387a270d60de94496b3a6-merged.mount: Deactivated successfully.
Oct 07 14:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee-userdata-shm.mount: Deactivated successfully.
Oct 07 14:22:01 compute-0 podman[348203]: 2025-10-07 14:22:01.755417338 +0000 UTC m=+0.088040066 container cleanup 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:22:01 compute-0 systemd[1]: libpod-conmon-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee.scope: Deactivated successfully.
Oct 07 14:22:01 compute-0 podman[348248]: 2025-10-07 14:22:01.813741649 +0000 UTC m=+0.036590777 container remove 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dde34601-90bb-416b-8b2f-f4585ecd2096]: (4, ('Tue Oct  7 02:22:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 (8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee)\n8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee\nTue Oct  7 02:22:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 (8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee)\n8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f58bda8-c763-4695-bdb2-b2518761fc62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.821 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07bd0a98-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:01 compute-0 kernel: tap07bd0a98-40: left promiscuous mode
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.842 2 DEBUG nova.virt.libvirt.vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-502915421',display_name='tempest-ServerMetadataNegativeTestJSON-server-502915421',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-502915421',id=87,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0675ef8ab0b84423b35c16687980a886',ramdisk_id='',reservation_id='r-a57ir3jh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1962853244',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1962853244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:56Z,user_data=None,user_id='f3a5238c8d6b406aa83ab9cfd1b31cf1',uuid=ad8af3d5-66d4-4db1-bd40-42d766f2fde7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.843 2 DEBUG nova.network.os_vif_util [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converting VIF {"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.844 2 DEBUG nova.network.os_vif_util [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.844 2 DEBUG os_vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf13bcd51-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.852 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8583bcff-c4ec-4eae-87a0-b602fe45c843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:01 compute-0 nova_compute[259550]: 2025-10-07 14:22:01.867 2 INFO os_vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7')
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f412c636-f322-4f93-ae1b-53808d39c47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.875 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f4344-7373-41cc-8c88-19d1fd907203]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.894 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3f75a2-5fea-4c32-934f-661acc5e6bd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749849, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348276, 'error': None, 'target': 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.897 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:22:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.897 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9db2a649-4c26-417f-a94d-7ac26ebbde46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d07bd0a98\x2d4ed2\x2d404c\x2db943\x2d5ab56d6fbe70.mount: Deactivated successfully.
Oct 07 14:22:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 339 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 07 14:22:02 compute-0 rsyslogd[1004]: imjournal: 18258 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 07 14:22:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 07 14:22:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 07 14:22:02 compute-0 ceph-mon[74295]: pgmap v1744: 305 pgs: 305 active+clean; 339 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 07 14:22:02 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.324 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk@97b2f88dc1be43ee906215f7a98ce3b8 to images/4a1936df-f472-466a-a569-3e6ba7a787d4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.433 2 INFO nova.virt.libvirt.driver [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deleting instance files /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_del
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.434 2 INFO nova.virt.libvirt.driver [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deletion of /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_del complete
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.449 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/4a1936df-f472-466a-a569-3e6ba7a787d4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.514 2 INFO nova.compute.manager [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.514 2 DEBUG oslo.service.loopingcall [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.515 2 DEBUG nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.515 2 DEBUG nova.network.neutron [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:02 compute-0 nova_compute[259550]: 2025-10-07 14:22:02.936 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(97b2f88dc1be43ee906215f7a98ce3b8) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.160 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.160 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] No waiting events found dispatching network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] No waiting events found dispatching network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.162 2 WARNING nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received unexpected event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc for instance with vm_state active and task_state deleting.
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.202 2 DEBUG nova.network.neutron [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.222 2 INFO nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 0.71 seconds to deallocate network for instance.
Oct 07 14:22:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 07 14:22:03 compute-0 ceph-mon[74295]: osdmap e235: 3 total, 3 up, 3 in
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.259 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.260 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 07 14:22:03 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.318 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(4a1936df-f472-466a-a569-3e6ba7a787d4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.355 2 DEBUG nova.compute.manager [req-d54c52f2-b80f-41af-adc6-92f4cbb6da85 req-9ddc8412-f384-4a33-8955-0ef99826bccd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-deleted-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.404 2 DEBUG oslo_concurrency.processutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537870051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.850 2 DEBUG oslo_concurrency.processutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.856 2 DEBUG nova.compute.provider_tree [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:03 compute-0 nova_compute[259550]: 2025-10-07 14:22:03.921 2 DEBUG nova.scheduler.client.report [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 386 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.3 MiB/s wr, 414 op/s
Oct 07 14:22:04 compute-0 nova_compute[259550]: 2025-10-07 14:22:04.030 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:04 compute-0 nova_compute[259550]: 2025-10-07 14:22:04.085 2 INFO nova.scheduler.client.report [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Deleted allocations for instance ad8af3d5-66d4-4db1-bd40-42d766f2fde7
Oct 07 14:22:04 compute-0 nova_compute[259550]: 2025-10-07 14:22:04.233 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 07 14:22:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 07 14:22:04 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 07 14:22:04 compute-0 ceph-mon[74295]: osdmap e236: 3 total, 3 up, 3 in
Oct 07 14:22:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3537870051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:04 compute-0 ceph-mon[74295]: pgmap v1747: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 386 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.3 MiB/s wr, 414 op/s
Oct 07 14:22:05 compute-0 ceph-mon[74295]: osdmap e237: 3 total, 3 up, 3 in
Oct 07 14:22:05 compute-0 ovn_controller[151684]: 2025-10-07T14:22:05Z|00919|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:22:05 compute-0 ovn_controller[151684]: 2025-10-07T14:22:05Z|00920|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:22:05 compute-0 sudo[348402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:05 compute-0 sudo[348402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:05 compute-0 sudo[348402]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:05 compute-0 sudo[348427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:22:05 compute-0 sudo[348427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:05 compute-0 sudo[348427]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:05 compute-0 sudo[348452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:05 compute-0 sudo[348452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:05 compute-0 sudo[348452]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.799 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Snapshot image upload complete
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.799 2 DEBUG nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:05 compute-0 sudo[348477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:22:05 compute-0 sudo[348477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.870 2 INFO nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Shelve offloading
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.877 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.877 2 DEBUG nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG nova.network.neutron [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.894 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.895 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:05 compute-0 nova_compute[259550]: 2025-10-07 14:22:05.932 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 481 op/s
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.141 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.144 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.152 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.152 2 INFO nova.compute.claims [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:06 compute-0 sudo[348477]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:06 compute-0 ceph-mon[74295]: pgmap v1749: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 481 op/s
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.361 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d1a88338-6437-484e-8c42-0319df240c32 does not exist
Oct 07 14:22:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bf38e8d2-bb48-4150-8017-7c3da1158835 does not exist
Oct 07 14:22:06 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1287e576-192e-4c3b-bbce-2faa29287201 does not exist
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:22:06 compute-0 sudo[348534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:06 compute-0 sudo[348534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:06 compute-0 sudo[348534]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:06 compute-0 sudo[348571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:22:06 compute-0 sudo[348571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:06 compute-0 sudo[348571]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:06 compute-0 podman[348559]: 2025-10-07 14:22:06.535956415 +0000 UTC m=+0.064460263 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:22:06 compute-0 podman[348558]: 2025-10-07 14:22:06.563496552 +0000 UTC m=+0.092060992 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:06 compute-0 sudo[348640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:06 compute-0 sudo[348640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:06 compute-0 sudo[348640]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:06 compute-0 sudo[348666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:22:06 compute-0 sudo[348666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197930786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.842 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.848 2 DEBUG nova.compute.provider_tree [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:06 compute-0 nova_compute[259550]: 2025-10-07 14:22:06.906 2 DEBUG nova.scheduler.client.report [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:06 compute-0 podman[348733]: 2025-10-07 14:22:06.967730561 +0000 UTC m=+0.047942477 container create 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:22:07 compute-0 systemd[1]: Started libpod-conmon-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope.
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:06.940485472 +0000 UTC m=+0.020697388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.053 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.054 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:07.101744541 +0000 UTC m=+0.181956457 container init 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:07.109890107 +0000 UTC m=+0.190102023 container start 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:22:07 compute-0 objective_banzai[348749]: 167 167
Oct 07 14:22:07 compute-0 systemd[1]: libpod-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope: Deactivated successfully.
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:07.175744166 +0000 UTC m=+0.255956092 container attach 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:07.176252869 +0000 UTC m=+0.256464795 container died 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.213 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.214 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-483dce97899cb92b2e7c1f4f9fc91d7dcbb0cb254e905219494fff7e6ddba55e-merged.mount: Deactivated successfully.
Oct 07 14:22:07 compute-0 podman[348733]: 2025-10-07 14:22:07.259052286 +0000 UTC m=+0.339264212 container remove 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:22:07 compute-0 systemd[1]: libpod-conmon-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope: Deactivated successfully.
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:22:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1197930786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.440 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:07 compute-0 podman[348775]: 2025-10-07 14:22:07.44512211 +0000 UTC m=+0.039914904 container create 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:22:07 compute-0 systemd[1]: Started libpod-conmon-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope.
Oct 07 14:22:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:07 compute-0 podman[348775]: 2025-10-07 14:22:07.428582503 +0000 UTC m=+0.023375307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:07 compute-0 podman[348775]: 2025-10-07 14:22:07.524745364 +0000 UTC m=+0.119538178 container init 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:22:07 compute-0 podman[348775]: 2025-10-07 14:22:07.537757777 +0000 UTC m=+0.132550571 container start 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:22:07 compute-0 podman[348775]: 2025-10-07 14:22:07.540835798 +0000 UTC m=+0.135628592 container attach 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.582 2 DEBUG nova.policy [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:22:07 compute-0 nova_compute[259550]: 2025-10-07 14:22:07.779 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 326 op/s
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.325 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.326 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.326 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating image(s)
Oct 07 14:22:08 compute-0 ceph-mon[74295]: pgmap v1750: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 326 op/s
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.354 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.398 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.427 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.433 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.478 2 DEBUG nova.network.neutron [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.521 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.522 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.523 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.523 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.549 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.554 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:08 compute-0 great_poitras[348793]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:22:08 compute-0 great_poitras[348793]: --> relative data size: 1.0
Oct 07 14:22:08 compute-0 great_poitras[348793]: --> All data devices are unavailable
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.592 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:08 compute-0 systemd[1]: libpod-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Deactivated successfully.
Oct 07 14:22:08 compute-0 systemd[1]: libpod-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Consumed 1.016s CPU time.
Oct 07 14:22:08 compute-0 podman[348775]: 2025-10-07 14:22:08.617627114 +0000 UTC m=+1.212419918 container died 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 14:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124-merged.mount: Deactivated successfully.
Oct 07 14:22:08 compute-0 podman[348775]: 2025-10-07 14:22:08.679770677 +0000 UTC m=+1.274563471 container remove 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:22:08 compute-0 systemd[1]: libpod-conmon-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Deactivated successfully.
Oct 07 14:22:08 compute-0 sudo[348666]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:08 compute-0 sudo[348929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:08 compute-0 sudo[348929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:08 compute-0 sudo[348929]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:08 compute-0 sudo[348954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:22:08 compute-0 sudo[348954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:08 compute-0 sudo[348954]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.890 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:08 compute-0 sudo[348979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:08 compute-0 sudo[348979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:08 compute-0 sudo[348979]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.942 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Successfully created port: 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:22:08 compute-0 sudo[349017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:22:08 compute-0 sudo[349017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:08 compute-0 nova_compute[259550]: 2025-10-07 14:22:08.981 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.081 2 DEBUG nova.objects.instance [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.101 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.101 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Ensure instance console log exists: /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.305177877 +0000 UTC m=+0.034849761 container create 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:22:09 compute-0 systemd[1]: Started libpod-conmon-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope.
Oct 07 14:22:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.289910254 +0000 UTC m=+0.019582158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.388781275 +0000 UTC m=+0.118453179 container init 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.396198482 +0000 UTC m=+0.125870366 container start 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.399830697 +0000 UTC m=+0.129502611 container attach 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:22:09 compute-0 jovial_dewdney[349155]: 167 167
Oct 07 14:22:09 compute-0 systemd[1]: libpod-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope: Deactivated successfully.
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.405160598 +0000 UTC m=+0.134832482 container died 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:22:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a2f4bf1e6fdec8b2278831b4227c54424b6bc7a37267c46084cd436904fb2f-merged.mount: Deactivated successfully.
Oct 07 14:22:09 compute-0 podman[349139]: 2025-10-07 14:22:09.453361741 +0000 UTC m=+0.183033625 container remove 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:22:09 compute-0 systemd[1]: libpod-conmon-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope: Deactivated successfully.
Oct 07 14:22:09 compute-0 ovn_controller[151684]: 2025-10-07T14:22:09Z|00921|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct 07 14:22:09 compute-0 ovn_controller[151684]: 2025-10-07T14:22:09Z|00922|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:22:09 compute-0 nova_compute[259550]: 2025-10-07 14:22:09.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:09 compute-0 podman[349178]: 2025-10-07 14:22:09.639764515 +0000 UTC m=+0.050724851 container create ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:22:09 compute-0 systemd[1]: Started libpod-conmon-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope.
Oct 07 14:22:09 compute-0 podman[349178]: 2025-10-07 14:22:09.612646399 +0000 UTC m=+0.023606785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:09 compute-0 podman[349178]: 2025-10-07 14:22:09.740404584 +0000 UTC m=+0.151364950 container init ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:22:09 compute-0 podman[349178]: 2025-10-07 14:22:09.746266179 +0000 UTC m=+0.157226515 container start ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:22:09 compute-0 podman[349178]: 2025-10-07 14:22:09.7497096 +0000 UTC m=+0.160669956 container attach ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:22:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.1 MiB/s wr, 276 op/s
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.236 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.237 2 DEBUG nova.objects.instance [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.250 2 DEBUG nova.virt.libvirt.vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": 
"ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.251 2 DEBUG nova.network.os_vif_util [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.252 2 DEBUG nova.network.os_vif_util [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.252 2 DEBUG os_vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.255 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e1eba9e-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.260 2 INFO os_vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.328 2 DEBUG nova.compute.manager [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.329 2 DEBUG nova.compute.manager [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.329 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.330 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.330 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:10 compute-0 modest_swartz[349195]: {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     "0": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "devices": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "/dev/loop3"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             ],
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_name": "ceph_lv0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_size": "21470642176",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "name": "ceph_lv0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "tags": {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_name": "ceph",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.crush_device_class": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.encrypted": "0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_id": "0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.vdo": "0"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             },
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "vg_name": "ceph_vg0"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         }
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     ],
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     "1": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "devices": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "/dev/loop4"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             ],
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_name": "ceph_lv1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_size": "21470642176",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "name": "ceph_lv1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "tags": {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_name": "ceph",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.crush_device_class": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.encrypted": "0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_id": "1",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.vdo": "0"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             },
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "vg_name": "ceph_vg1"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         }
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     ],
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     "2": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "devices": [
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "/dev/loop5"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             ],
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_name": "ceph_lv2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_size": "21470642176",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "name": "ceph_lv2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "tags": {
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.cluster_name": "ceph",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.crush_device_class": "",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.encrypted": "0",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osd_id": "2",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:                 "ceph.vdo": "0"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             },
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "type": "block",
Oct 07 14:22:10 compute-0 modest_swartz[349195]:             "vg_name": "ceph_vg2"
Oct 07 14:22:10 compute-0 modest_swartz[349195]:         }
Oct 07 14:22:10 compute-0 modest_swartz[349195]:     ]
Oct 07 14:22:10 compute-0 modest_swartz[349195]: }
Oct 07 14:22:10 compute-0 systemd[1]: libpod-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope: Deactivated successfully.
Oct 07 14:22:10 compute-0 podman[349178]: 2025-10-07 14:22:10.564844193 +0000 UTC m=+0.975804539 container died ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.574 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Successfully updated port: 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:22:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785-merged.mount: Deactivated successfully.
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.615 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.616 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.616 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:10 compute-0 podman[349178]: 2025-10-07 14:22:10.658263811 +0000 UTC m=+1.069224147 container remove ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:22:10 compute-0 systemd[1]: libpod-conmon-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope: Deactivated successfully.
Oct 07 14:22:10 compute-0 sudo[349017]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.736 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting instance files /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.737 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deletion of /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del complete
Oct 07 14:22:10 compute-0 sudo[349233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:10 compute-0 sudo[349233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:10 compute-0 sudo[349233]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.833 2 INFO nova.scheduler.client.report [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance d932a7ab-839c-48b9-804f-90cc8634e93b
Oct 07 14:22:10 compute-0 sudo[349258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:22:10 compute-0 sudo[349258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:10 compute-0 sudo[349258]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.889 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.890 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:10 compute-0 sudo[349283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:10 compute-0 sudo[349283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:10 compute-0 sudo[349283]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:10 compute-0 sudo[349308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:22:10 compute-0 sudo[349308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:10 compute-0 nova_compute[259550]: 2025-10-07 14:22:10.993 2 DEBUG oslo_concurrency.processutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:11 compute-0 ceph-mon[74295]: pgmap v1751: 305 pgs: 305 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.1 MiB/s wr, 276 op/s
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.029 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 07 14:22:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 07 14:22:11 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.304716199 +0000 UTC m=+0.040110641 container create e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:22:11 compute-0 systemd[1]: Started libpod-conmon-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope.
Oct 07 14:22:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.288794198 +0000 UTC m=+0.024188660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.399104922 +0000 UTC m=+0.134499384 container init e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.405686446 +0000 UTC m=+0.141080898 container start e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:22:11 compute-0 xenodochial_lamport[349406]: 167 167
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.410557564 +0000 UTC m=+0.145952026 container attach e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:22:11 compute-0 systemd[1]: libpod-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope: Deactivated successfully.
Oct 07 14:22:11 compute-0 conmon[349406]: conmon e8db30969d725b4407aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope/container/memory.events
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.412398753 +0000 UTC m=+0.147793195 container died e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-462ba15222914317d75ae07f3d51a76a62d2d063584a7637d8c71609333b1291-merged.mount: Deactivated successfully.
Oct 07 14:22:11 compute-0 podman[349392]: 2025-10-07 14:22:11.447367427 +0000 UTC m=+0.182761859 container remove e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:22:11 compute-0 systemd[1]: libpod-conmon-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope: Deactivated successfully.
Oct 07 14:22:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690602146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.507 2 DEBUG oslo_concurrency.processutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.520 2 DEBUG nova.compute.provider_tree [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.542 2 DEBUG nova.scheduler.client.report [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.564 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:11 compute-0 nova_compute[259550]: 2025-10-07 14:22:11.620 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:11 compute-0 podman[349432]: 2025-10-07 14:22:11.646455006 +0000 UTC m=+0.049427987 container create 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:22:11 compute-0 systemd[1]: Started libpod-conmon-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope.
Oct 07 14:22:11 compute-0 podman[349432]: 2025-10-07 14:22:11.62541312 +0000 UTC m=+0.028386121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:22:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:11 compute-0 podman[349432]: 2025-10-07 14:22:11.748280675 +0000 UTC m=+0.151253676 container init 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 07 14:22:11 compute-0 podman[349432]: 2025-10-07 14:22:11.756057901 +0000 UTC m=+0.159030872 container start 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:22:11 compute-0 podman[349432]: 2025-10-07 14:22:11.760905109 +0000 UTC m=+0.163878100 container attach 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:22:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 373 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct 07 14:22:12 compute-0 nova_compute[259550]: 2025-10-07 14:22:12.088 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846917.0868595, 73126140-a6e8-4630-a01d-3738d29c02b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:12 compute-0 nova_compute[259550]: 2025-10-07 14:22:12.089 2 INFO nova.compute.manager [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] VM Stopped (Lifecycle Event)
Oct 07 14:22:12 compute-0 nova_compute[259550]: 2025-10-07 14:22:12.110 2 DEBUG nova.compute.manager [None req-24a2d0b3-dd5e-4790-86de-e98263ecd9db - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:12 compute-0 ceph-mon[74295]: osdmap e238: 3 total, 3 up, 3 in
Oct 07 14:22:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3690602146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:12 compute-0 ceph-mon[74295]: pgmap v1753: 305 pgs: 305 active+clean; 373 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct 07 14:22:12 compute-0 elated_rubin[349449]: {
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_id": 2,
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "type": "bluestore"
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     },
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_id": 1,
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "type": "bluestore"
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     },
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_id": 0,
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:22:12 compute-0 elated_rubin[349449]:         "type": "bluestore"
Oct 07 14:22:12 compute-0 elated_rubin[349449]:     }
Oct 07 14:22:12 compute-0 elated_rubin[349449]: }
Oct 07 14:22:12 compute-0 systemd[1]: libpod-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Deactivated successfully.
Oct 07 14:22:12 compute-0 systemd[1]: libpod-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Consumed 1.013s CPU time.
Oct 07 14:22:12 compute-0 podman[349432]: 2025-10-07 14:22:12.811147084 +0000 UTC m=+1.214120075 container died 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac-merged.mount: Deactivated successfully.
Oct 07 14:22:12 compute-0 podman[349432]: 2025-10-07 14:22:12.891451075 +0000 UTC m=+1.294424036 container remove 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:22:12 compute-0 systemd[1]: libpod-conmon-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Deactivated successfully.
Oct 07 14:22:12 compute-0 sudo[349308]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:22:12 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:22:12 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:12 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9cd0e662-f3c7-48d3-898d-1cf1607a8a03 does not exist
Oct 07 14:22:12 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 77cbe83d-c178-42eb-a7a7-c844f3661db2 does not exist
Oct 07 14:22:13 compute-0 sudo[349496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:22:13 compute-0 sudo[349496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:13 compute-0 sudo[349496]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:13 compute-0 sudo[349521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:22:13 compute-0 sudo[349521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:22:13 compute-0 sudo[349521]: pam_unix(sudo:session): session closed for user root
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.212 2 DEBUG nova.compute.manager [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-changed-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.213 2 DEBUG nova.compute.manager [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Refreshing instance network info cache due to event network-changed-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.213 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:13.282 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:13.287 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.289 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.291 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.311 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.315 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.315 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": null, "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.346 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.395 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.404 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.405 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.412 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.413 2 INFO nova.compute.claims [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.422 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance network_info: |[{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Refreshing network info cache for port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.427 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start _get_guest_xml network_info=[{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.435 2 WARNING nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.440 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.441 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.445 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.454 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.617 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:22:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 26K writes, 107K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s
                                           Cumulative WAL: 26K writes, 8981 syncs, 2.96 writes per sync, written: 0.10 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.53 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 3999 syncs, 2.57 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:22:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943841341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.938 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:22:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3943841341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 359 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.965 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:13 compute-0 nova_compute[259550]: 2025-10-07 14:22:13.969 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239424145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.102 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.109 2 DEBUG nova.compute.provider_tree [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.200 2 DEBUG nova.scheduler.client.report [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.313 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.314 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763622679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.439 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.441 2 DEBUG nova.virt.libvirt.vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:08Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.441 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.442 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.443 2 DEBUG nova.objects.instance [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.589 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <uuid>e1499379-f6d6-4edd-8af0-2ccf7c6e6683</uuid>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <name>instance-00000059</name>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-1150053982</nova:name>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:13</nova:creationTime>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <nova:port uuid="0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1">
Oct 07 14:22:14 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="serial">e1499379-f6d6-4edd-8af0-2ccf7c6e6683</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="uuid">e1499379-f6d6-4edd-8af0-2ccf7c6e6683</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk">
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config">
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:98:63:8c"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <target dev="tap0a1ae764-a8"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/console.log" append="off"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:14 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:14 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:14 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:14 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:14 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.590 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Preparing to wait for external event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.590 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG nova.virt.libvirt.vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:08Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.592 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.592 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.593 2 DEBUG os_vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a1ae764-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a1ae764-a8, col_values=(('external_ids', {'iface-id': '0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:63:8c', 'vm-uuid': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:14 compute-0 NetworkManager[44949]: <info>  [1759846934.6004] manager: (tap0a1ae764-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.606 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.608 2 INFO os_vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8')
Oct 07 14:22:14 compute-0 ovn_controller[151684]: 2025-10-07T14:22:14Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:9a:2b 10.100.0.4
Oct 07 14:22:14 compute-0 ovn_controller[151684]: 2025-10-07T14:22:14Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:9a:2b 10.100.0.4
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.717 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.781 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.782 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.782 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:98:63:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.783 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Using config drive
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.817 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:14 compute-0 nova_compute[259550]: 2025-10-07 14:22:14.823 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:14 compute-0 ceph-mon[74295]: pgmap v1754: 305 pgs: 305 active+clean; 359 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Oct 07 14:22:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2239424145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2763622679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.061 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.062 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.063 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating image(s)
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.087 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.122 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.149 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.154 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.207 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846920.0854871, d932a7ab-839c-48b9-804f-90cc8634e93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.207 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Stopped (Lifecycle Event)
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.219 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.220 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.259 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.260 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.261 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.261 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.283 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.289 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fdd84566-c63e-469a-9173-55b845d32171_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.341 2 DEBUG nova.compute.manager [None req-863a0d83-3370-41ec-8b59-b5b071dddd24 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.385 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.439 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating config drive at /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.448 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyazyw5px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.613 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyazyw5px" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.642 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.648 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.693 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fdd84566-c63e-469a-9173-55b845d32171_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.771 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image fdd84566-c63e-469a-9173-55b845d32171_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.796 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.797 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.803 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.804 2 INFO nova.compute.claims [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.825 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.825 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deleting local config drive /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config because it was imported into RBD.
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.869 2 DEBUG nova.objects.instance [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:15 compute-0 kernel: tap0a1ae764-a8: entered promiscuous mode
Oct 07 14:22:15 compute-0 NetworkManager[44949]: <info>  [1759846935.8830] manager: (tap0a1ae764-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Oct 07 14:22:15 compute-0 ovn_controller[151684]: 2025-10-07T14:22:15Z|00923|binding|INFO|Claiming lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for this chassis.
Oct 07 14:22:15 compute-0 ovn_controller[151684]: 2025-10-07T14:22:15Z|00924|binding|INFO|0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1: Claiming fa:16:3e:98:63:8c 10.100.0.7
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:15 compute-0 ovn_controller[151684]: 2025-10-07T14:22:15Z|00925|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 ovn-installed in OVS
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:15 compute-0 systemd-udevd[349870]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:22:15 compute-0 systemd-machined[214580]: New machine qemu-110-instance-00000059.
Oct 07 14:22:15 compute-0 NetworkManager[44949]: <info>  [1759846935.9298] device (tap0a1ae764-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:22:15 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Oct 07 14:22:15 compute-0 NetworkManager[44949]: <info>  [1759846935.9315] device (tap0a1ae764-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:22:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct 07 14:22:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.988 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:63:8c 10.100.0.7'], port_security=['fa:16:3e:98:63:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:15 compute-0 ovn_controller[151684]: 2025-10-07T14:22:15Z|00926|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 up in Southbound
Oct 07 14:22:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.989 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:22:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.991 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.996 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.996 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Ensure instance console log exists: /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.997 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.998 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:15 compute-0 nova_compute[259550]: 2025-10-07 14:22:15.998 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.000 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.006 2 WARNING nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.012 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.013 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.016 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28716ae1-7eef-4dd2-bab6-8d5d972916e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.024 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.058 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bac890-6edd-44a9-8ac7-6db11fdadf16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.062 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be82e7de-0023-4598-a3ad-28445f64fb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.097 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[002d510a-cbd5-4e49-9a2e-e68323a75760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[384bbcd2-b662-405c-9a47-63cc9357415e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349885, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd7952c-4481-4a3b-a64d-55e662f67dd0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349886, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349886, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.145 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.150 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.151 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.151 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279221662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.480 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.508 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.513 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.547 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updated VIF entry in instance network info cache for port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.548 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.677 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.683 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846921.718816, ad8af3d5-66d4-4db1-bd40-42d766f2fde7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.730 2 INFO nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] VM Stopped (Lifecycle Event)
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.937 2 DEBUG nova.compute.manager [None req-9a3140d1-4a2b-4127-b880-1025c224b666 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080529391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.972 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:16 compute-0 nova_compute[259550]: 2025-10-07 14:22:16.974 2 DEBUG nova.objects.instance [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.003 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <uuid>fdd84566-c63e-469a-9173-55b845d32171</uuid>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <name>instance-0000005a</name>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV247Test-server-1393680851</nova:name>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:16</nova:creationTime>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="serial">fdd84566-c63e-469a-9173-55b845d32171</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="uuid">fdd84566-c63e-469a-9173-55b845d32171</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/fdd84566-c63e-469a-9173-55b845d32171_disk">
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/fdd84566-c63e-469a-9173-55b845d32171_disk.config">
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/console.log" append="off"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:17 compute-0 ceph-mon[74295]: pgmap v1755: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct 07 14:22:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4279221662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3080529391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166670939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.168 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.173 2 DEBUG nova.compute.provider_tree [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.226 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.2255259, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.226 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Started (Lifecycle Event)
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.274 2 DEBUG nova.scheduler.client.report [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.279 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.285 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.2266014, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Paused (Lifecycle Event)
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.416 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.420 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.420 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.420 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Using config drive
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.439 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.445 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.566 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.567 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.685 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.689 2 DEBUG nova.compute.manager [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.691 2 DEBUG nova.compute.manager [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Processing event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.692 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.697 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.6964352, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.697 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Resumed (Lifecycle Event)
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.699 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.702 2 INFO nova.virt.libvirt.driver [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance spawned successfully.
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.702 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.726 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.740 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.742 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.742 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.747 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.778 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.893 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.951 2 INFO nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 9.63 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:17 compute-0 nova_compute[259550]: 2025-10-07 14:22:17.951 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct 07 14:22:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4166670939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.044 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.167 2 INFO nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 12.04 seconds to build instance.
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.183 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating config drive at /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.188 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vrtjreb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:22:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 120K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.87 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.24 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4782 syncs, 2.52 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.238 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.332 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vrtjreb" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.356 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.360 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config fdd84566-c63e-469a-9173-55b845d32171_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.422 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.425 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.427 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating image(s)
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.456 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.484 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.510 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.514 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.591 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.592 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.593 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.594 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.619 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:18 compute-0 nova_compute[259550]: 2025-10-07 14:22:18.624 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:19 compute-0 ceph-mon[74295]: pgmap v1756: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.110 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config fdd84566-c63e-469a-9173-55b845d32171_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.111 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deleting local config drive /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config because it was imported into RBD.
Oct 07 14:22:19 compute-0 systemd-machined[214580]: New machine qemu-111-instance-0000005a.
Oct 07 14:22:19 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.419 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.420 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.421 2 INFO nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Unshelving
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.541 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.542 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.550 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_requests' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.592 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'numa_topology' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.852 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.917 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.917 2 INFO nova.compute.claims [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.949 2 DEBUG nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.949 2 WARNING nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received unexpected event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with vm_state active and task_state deleting.
Oct 07 14:22:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 407 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.5 MiB/s wr, 194 op/s
Oct 07 14:22:19 compute-0 nova_compute[259550]: 2025-10-07 14:22:19.957 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.095 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.095 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.097 2 INFO nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Terminating instance
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.098 2 DEBUG nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:20 compute-0 ceph-mon[74295]: pgmap v1757: 305 pgs: 305 active+clean; 407 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.5 MiB/s wr, 194 op/s
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.283 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:20 compute-0 kernel: tap0a1ae764-a8 (unregistering): left promiscuous mode
Oct 07 14:22:20 compute-0 NetworkManager[44949]: <info>  [1759846940.3256] device (tap0a1ae764-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 ovn_controller[151684]: 2025-10-07T14:22:20Z|00927|binding|INFO|Releasing lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 from this chassis (sb_readonly=0)
Oct 07 14:22:20 compute-0 ovn_controller[151684]: 2025-10-07T14:22:20Z|00928|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 down in Southbound
Oct 07 14:22:20 compute-0 ovn_controller[151684]: 2025-10-07T14:22:20Z|00929|binding|INFO|Removing iface tap0a1ae764-a8 ovn-installed in OVS
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.372 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846940.37246, fdd84566-c63e-469a-9173-55b845d32171 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.373 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Resumed (Lifecycle Event)
Oct 07 14:22:20 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.376 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:20 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 3.738s CPU time.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.379 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:20 compute-0 systemd-machined[214580]: Machine qemu-110-instance-00000059 terminated.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.383 2 INFO nova.virt.libvirt.driver [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance spawned successfully.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.383 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:63:8c 10.100.0.7'], port_security=['fa:16:3e:98:63:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.483 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.502 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67835ee6-ec07-4898-8d18-b9ba43645b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.528 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.530 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.530 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.540 2 INFO nova.virt.libvirt.driver [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance destroyed successfully.
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.540 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be5482df-20cd-4ce9-89ca-5c30183380d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.540 2 DEBUG nova.objects.instance [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.552 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3255692e-bbf5-4cdf-8c29-21249e655076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.585 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd19856-c3ef-4e85-b932-9f63638f97bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[433f84eb-e607-462c-8978-9318011942c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350336, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6abf3529-d1f5-41ef-be75-6baff4e6d3cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350342, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350342, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.633 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.640 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.640 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.641 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.641 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.666 2 DEBUG nova.virt.libvirt.vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:18Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.666 2 DEBUG nova.network.os_vif_util [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.667 2 DEBUG nova.network.os_vif_util [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.668 2 DEBUG os_vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:20 compute-0 podman[350315]: 2025-10-07 14:22:20.6696638 +0000 UTC m=+0.094809096 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.672 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.673 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846940.3758037, fdd84566-c63e-469a-9173-55b845d32171 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.673 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Started (Lifecycle Event)
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a1ae764-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.681 2 INFO os_vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8')
Oct 07 14:22:20 compute-0 podman[350317]: 2025-10-07 14:22:20.683225538 +0000 UTC m=+0.108370854 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.707 2 DEBUG nova.objects.instance [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.740 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.740 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ensure instance console log exists: /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.741 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.741 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.742 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.743 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.748 2 WARNING nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.753 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.754 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.758 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.758 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.760 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.760 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.763 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.763 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.766 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480256532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.804 2 INFO nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 5.74 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.804 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.805 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.817 2 DEBUG nova.compute.provider_tree [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.856 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.953 2 DEBUG nova.scheduler.client.report [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:20 compute-0 nova_compute[259550]: 2025-10-07 14:22:20.986 2 INFO nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 7.61 seconds to build instance.
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.055 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.067 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.182 2 INFO nova.virt.libvirt.driver [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deleting instance files /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_del
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.183 2 INFO nova.virt.libvirt.driver [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deletion of /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_del complete
Oct 07 14:22:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1480256532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.210 2 INFO nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 07 14:22:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002588659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.266 2 INFO nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 1.17 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG oslo.service.loopingcall [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG nova.network.neutron [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.279 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.303 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.307 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981774959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.757 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.760 2 DEBUG nova.objects.instance [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.781 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <uuid>c8c2d410-01f0-4ef2-9ce3-232347c32e46</uuid>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <name>instance-0000005b</name>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV247Test-server-1207130222</nova:name>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:20</nova:creationTime>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="serial">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="uuid">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk">
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config">
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log" append="off"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:21 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:21 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:21 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:21 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:21 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.839 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.839 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.840 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Using config drive
Oct 07 14:22:21 compute-0 nova_compute[259550]: 2025-10-07 14:22:21.865 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 440 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 198 op/s
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.068 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating config drive at /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.073 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoa7vc73i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.213 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoa7vc73i" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.251 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2002588659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1981774959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:22 compute-0 ceph-mon[74295]: pgmap v1758: 305 pgs: 305 active+clean; 440 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 198 op/s
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.255 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:22.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.368 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.369 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.373 2 WARNING nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received unexpected event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with vm_state active and task_state deleting.
Oct 07 14:22:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:22:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 88K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 22K writes, 7417 syncs, 2.99 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7697 writes, 31K keys, 7697 commit groups, 1.0 writes per commit group, ingest: 36.55 MB, 0.06 MB/s
                                           Interval WAL: 7697 writes, 3027 syncs, 2.54 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.472 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.473 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting local config drive /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config because it was imported into RBD.
Oct 07 14:22:22 compute-0 systemd-machined[214580]: New machine qemu-112-instance-0000005b.
Oct 07 14:22:22 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.648 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.649 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.649 2 DEBUG nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:22:22
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'default.rgw.meta', 'vms', 'backups', 'volumes']
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.685 2 DEBUG nova.network.neutron [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.789 2 INFO nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 1.52 seconds to deallocate network for instance.
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.844 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.845 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:22:22 compute-0 nova_compute[259550]: 2025-10-07 14:22:22.961 2 DEBUG oslo_concurrency.processutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.437 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846943.436468, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.438 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Resumed (Lifecycle Event)
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.441 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.441 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.449 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance spawned successfully.
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.449 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908031063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.460 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.478 2 DEBUG oslo_concurrency.processutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.480 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.482 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.488 2 DEBUG nova.compute.provider_tree [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.491 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.491 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846943.4411333, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.491 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Started (Lifecycle Event)
Oct 07 14:22:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2908031063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.521 2 DEBUG nova.scheduler.client.report [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.527 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.530 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.561 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.568 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.574 2 INFO nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 5.15 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.574 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.594 2 INFO nova.scheduler.client.report [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance e1499379-f6d6-4edd-8af0-2ccf7c6e6683
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.645 2 INFO nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 7.88 seconds to build instance.
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.671 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.673 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:23 compute-0 nova_compute[259550]: 2025-10-07 14:22:23.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 441 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 294 op/s
Oct 07 14:22:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.302 2 DEBUG nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.323 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.324 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.325 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating image(s)
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.349 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.353 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.385 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.414 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.419 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "b50efc6a4419348da95302d442f7696307d8e85c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.420 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "b50efc6a4419348da95302d442f7696307d8e85c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-deleted-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:24 compute-0 ceph-mon[74295]: pgmap v1759: 305 pgs: 305 active+clean; 441 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 294 op/s
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.571 2 INFO nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Rebuilding instance
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.606 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.655 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.656 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning images/4a1936df-f472-466a-a569-3e6ba7a787d4@snap to None/d932a7ab-839c-48b9-804f-90cc8634e93b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.769 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "b50efc6a4419348da95302d442f7696307d8e85c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.865 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.918 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.930 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'migration_context' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.984 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:24 compute-0 nova_compute[259550]: 2025-10-07 14:22:24.994 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.038 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.051 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.080 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.092 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.104 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.461 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Image rbd:vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.462 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.463 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Ensure instance console log exists: /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.463 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.464 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.464 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.467 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start _get_guest_xml network_info=[{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:21:57Z,direct_url=<?>,disk_format='raw',id=4a1936df-f472-466a-a569-3e6ba7a787d4,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-32969053-shelved',owner='a266c4b5f8164bceb621e0e23116c515',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:22:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.472 2 WARNING nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.476 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.477 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.479 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.479 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:21:57Z,direct_url=<?>,disk_format='raw',id=4a1936df-f472-466a-a569-3e6ba7a787d4,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-32969053-shelved',owner='a266c4b5f8164bceb621e0e23116c515',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:22:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.483 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.509 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:25 compute-0 nova_compute[259550]: 2025-10-07 14:22:25.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.0 MiB/s wr, 320 op/s
Oct 07 14:22:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192574666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.069 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.096 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.101 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.377 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.379 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.403 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677105785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.588 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.593 2 DEBUG nova.virt.libvirt.vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='4a1936df-f472-466a-a569-3e6ba7a787d4',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.594 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.595 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.597 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_devices' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.617 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <uuid>d932a7ab-839c-48b9-804f-90cc8634e93b</uuid>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <name>instance-0000004e</name>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerActionsTestOtherB-server-32969053</nova:name>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:25</nova:creationTime>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:user uuid="b99e8c19767d42aa96c7d646cacc3772">tempest-ServerActionsTestOtherB-1033607333-project-member</nova:user>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:project uuid="a266c4b5f8164bceb621e0e23116c515">tempest-ServerActionsTestOtherB-1033607333</nova:project>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="4a1936df-f472-466a-a569-3e6ba7a787d4"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <nova:port uuid="6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8">
Oct 07 14:22:26 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="serial">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="uuid">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk">
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config">
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:26 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:71:12:7c"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <target dev="tap6e1eba9e-f1"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log" append="off"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:26 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:26 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:26 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:26 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:26 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.623 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Preparing to wait for external event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.625 2 DEBUG nova.virt.libvirt.vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='4a1936df-f472-466a-a569-3e6ba7a787d4',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.626 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.626 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.627 2 DEBUG os_vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e1eba9e-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e1eba9e-f1, col_values=(('external_ids', {'iface-id': '6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:12:7c', 'vm-uuid': 'd932a7ab-839c-48b9-804f-90cc8634e93b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:26 compute-0 NetworkManager[44949]: <info>  [1759846946.6366] manager: (tap6e1eba9e-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.645 2 INFO os_vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.711 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.712 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.712 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No VIF found with MAC fa:16:3e:71:12:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.713 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Using config drive
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.740 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.776 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:26 compute-0 nova_compute[259550]: 2025-10-07 14:22:26.823 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'keypairs' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:27 compute-0 ceph-mon[74295]: pgmap v1760: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.0 MiB/s wr, 320 op/s
Oct 07 14:22:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1192574666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1677105785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.128 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.129 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.156 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.237 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.238 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.249 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating config drive at /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.262 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9u2bb1p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.325 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.326 2 INFO nova.compute.claims [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.427 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9u2bb1p" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.456 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.460 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.582 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.640 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.646 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting local config drive /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config because it was imported into RBD.
Oct 07 14:22:27 compute-0 NetworkManager[44949]: <info>  [1759846947.7206] manager: (tap6e1eba9e-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct 07 14:22:27 compute-0 kernel: tap6e1eba9e-f1: entered promiscuous mode
Oct 07 14:22:27 compute-0 ovn_controller[151684]: 2025-10-07T14:22:27Z|00930|binding|INFO|Claiming lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for this chassis.
Oct 07 14:22:27 compute-0 ovn_controller[151684]: 2025-10-07T14:22:27Z|00931|binding|INFO|6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8: Claiming fa:16:3e:71:12:7c 10.100.0.3
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.735 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.738 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 bound to our chassis
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.740 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:22:27 compute-0 ovn_controller[151684]: 2025-10-07T14:22:27Z|00932|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 ovn-installed in OVS
Oct 07 14:22:27 compute-0 ovn_controller[151684]: 2025-10-07T14:22:27Z|00933|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 up in Southbound
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6aca46d-5bf9-460e-99e4-8ff435af85aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 systemd-machined[214580]: New machine qemu-113-instance-0000004e.
Oct 07 14:22:27 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000004e.
Oct 07 14:22:27 compute-0 systemd-udevd[350974]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:22:27 compute-0 NetworkManager[44949]: <info>  [1759846947.8192] device (tap6e1eba9e-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:22:27 compute-0 NetworkManager[44949]: <info>  [1759846947.8204] device (tap6e1eba9e-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.818 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccb59e4-32a9-4d75-bc03-298a3365dc10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.829 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[774a67ad-cffd-4920-97cd-ef97b2b4a802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[90b9d3f6-4198-4b13-b262-14ff9183929a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad190ab-3b11-41d1-ade0-5ca7d952f30d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350985, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[799cf8ef-ca05-42d6-9c82-da1dc022dd7e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350986, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350986, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.905 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.909 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.911 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 298 op/s
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.980 2 DEBUG nova.compute.manager [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.980 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:27 compute-0 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG nova.compute.manager [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Processing event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:28 compute-0 ceph-mon[74295]: pgmap v1761: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 298 op/s
Oct 07 14:22:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131344438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.141 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.148 2 DEBUG nova.compute.provider_tree [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.209 2 DEBUG nova.scheduler.client.report [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.285 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.286 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.364 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.365 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.432 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.498 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.510 2 DEBUG nova.policy [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.667 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.668 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.669 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating image(s)
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.698 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.728 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.762 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.769 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.819 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.76521, d932a7ab-839c-48b9-804f-90cc8634e93b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.820 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Started (Lifecycle Event)
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.826 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.830 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.836 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance spawned successfully.
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.889 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.890 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.891 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.892 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.921 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.926 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a888e66f-9992-460a-ab15-79ac0261c4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.982 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:28 compute-0 nova_compute[259550]: 2025-10-07 14:22:28.987 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.092 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.092 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.7653127, d932a7ab-839c-48b9-804f-90cc8634e93b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.092 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Paused (Lifecycle Event)
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.116 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.127 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.8298697, d932a7ab-839c-48b9-804f-90cc8634e93b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.127 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Resumed (Lifecycle Event)
Oct 07 14:22:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3131344438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.196 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.208 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.232 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 07 14:22:29 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.334 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a888e66f-9992-460a-ab15-79ac0261c4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.414 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.549 2 DEBUG nova.objects.instance [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.636 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.637 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Ensure instance console log exists: /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.638 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.639 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.639 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.721 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.843 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:29 compute-0 nova_compute[259550]: 2025-10-07 14:22:29.913 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Successfully created port: c2c79fd1-616c-4d60-86ec-7f0535cd0015 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:22:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 498 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 405 op/s
Oct 07 14:22:30 compute-0 ceph-mon[74295]: osdmap e239: 3 total, 3 up, 3 in
Oct 07 14:22:30 compute-0 ceph-mon[74295]: pgmap v1763: 305 pgs: 305 active+clean; 498 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 405 op/s
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.521 2 DEBUG nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.523 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.525 2 WARNING nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state active and task_state None.
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.956 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Successfully updated port: c2c79fd1-616c-4d60-86ec-7f0535cd0015 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.977 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.978 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:30 compute-0 nova_compute[259550]: 2025-10-07 14:22:30.978 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.198 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.259 2 DEBUG nova.compute.manager [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-changed-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.260 2 DEBUG nova.compute.manager [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Refreshing instance network info cache due to event network-changed-c2c79fd1-616c-4d60-86ec-7f0535cd0015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.261 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.656 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.657 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 07 14:22:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 07 14:22:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.735 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.956 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 489 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.4 MiB/s wr, 336 op/s
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.960 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.964 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.982 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.983 2 INFO nova.compute.claims [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.997 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.997 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance network_info: |[{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.998 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:31 compute-0 nova_compute[259550]: 2025-10-07 14:22:31.999 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Refreshing network info cache for port c2c79fd1-616c-4d60-86ec-7f0535cd0015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.005 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start _get_guest_xml network_info=[{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.010 2 WARNING nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.015 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.016 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.020 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.021 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.022 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.023 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.023 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.024 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.024 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.027 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.031 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.258 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003039508652406069 of space, bias 1.0, pg target 0.9118525957218208 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001691131283779384 of space, bias 1.0, pg target 0.5073393851338152 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:22:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403994943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.589 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.611 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.616 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:22:32 compute-0 ceph-mon[74295]: osdmap e240: 3 total, 3 up, 3 in
Oct 07 14:22:32 compute-0 ceph-mon[74295]: pgmap v1765: 305 pgs: 305 active+clean; 489 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.4 MiB/s wr, 336 op/s
Oct 07 14:22:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1403994943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:22:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866371752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.837 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.849 2 DEBUG nova.compute.provider_tree [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.870 2 DEBUG nova.scheduler.client.report [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.896 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.898 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.952 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.952 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.974 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:32 compute-0 nova_compute[259550]: 2025-10-07 14:22:32.997 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.040 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.041 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.041 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.049 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.050 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.051 2 INFO nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Terminating instance
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.052 2 DEBUG nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.095 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:33 compute-0 kernel: tapf4f9776d-fb (unregistering): left promiscuous mode
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.102 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.102 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating image(s)
Oct 07 14:22:33 compute-0 NetworkManager[44949]: <info>  [1759846953.1104] device (tapf4f9776d-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:33 compute-0 ovn_controller[151684]: 2025-10-07T14:22:33Z|00934|binding|INFO|Releasing lport f4f9776d-fbc4-4c99-a076-d6391636ed53 from this chassis (sb_readonly=0)
Oct 07 14:22:33 compute-0 ovn_controller[151684]: 2025-10-07T14:22:33Z|00935|binding|INFO|Setting lport f4f9776d-fbc4-4c99-a076-d6391636ed53 down in Southbound
Oct 07 14:22:33 compute-0 ovn_controller[151684]: 2025-10-07T14:22:33Z|00936|binding|INFO|Removing iface tapf4f9776d-fb ovn-installed in OVS
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.128 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:02:42 10.100.0.7'], port_security=['fa:16:3e:5d:02:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '152070e7-9c74-429d-b9d8-c09cbcba121e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38933899-ccd3-4eef-a7a6-a99092e32db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f4f9776d-fbc4-4c99-a076-d6391636ed53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.130 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f4f9776d-fbc4-4c99-a076-d6391636ed53 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2
Oct 07 14:22:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514063770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.149 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2de252c9-afec-47d1-8d73-2d43892c8e25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.163 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:33 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000053.scope: Consumed 16.905s CPU time.
Oct 07 14:22:33 compute-0 systemd-machined[214580]: Machine qemu-104-instance-00000053 terminated.
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.188 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71c3bec2-340a-42dc-961f-7c7a455b6ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.191 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0739a99-0154-4156-a172-49abb74cd8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.202 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.220 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39c70fe2-41f1-4a52-ba68-27aa6e827672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.240 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.240 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7cfee8-faa9-4650-a0c3-6ca96447f78f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351344, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.247 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f597ede-b785-4dad-b24b-04430bf33de0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351348, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351348, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.258 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.265 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.265 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.290 2 DEBUG nova.policy [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '505542e64c504f158e0c4a26a0a79480', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9c0c1088f744e55acc586a4d180b728', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.295 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updated VIF entry in instance network info cache for port c2c79fd1-616c-4d60-86ec-7f0535cd0015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.296 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.303 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.305 2 DEBUG nova.virt.libvirt.vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:28Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.306 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.307 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.313 2 DEBUG nova.objects.instance [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.317 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.323 2 INFO nova.virt.libvirt.driver [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance destroyed successfully.
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.324 2 DEBUG nova.objects.instance [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid 152070e7-9c74-429d-b9d8-c09cbcba121e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.331 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.331 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.332 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.332 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.354 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.365 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 059cdf38-dead-4636-8397-0037c0c4ced3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.405 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <uuid>a888e66f-9992-460a-ab15-79ac0261c4e2</uuid>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <name>instance-0000005c</name>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-1845921390</nova:name>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:32</nova:creationTime>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <nova:port uuid="c2c79fd1-616c-4d60-86ec-7f0535cd0015">
Oct 07 14:22:33 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="serial">a888e66f-9992-460a-ab15-79ac0261c4e2</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="uuid">a888e66f-9992-460a-ab15-79ac0261c4e2</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a888e66f-9992-460a-ab15-79ac0261c4e2_disk">
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config">
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8c:6c:90"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <target dev="tapc2c79fd1-61"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/console.log" append="off"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:33 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:33 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:33 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:33 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:33 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.406 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Preparing to wait for external event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.406 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.407 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.407 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.408 2 DEBUG nova.virt.libvirt.vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:28Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.408 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.409 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.409 2 DEBUG os_vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.410 2 DEBUG nova.virt.libvirt.vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1872291949',display_name='tempest-ServerActionsTestOtherB-server-1872291949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1872291949',id=83,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-3sm7005j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:20Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=152070e7-9c74-429d-b9d8-c09cbcba121e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.410 2 DEBUG nova.network.os_vif_util [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.411 2 DEBUG nova.network.os_vif_util [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.411 2 DEBUG os_vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f9776d-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c79fd1-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2c79fd1-61, col_values=(('external_ids', {'iface-id': 'c2c79fd1-616c-4d60-86ec-7f0535cd0015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:6c:90', 'vm-uuid': 'a888e66f-9992-460a-ab15-79ac0261c4e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:33 compute-0 NetworkManager[44949]: <info>  [1759846953.4277] manager: (tapc2c79fd1-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.435 2 INFO os_vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb')
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.458 2 INFO os_vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61')
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.519 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.519 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.520 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:8c:6c:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.521 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Using config drive
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.554 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3866371752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2514063770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.742 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 059cdf38-dead-4636-8397-0037c0c4ced3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.804 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] resizing rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.880 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating config drive at /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.885 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jsrov4l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.923 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Successfully created port: a0352a7f-7725-4e54-abe0-3bcd200e802d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:22:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 470 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.6 MiB/s wr, 410 op/s
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.970 2 DEBUG nova.objects.instance [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'migration_context' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.987 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.987 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Ensure instance console log exists: /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.988 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.989 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:33 compute-0 nova_compute[259550]: 2025-10-07 14:22:33.989 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.028 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jsrov4l" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.048 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.052 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.094 2 INFO nova.virt.libvirt.driver [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deleting instance files /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e_del
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.096 2 INFO nova.virt.libvirt.driver [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deletion of /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e_del complete
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.147 2 INFO nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 1.09 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.148 2 DEBUG oslo.service.loopingcall [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.148 2 DEBUG nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.149 2 DEBUG nova.network.neutron [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.216 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.217 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deleting local config drive /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config because it was imported into RBD.
Oct 07 14:22:34 compute-0 kernel: tapc2c79fd1-61: entered promiscuous mode
Oct 07 14:22:34 compute-0 systemd-udevd[351299]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:22:34 compute-0 NetworkManager[44949]: <info>  [1759846954.2659] manager: (tapc2c79fd1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct 07 14:22:34 compute-0 ovn_controller[151684]: 2025-10-07T14:22:34Z|00937|binding|INFO|Claiming lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 for this chassis.
Oct 07 14:22:34 compute-0 ovn_controller[151684]: 2025-10-07T14:22:34Z|00938|binding|INFO|c2c79fd1-616c-4d60-86ec-7f0535cd0015: Claiming fa:16:3e:8c:6c:90 10.100.0.14
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:34 compute-0 NetworkManager[44949]: <info>  [1759846954.2746] device (tapc2c79fd1-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:22:34 compute-0 NetworkManager[44949]: <info>  [1759846954.2778] device (tapc2c79fd1-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:6c:90 10.100.0.14'], port_security=['fa:16:3e:8c:6c:90 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a888e66f-9992-460a-ab15-79ac0261c4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c2c79fd1-616c-4d60-86ec-7f0535cd0015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c2c79fd1-616c-4d60-86ec-7f0535cd0015 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.284 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:22:34 compute-0 ovn_controller[151684]: 2025-10-07T14:22:34Z|00939|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 ovn-installed in OVS
Oct 07 14:22:34 compute-0 ovn_controller[151684]: 2025-10-07T14:22:34Z|00940|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 up in Southbound
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:34 compute-0 systemd-machined[214580]: New machine qemu-114-instance-0000005c.
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18aef3d8-8889-45f8-9f75-0fdcaf7616fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005c.
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.353 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77a5dd4d-5887-42bb-94ab-941d268d0e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.360 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf8a2c2-ee4b-41bf-9c87-6c359a2d5fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.393 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e02a63de-65a6-4a95-8733-c96f7244c1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73ef6cba-d307-4a1a-9c82-c3717d3897bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351575, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.428 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb4bc62-59c8-46d9-805b-9b5296d74b34]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351577, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351577, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.432 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:34 compute-0 ceph-mon[74295]: pgmap v1766: 305 pgs: 305 active+clean; 470 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.6 MiB/s wr, 410 op/s
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:34 compute-0 nova_compute[259550]: 2025-10-07 14:22:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.143 2 DEBUG nova.compute.manager [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.145 2 DEBUG nova.compute.manager [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Processing event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.217 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.217 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] No waiting events found dispatching network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] No waiting events found dispatching network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.219 2 WARNING nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received unexpected event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 for instance with vm_state active and task_state deleting.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.229 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.228769, a888e66f-9992-460a-ab15-79ac0261c4e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Started (Lifecycle Event)
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.231 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.234 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.237 2 INFO nova.virt.libvirt.driver [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance spawned successfully.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.237 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.246 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Successfully updated port: a0352a7f-7725-4e54-abe0-3bcd200e802d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.248 2 DEBUG nova.network.neutron [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.330 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.506 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.508 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.508 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquired lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.509 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.512 2 INFO nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 1.36 seconds to deallocate network for instance.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.521 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.522 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.524 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.533 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.576 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.2309647, a888e66f-9992-460a-ab15-79ac0261c4e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Paused (Lifecycle Event)
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.618 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.618 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.619 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.623 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.2333872, a888e66f-9992-460a-ab15-79ac0261c4e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.624 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Resumed (Lifecycle Event)
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.639 2 INFO nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 6.97 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.640 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.650 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.717 2 INFO nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 8.51 seconds to build instance.
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.736 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.774 2 DEBUG oslo_concurrency.processutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.858 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846940.5337255, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.859 2 INFO nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Stopped (Lifecycle Event)
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.880 2 DEBUG nova.compute.manager [None req-5ab4f7a0-8032-43f2-a1c9-d961680a4b2b - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 499 op/s
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:35 compute-0 nova_compute[259550]: 2025-10-07 14:22:35.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.087 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 07 14:22:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 07 14:22:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 07 14:22:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322852226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.309 2 DEBUG oslo_concurrency.processutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.314 2 DEBUG nova.compute.provider_tree [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.330 2 DEBUG nova.scheduler.client.report [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.357 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.380 2 INFO nova.scheduler.client.report [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance 152070e7-9c74-429d-b9d8-c09cbcba121e
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.467 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:36 compute-0 nova_compute[259550]: 2025-10-07 14:22:36.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:37 compute-0 ceph-mon[74295]: pgmap v1767: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 499 op/s
Oct 07 14:22:37 compute-0 ceph-mon[74295]: osdmap e241: 3 total, 3 up, 3 in
Oct 07 14:22:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2322852226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:37 compute-0 podman[351643]: 2025-10-07 14:22:37.083082731 +0000 UTC m=+0.072042425 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:22:37 compute-0 podman[351642]: 2025-10-07 14:22:37.10768673 +0000 UTC m=+0.097374793 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.344 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-deleted-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.345 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.347 2 WARNING nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state active and task_state None.
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-changed-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Refreshing instance network info cache due to event network-changed-a0352a7f-7725-4e54-abe0-3bcd200e802d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.524 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.544 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Releasing lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.545 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance network_info: |[{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.547 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.549 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Refreshing network info cache for port a0352a7f-7725-4e54-abe0-3bcd200e802d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.553 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start _get_guest_xml network_info=[{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.563 2 WARNING nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.568 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.569 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.582 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.583 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.584 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.585 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.586 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.587 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.587 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.588 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.588 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.589 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.589 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.590 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.590 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.591 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.596 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.639 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.641 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.641 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.642 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.643 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.646 2 INFO nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Terminating instance
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.648 2 DEBUG nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:37 compute-0 kernel: tap6e1eba9e-f1 (unregistering): left promiscuous mode
Oct 07 14:22:37 compute-0 NetworkManager[44949]: <info>  [1759846957.6990] device (tap6e1eba9e-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:37 compute-0 ovn_controller[151684]: 2025-10-07T14:22:37Z|00941|binding|INFO|Releasing lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 from this chassis (sb_readonly=0)
Oct 07 14:22:37 compute-0 ovn_controller[151684]: 2025-10-07T14:22:37Z|00942|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 down in Southbound
Oct 07 14:22:37 compute-0 ovn_controller[151684]: 2025-10-07T14:22:37Z|00943|binding|INFO|Removing iface tap6e1eba9e-f1 ovn-installed in OVS
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.714 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '9', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.715 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis
Oct 07 14:22:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.716 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebea7a9d-f576-4b9e-8316-859c29b06dc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:22:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.717 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97606e87-1708-4a38-9a63-fdf7734975bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 namespace which is not needed anymore
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:37 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 07 14:22:37 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000004e.scope: Consumed 9.835s CPU time.
Oct 07 14:22:37 compute-0 systemd-machined[214580]: Machine qemu-113-instance-0000004e terminated.
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : haproxy version is 2.8.14-c23fe91
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : path to executable is /usr/sbin/haproxy
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : Exiting Master process...
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : Exiting Master process...
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [ALERT]    (339916) : Current worker (339918) exited with code 143 (Terminated)
Oct 07 14:22:37 compute-0 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : All workers exited. Exiting... (0)
Oct 07 14:22:37 compute-0 systemd[1]: libpod-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5.scope: Deactivated successfully.
Oct 07 14:22:37 compute-0 podman[351719]: 2025-10-07 14:22:37.863819425 +0000 UTC m=+0.047792364 container died d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:22:37 compute-0 NetworkManager[44949]: <info>  [1759846957.8672] manager: (tap6e1eba9e-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.893 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.893 2 DEBUG nova.objects.instance [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5-userdata-shm.mount: Deactivated successfully.
Oct 07 14:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dcdee683237114ce949886d137839ba727ebc7082b9a6e07603460b0f5ad933-merged.mount: Deactivated successfully.
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.906 2 DEBUG nova.virt.libvirt.vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.907 2 DEBUG nova.network.os_vif_util [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.908 2 DEBUG nova.network.os_vif_util [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.909 2 DEBUG os_vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:37 compute-0 podman[351719]: 2025-10-07 14:22:37.911885805 +0000 UTC m=+0.095858744 container cleanup d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e1eba9e-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:37 compute-0 nova_compute[259550]: 2025-10-07 14:22:37.920 2 INFO os_vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')
Oct 07 14:22:37 compute-0 systemd[1]: libpod-conmon-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5.scope: Deactivated successfully.
Oct 07 14:22:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 362 op/s
Oct 07 14:22:37 compute-0 podman[351755]: 2025-10-07 14:22:37.994373444 +0000 UTC m=+0.053591177 container remove d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.999 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa5defb-596a-4fc2-934a-39d148ff0b92]: (4, ('Tue Oct  7 02:22:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 (d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5)\nd6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5\nTue Oct  7 02:22:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 (d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5)\nd6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66e30561-f818-4fb2-8c53-e38a9d5c13b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.005 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:38 compute-0 kernel: tapebea7a9d-f0: left promiscuous mode
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.031 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[328d1fd0-ed66-4ba7-981d-a23d575741bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[012189ba-0461-4794-acfc-e38ff6a547dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5378c28a-3b93-4921-a71e-c52eebe52480]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.079 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef222116-fff3-4cdd-a2ae-e3eeb511ea28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739726, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351781, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.082 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:22:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.082 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34383eb1-4033-44a0-a103-90be42f03f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:38 compute-0 systemd[1]: run-netns-ovnmeta\x2debea7a9d\x2df576\x2d4b9e\x2d8316\x2d859c29b06dc2.mount: Deactivated successfully.
Oct 07 14:22:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652490974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.110 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.129 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.132 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.391 2 INFO nova.virt.libvirt.driver [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting instance files /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.392 2 INFO nova.virt.libvirt.driver [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deletion of /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del complete
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.450 2 INFO nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG oslo.service.loopingcall [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG nova.network.neutron [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193519337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.599 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.601 2 DEBUG nova.virt.libvirt.vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=TagList,task_state='spawning',t
erminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:33Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.602 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.603 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.605 2 DEBUG nova.objects.instance [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'pci_devices' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.620 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <uuid>059cdf38-dead-4636-8397-0037c0c4ced3</uuid>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <name>instance-0000005d</name>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerTagsTestJSON-server-348220581</nova:name>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:37</nova:creationTime>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:user uuid="505542e64c504f158e0c4a26a0a79480">tempest-ServerTagsTestJSON-2004432481-project-member</nova:user>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:project uuid="f9c0c1088f744e55acc586a4d180b728">tempest-ServerTagsTestJSON-2004432481</nova:project>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <nova:port uuid="a0352a7f-7725-4e54-abe0-3bcd200e802d">
Oct 07 14:22:38 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="serial">059cdf38-dead-4636-8397-0037c0c4ced3</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="uuid">059cdf38-dead-4636-8397-0037c0c4ced3</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/059cdf38-dead-4636-8397-0037c0c4ced3_disk">
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/059cdf38-dead-4636-8397-0037c0c4ced3_disk.config">
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:38 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:c6:b3:e4"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <target dev="tapa0352a7f-77"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/console.log" append="off"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:38 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:38 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:38 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:38 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:38 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.622 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Preparing to wait for external event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.622 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.623 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.623 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.624 2 DEBUG nova.virt.libvirt.vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:33Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.625 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.626 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.626 2 DEBUG os_vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0352a7f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0352a7f-77, col_values=(('external_ids', {'iface-id': 'a0352a7f-7725-4e54-abe0-3bcd200e802d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:b3:e4', 'vm-uuid': '059cdf38-dead-4636-8397-0037c0c4ced3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 NetworkManager[44949]: <info>  [1759846958.6354] manager: (tapa0352a7f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.640 2 INFO os_vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77')
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No VIF found with MAC fa:16:3e:c6:b3:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.692 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Using config drive
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.715 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:38 compute-0 nova_compute[259550]: 2025-10-07 14:22:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:39 compute-0 ceph-mon[74295]: pgmap v1769: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 362 op/s
Oct 07 14:22:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/652490974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1193519337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.296 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.297 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.397 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating config drive at /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.402 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrw7f2id execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564616810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.452 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.544 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrw7f2id" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.567 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.572 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.612 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updated VIF entry in instance network info cache for port a0352a7f-7725-4e54-abe0-3bcd200e802d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.613 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.622 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.632 2 DEBUG nova.network.neutron [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.634 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.651 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.652 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.654 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 1.20 seconds to deallocate network for instance.
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.664 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.667 2 INFO nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Terminating instance
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.668 2 DEBUG nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.671 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.671 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.675 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.675 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.701 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.701 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:39 compute-0 kernel: tapc2c79fd1-61 (unregistering): left promiscuous mode
Oct 07 14:22:39 compute-0 NetworkManager[44949]: <info>  [1759846959.7197] device (tapc2c79fd1-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.727 2 DEBUG nova.compute.manager [req-6bba1dd7-a7f0-4c44-95e8-42569aaea07c req-76a07e91-9c9a-46a5-854f-bffaa8f117fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-deleted-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00944|binding|INFO|Releasing lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 from this chassis (sb_readonly=0)
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00945|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 down in Southbound
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00946|binding|INFO|Removing iface tapc2c79fd1-61 ovn-installed in OVS
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.737 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:6c:90 10.100.0.14'], port_security=['fa:16:3e:8c:6c:90 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a888e66f-9992-460a-ab15-79ac0261c4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c2c79fd1-616c-4d60-86ec-7f0535cd0015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.739 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c2c79fd1-616c-4d60-86ec-7f0535cd0015 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.741 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.761 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.762 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deleting local config drive /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config because it was imported into RBD.
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[568087b7-c094-4391-966e-9c731e07aa92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 07 14:22:39 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Consumed 5.144s CPU time.
Oct 07 14:22:39 compute-0 systemd-machined[214580]: Machine qemu-114-instance-0000005c terminated.
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.803 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a67bf6-6856-4682-b6bd-5a04a5b1a664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.807 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc160d2-87bd-4643-852a-557ab25a0857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.820 2 DEBUG oslo_concurrency.processutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:39 compute-0 kernel: tapa0352a7f-77: entered promiscuous mode
Oct 07 14:22:39 compute-0 NetworkManager[44949]: <info>  [1759846959.8383] manager: (tapa0352a7f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 07 14:22:39 compute-0 systemd-udevd[351680]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00947|binding|INFO|Claiming lport a0352a7f-7725-4e54-abe0-3bcd200e802d for this chassis.
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.843 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f5ba6d-7ce4-48f6-96b8-b8606bf81898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00948|binding|INFO|a0352a7f-7725-4e54-abe0-3bcd200e802d: Claiming fa:16:3e:c6:b3:e4 10.100.0.10
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.852 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:b3:e4 10.100.0.10'], port_security=['fa:16:3e:c6:b3:e4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '059cdf38-dead-4636-8397-0037c0c4ced3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9c0c1088f744e55acc586a4d180b728', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64310848-1d82-4b02-993c-52bf19515ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d26f8f35-607a-45a2-b669-669744490ffb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a0352a7f-7725-4e54-abe0-3bcd200e802d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:39 compute-0 NetworkManager[44949]: <info>  [1759846959.8596] device (tapa0352a7f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:22:39 compute-0 NetworkManager[44949]: <info>  [1759846959.8603] device (tapa0352a7f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.869 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a43c0-87ce-4845-ae40-888b76b1631a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351931, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00949|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d ovn-installed in OVS
Oct 07 14:22:39 compute-0 ovn_controller[151684]: 2025-10-07T14:22:39Z|00950|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d up in Southbound
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82f724a4-920e-4252-aea8-adb64c590517]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351936, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351936, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 systemd-machined[214580]: New machine qemu-115-instance-0000005d.
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.892 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:39 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005d.
Oct 07 14:22:39 compute-0 NetworkManager[44949]: <info>  [1759846959.9102] manager: (tapc2c79fd1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.920 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.920 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.921 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.921 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.923 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a0352a7f-7725-4e54-abe0-3bcd200e802d in datapath c1cdabcd-9e01-4a29-bf25-66798546aca6 unbound from our chassis
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.924 2 INFO nova.virt.libvirt.driver [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance destroyed successfully.
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.924 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1cdabcd-9e01-4a29-bf25-66798546aca6
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.924 2 DEBUG nova.objects.instance [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d56a4cd8-3dd1-4691-a13d-744e391d1909]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.938 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1cdabcd-91 in ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.941 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1cdabcd-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3561cf-19ce-450b-ac6b-7867a95fc835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f28f3156-501b-47a1-a6e4-3fc810f2a46b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.958 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6e58799d-0a6e-4eb6-bad5-ed424b10d8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 371 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 9.4 MiB/s wr, 481 op/s
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.983 2 DEBUG nova.virt.libvirt.vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:35Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.984 2 DEBUG nova.network.os_vif_util [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.985 2 DEBUG nova.network.os_vif_util [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.986 2 DEBUG os_vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4431661d-4ced-4c58-8560-60dc47a7ff0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c79fd1-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:39 compute-0 nova_compute[259550]: 2025-10-07 14:22:39.997 2 INFO os_vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61')
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.022 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[23c1e284-ae9d-456a-9153-4fa76c76d334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.028 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15f88588-a27a-4e79-be41-fd536d507814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 NetworkManager[44949]: <info>  [1759846960.0297] manager: (tapc1cdabcd-90): new Veth device (/org/freedesktop/NetworkManager/Devices/393)
Oct 07 14:22:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3564616810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.065 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c7eaf2d8-91b8-4679-9242-01e96463d05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.069 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d9d24d-9810-4ddf-b006-fc154f9e5e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 NetworkManager[44949]: <info>  [1759846960.1020] device (tapc1cdabcd-90): carrier: link connected
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.109 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[401a8d05-b48c-4221-ba64-417f944fdc62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.136 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b534447-ab04-48b2-b97d-55d562d6dc86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1cdabcd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:c6:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754366, 'reachable_time': 23769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352016, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.159 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e0270f-da23-4fea-b371-8968fc855848]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:c658'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 754366, 'tstamp': 754366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352017, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc399664-7ded-45d6-8aa9-252c72141b54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1cdabcd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:c6:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754366, 'reachable_time': 23769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352018, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.207 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.209 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3014MB free_disk=59.766944885253906GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.221 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[968b1a4e-8cdb-47e2-86d3-7b2b6481d626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712830154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47d90804-f6c8-42c2-b71f-36e6d89d01a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.310 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1cdabcd-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1cdabcd-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.311 2 DEBUG oslo_concurrency.processutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:40 compute-0 NetworkManager[44949]: <info>  [1759846960.3145] manager: (tapc1cdabcd-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 07 14:22:40 compute-0 kernel: tapc1cdabcd-90: entered promiscuous mode
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1cdabcd-90, col_values=(('external_ids', {'iface-id': '9159867a-1370-4ad3-9916-8e3a37e797ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:40 compute-0 ovn_controller[151684]: 2025-10-07T14:22:40Z|00951|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.322 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.323 2 DEBUG nova.compute.provider_tree [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.327 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[709ed8a6-7481-48be-864d-35569d755cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.328 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c1cdabcd-9e01-4a29-bf25-66798546aca6
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c1cdabcd-9e01-4a29-bf25-66798546aca6
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:22:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.329 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'env', 'PROCESS_TAG=haproxy-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1cdabcd-9e01-4a29-bf25-66798546aca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.488 2 INFO nova.virt.libvirt.driver [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deleting instance files /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2_del
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.489 2 INFO nova.virt.libvirt.driver [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deletion of /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2_del complete
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.572 2 DEBUG nova.scheduler.client.report [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.622 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.624 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.660 2 INFO nova.scheduler.client.report [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance d932a7ab-839c-48b9-804f-90cc8634e93b
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.669 2 INFO nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.670 2 DEBUG oslo.service.loopingcall [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.671 2 DEBUG nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.671 2 DEBUG nova.network.neutron [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:40 compute-0 podman[352095]: 2025-10-07 14:22:40.744101464 +0000 UTC m=+0.044789554 container create 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 2f0de516-cf33-49b6-b036-aee8c2f72943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fdd84566-c63e-469a-9173-55b845d32171 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c8c2d410-01f0-4ef2-9ce3-232347c32e46 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a888e66f-9992-460a-ab15-79ac0261c4e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 059cdf38-dead-4636-8397-0037c0c4ced3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:22:40 compute-0 systemd[1]: Started libpod-conmon-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope.
Oct 07 14:22:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b51d9d24f5f6a74d05b637a8d047f3a70db0e177c0c965ac9cfc29d076662a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:22:40 compute-0 podman[352095]: 2025-10-07 14:22:40.720660814 +0000 UTC m=+0.021348924 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.819 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:40 compute-0 podman[352095]: 2025-10-07 14:22:40.830120056 +0000 UTC m=+0.130808166 container init 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:22:40 compute-0 podman[352095]: 2025-10-07 14:22:40.836508515 +0000 UTC m=+0.137196605 container start 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.857 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:40 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : New worker (352116) forked
Oct 07 14:22:40 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : Loading success.
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.967 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846960.967343, 059cdf38-dead-4636-8397-0037c0c4ced3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Started (Lifecycle Event)
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.989 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.995 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846960.9674647, 059cdf38-dead-4636-8397-0037c0c4ced3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:40 compute-0 nova_compute[259550]: 2025-10-07 14:22:40.995 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Paused (Lifecycle Event)
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.013 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.016 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.032 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:41 compute-0 ceph-mon[74295]: pgmap v1770: 305 pgs: 305 active+clean; 371 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 9.4 MiB/s wr, 481 op/s
Oct 07 14:22:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/712830154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.266 2 DEBUG nova.network.neutron [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.283 2 INFO nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 0.61 seconds to deallocate network for instance.
Oct 07 14:22:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714819094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.334 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.345 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.349 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.366 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.389 2 DEBUG nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.389 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.390 2 WARNING nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state deleted and task_state None.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.494 2 DEBUG oslo_concurrency.processutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.803 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.807 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state deleted and task_state None.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.808 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.808 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state deleted and task_state None.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.808 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Processing event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.811 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received unexpected event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with vm_state building and task_state spawning.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.811 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-deleted-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.812 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.840 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846961.821746, 059cdf38-dead-4636-8397-0037c0c4ced3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.840 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Resumed (Lifecycle Event)
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.843 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.846 2 INFO nova.virt.libvirt.driver [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance spawned successfully.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.847 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.871 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.875 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.875 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.876 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.876 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.877 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.877 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.900 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.951 2 INFO nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 8.86 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.952 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 368 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.0 MiB/s wr, 465 op/s
Oct 07 14:22:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553049972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.987 2 DEBUG oslo_concurrency.processutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:41 compute-0 nova_compute[259550]: 2025-10-07 14:22:41.992 2 DEBUG nova.compute.provider_tree [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.008 2 DEBUG nova.scheduler.client.report [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.012 2 INFO nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 10.08 seconds to build instance.
Oct 07 14:22:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3714819094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2553049972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.059 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.063 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.100 2 INFO nova.scheduler.client.report [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance a888e66f-9992-460a-ab15-79ac0261c4e2
Oct 07 14:22:42 compute-0 nova_compute[259550]: 2025-10-07 14:22:42.172 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:43 compute-0 ceph-mon[74295]: pgmap v1771: 305 pgs: 305 active+clean; 368 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.0 MiB/s wr, 465 op/s
Oct 07 14:22:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 389 op/s
Oct 07 14:22:44 compute-0 nova_compute[259550]: 2025-10-07 14:22:44.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:44 compute-0 nova_compute[259550]: 2025-10-07 14:22:44.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:44 compute-0 ovn_controller[151684]: 2025-10-07T14:22:44Z|00952|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct 07 14:22:44 compute-0 ovn_controller[151684]: 2025-10-07T14:22:44Z|00953|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:22:44 compute-0 nova_compute[259550]: 2025-10-07 14:22:44.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:44 compute-0 ovn_controller[151684]: 2025-10-07T14:22:44Z|00954|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct 07 14:22:44 compute-0 ovn_controller[151684]: 2025-10-07T14:22:44Z|00955|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:22:44 compute-0 nova_compute[259550]: 2025-10-07 14:22:44.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:44 compute-0 nova_compute[259550]: 2025-10-07 14:22:44.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:45 compute-0 ceph-mon[74295]: pgmap v1772: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 389 op/s
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.472 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.472 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.495 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.558 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.559 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.565 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.565 2 INFO nova.compute.claims [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:45 compute-0 nova_compute[259550]: 2025-10-07 14:22:45.767 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 338 op/s
Oct 07 14:22:46 compute-0 ceph-mon[74295]: pgmap v1773: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 338 op/s
Oct 07 14:22:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270741428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.194 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.200 2 DEBUG nova.compute.provider_tree [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.230 2 DEBUG nova.scheduler.client.report [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.257 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.258 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.311 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.312 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.335 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.360 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.455 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.456 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.457 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating image(s)
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.476 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.503 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.525 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.531 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.573 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.610 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.610 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.611 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.611 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.639 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.644 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.692 2 DEBUG nova.policy [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.991 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.992 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.995 2 INFO nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Terminating instance
Oct 07 14:22:46 compute-0 nova_compute[259550]: 2025-10-07 14:22:46.997 2 DEBUG nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:47 compute-0 kernel: tapa0352a7f-77 (unregistering): left promiscuous mode
Oct 07 14:22:47 compute-0 NetworkManager[44949]: <info>  [1759846967.0973] device (tapa0352a7f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:22:47 compute-0 ovn_controller[151684]: 2025-10-07T14:22:47Z|00956|binding|INFO|Releasing lport a0352a7f-7725-4e54-abe0-3bcd200e802d from this chassis (sb_readonly=0)
Oct 07 14:22:47 compute-0 ovn_controller[151684]: 2025-10-07T14:22:47Z|00957|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d down in Southbound
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 ovn_controller[151684]: 2025-10-07T14:22:47Z|00958|binding|INFO|Removing iface tapa0352a7f-77 ovn-installed in OVS
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.116 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:b3:e4 10.100.0.10'], port_security=['fa:16:3e:c6:b3:e4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '059cdf38-dead-4636-8397-0037c0c4ced3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9c0c1088f744e55acc586a4d180b728', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64310848-1d82-4b02-993c-52bf19515ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d26f8f35-607a-45a2-b669-669744490ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a0352a7f-7725-4e54-abe0-3bcd200e802d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a0352a7f-7725-4e54-abe0-3bcd200e802d in datapath c1cdabcd-9e01-4a29-bf25-66798546aca6 unbound from our chassis
Oct 07 14:22:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.119 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1cdabcd-9e01-4a29-bf25-66798546aca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:22:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3d530b-581f-4334-a231-7b98942b65d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.120 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 namespace which is not needed anymore
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 07 14:22:47 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Consumed 6.204s CPU time.
Oct 07 14:22:47 compute-0 systemd-machined[214580]: Machine qemu-115-instance-0000005d terminated.
Oct 07 14:22:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2270741428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.239 2 INFO nova.virt.libvirt.driver [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance destroyed successfully.
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.240 2 DEBUG nova.objects.instance [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'resources' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : haproxy version is 2.8.14-c23fe91
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : path to executable is /usr/sbin/haproxy
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : Exiting Master process...
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : Exiting Master process...
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [ALERT]    (352114) : Current worker (352116) exited with code 143 (Terminated)
Oct 07 14:22:47 compute-0 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : All workers exited. Exiting... (0)
Oct 07 14:22:47 compute-0 systemd[1]: libpod-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope: Deactivated successfully.
Oct 07 14:22:47 compute-0 podman[352311]: 2025-10-07 14:22:47.296883176 +0000 UTC m=+0.064320471 container died 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.369 2 DEBUG nova.virt.libvirt.vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-Se
rverTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:41Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.370 2 DEBUG nova.network.os_vif_util [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.371 2 DEBUG nova.network.os_vif_util [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.371 2 DEBUG os_vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0352a7f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.379 2 INFO os_vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77')
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.435 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Successfully created port: 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.482 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.484 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.484 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c-userdata-shm.mount: Deactivated successfully.
Oct 07 14:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-98b51d9d24f5f6a74d05b637a8d047f3a70db0e177c0c965ac9cfc29d076662a-merged.mount: Deactivated successfully.
Oct 07 14:22:47 compute-0 podman[352311]: 2025-10-07 14:22:47.859535639 +0000 UTC m=+0.626972924 container cleanup 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:22:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 289 op/s
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:47 compute-0 nova_compute[259550]: 2025-10-07 14:22:47.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:22:48 compute-0 podman[352364]: 2025-10-07 14:22:48.20556973 +0000 UTC m=+0.314415557 container remove 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.214 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfd7629-9967-49c5-96f0-ef726beb8547]: (4, ('Tue Oct  7 02:22:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 (87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c)\n87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c\nTue Oct  7 02:22:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 (87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c)\n87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.215 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d173e469-002c-4894-831a-e010b992a74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.216 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1cdabcd-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:48 compute-0 kernel: tapc1cdabcd-90: left promiscuous mode
Oct 07 14:22:48 compute-0 systemd[1]: libpod-conmon-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope: Deactivated successfully.
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.240 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57f0102d-de10-41cd-bc43-8d8d95575b2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.280 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80ed9070-1780-404c-aa7b-b2e3b6b90a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.281 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9d3994-5aa0-4409-83ea-03f434ecaa47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.301 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f96a1-a3ce-4d33-9558-6085f3e86d78]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754358, 'reachable_time': 27467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352386, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dc1cdabcd\x2d9e01\x2d4a29\x2dbf25\x2d66798546aca6.mount: Deactivated successfully.
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.306 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:22:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.307 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e570ea9c-87a3-478f-b5ac-53ffd3471e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.314 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Successfully updated port: 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.318 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846953.2941175, 152070e7-9c74-429d-b9d8-c09cbcba121e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.318 2 INFO nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] VM Stopped (Lifecycle Event)
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.350 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.350 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.351 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.352 2 DEBUG nova.compute.manager [None req-5e3a8669-68b4-46b1-b800-6853889bda6c - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.359 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:48 compute-0 ceph-mon[74295]: pgmap v1774: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 289 op/s
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.657 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:48 compute-0 nova_compute[259550]: 2025-10-07 14:22:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.179 2 DEBUG nova.objects.instance [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.198 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.198 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Ensure instance console log exists: /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:49 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 07 14:22:49 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 14.905s CPU time.
Oct 07 14:22:49 compute-0 systemd-machined[214580]: Machine qemu-112-instance-0000005b terminated.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.599 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.621 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.622 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance network_info: |[{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.624 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start _get_guest_xml network_info=[{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.630 2 WARNING nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.635 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.635 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.639 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.639 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.640 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.643 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.643 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.645 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.685 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.685 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 WARNING nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received unexpected event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with vm_state active and task_state deleting.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-changed-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Refreshing instance network info cache due to event network-changed-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Refreshing network info cache for port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.694 2 INFO nova.virt.libvirt.driver [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deleting instance files /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3_del
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.695 2 INFO nova.virt.libvirt.driver [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deletion of /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3_del complete
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.699 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance shutdown successfully after 24 seconds.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.705 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.710 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.769 2 INFO nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 2.77 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG oslo.service.loopingcall [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:49 compute-0 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG nova.network.neutron [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 351 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 303 op/s
Oct 07 14:22:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966010511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.135 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting instance files /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.136 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deletion of /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del complete
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.139 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.158 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.161 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.284 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.285 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating image(s)
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.310 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.333 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.353 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.357 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.426 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.427 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.428 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.428 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.447 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.450 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561040190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.645 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.646 2 DEBUG nova.virt.libvirt.vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:46Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.647 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.647 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.648 2 DEBUG nova.objects.instance [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.680 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <uuid>c53d76d2-4525-4798-bcf4-ad2e6b18071a</uuid>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <name>instance-0000005e</name>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-316574553</nova:name>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:49</nova:creationTime>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <nova:port uuid="132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d">
Oct 07 14:22:50 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="serial">c53d76d2-4525-4798-bcf4-ad2e6b18071a</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="uuid">c53d76d2-4525-4798-bcf4-ad2e6b18071a</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk">
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config">
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:50 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:06:7c:47"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <target dev="tap132f4c57-4e"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/console.log" append="off"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:50 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:50 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:50 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:50 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:50 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.682 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Preparing to wait for external event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.682 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG nova.virt.libvirt.vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:46Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.684 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.685 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.685 2 DEBUG os_vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132f4c57-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132f4c57-4e, col_values=(('external_ids', {'iface-id': '132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:7c:47', 'vm-uuid': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:50 compute-0 NetworkManager[44949]: <info>  [1759846970.6956] manager: (tap132f4c57-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.700 2 INFO os_vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e')
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.758 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:50 compute-0 podman[352633]: 2025-10-07 14:22:50.817871938 +0000 UTC m=+0.082883250 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:22:50 compute-0 podman[352634]: 2025-10-07 14:22:50.826518547 +0000 UTC m=+0.089658619 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.838 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.870 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.870 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.871 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:06:7c:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.871 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Using config drive
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.891 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.899 2 DEBUG nova.network.neutron [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.950 2 INFO nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 1.18 seconds to deallocate network for instance.
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.960 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.960 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ensure instance console log exists: /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.962 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.966 2 WARNING nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.971 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.971 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.976 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:22:50 compute-0 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.000 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:51 compute-0 ceph-mon[74295]: pgmap v1775: 305 pgs: 305 active+clean; 351 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 303 op/s
Oct 07 14:22:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3966010511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/561040190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.036 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.037 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.154 2 DEBUG oslo_concurrency.processutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.297 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating config drive at /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.302 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6cbyll5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.341 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updated VIF entry in instance network info cache for port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.342 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.359 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349800485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.449 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6cbyll5" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.473 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.478 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.524 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.554 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.560 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251524762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.644 2 DEBUG oslo_concurrency.processutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.651 2 DEBUG nova.compute.provider_tree [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.664 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.665 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deleting local config drive /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config because it was imported into RBD.
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.668 2 DEBUG nova.scheduler.client.report [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.715 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:51 compute-0 NetworkManager[44949]: <info>  [1759846971.7216] manager: (tap132f4c57-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Oct 07 14:22:51 compute-0 kernel: tap132f4c57-4e: entered promiscuous mode
Oct 07 14:22:51 compute-0 ovn_controller[151684]: 2025-10-07T14:22:51Z|00959|binding|INFO|Claiming lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for this chassis.
Oct 07 14:22:51 compute-0 ovn_controller[151684]: 2025-10-07T14:22:51Z|00960|binding|INFO|132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d: Claiming fa:16:3e:06:7c:47 10.100.0.14
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.739 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7c:47 10.100.0.14'], port_security=['fa:16:3e:06:7c:47 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.740 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.741 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:22:51 compute-0 ovn_controller[151684]: 2025-10-07T14:22:51Z|00961|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d ovn-installed in OVS
Oct 07 14:22:51 compute-0 ovn_controller[151684]: 2025-10-07T14:22:51Z|00962|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d up in Southbound
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.757 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a417c338-a782-4c80-952a-636d83bdb051]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:51 compute-0 systemd-udevd[352901]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.763 2 INFO nova.scheduler.client.report [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Deleted allocations for instance 059cdf38-dead-4636-8397-0037c0c4ced3
Oct 07 14:22:51 compute-0 systemd-machined[214580]: New machine qemu-116-instance-0000005e.
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.766 2 DEBUG nova.compute.manager [req-b792fbff-c155-420e-a16e-316e0cdb6ea2 req-398f8e71-2887-4b70-bd15-28f62bc24bd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-deleted-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:51 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Oct 07 14:22:51 compute-0 NetworkManager[44949]: <info>  [1759846971.7774] device (tap132f4c57-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:22:51 compute-0 NetworkManager[44949]: <info>  [1759846971.7787] device (tap132f4c57-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.794 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2be6097c-0e03-4b23-8c19-9dae7eb87cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.798 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ee509d83-ec03-4911-b211-b7df10a1b077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.828 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11f64f-5bf9-4095-9495-c03b100a7562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9799cbfa-3991-4caf-a0c7-3b68c43cb411]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352913, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.884 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5eccecd4-eb75-4e1e-b33e-c53f96962bf3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352915, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352915, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.886 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:51 compute-0 nova_compute[259550]: 2025-10-07 14:22:51.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:22:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.891 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:22:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 323 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 190 op/s
Oct 07 14:22:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1349800485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2251524762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:22:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875037884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.060 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.063 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <uuid>c8c2d410-01f0-4ef2-9ce3-232347c32e46</uuid>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <name>instance-0000005b</name>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV247Test-server-1207130222</nova:name>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:22:50</nova:creationTime>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <system>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="serial">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="uuid">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </system>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <os>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </os>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <features>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </features>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk">
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config">
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </source>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:22:52 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log" append="off"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <video>
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </video>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:22:52 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:22:52 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:22:52 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:22:52 compute-0 nova_compute[259550]: </domain>
Oct 07 14:22:52 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.116 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.116 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.117 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Using config drive
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.136 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.153 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.183 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'keypairs' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.379 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating config drive at /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.384 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15dq76l1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.526 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15dq76l1" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.552 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.558 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.725 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.727 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting local config drive /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config because it was imported into RBD.
Oct 07 14:22:52 compute-0 systemd-machined[214580]: New machine qemu-117-instance-0000005b.
Oct 07 14:22:52 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005b.
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.882 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846957.8796117, d932a7ab-839c-48b9-804f-90cc8634e93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.882 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Stopped (Lifecycle Event)
Oct 07 14:22:52 compute-0 nova_compute[259550]: 2025-10-07 14:22:52.902 2 DEBUG nova.compute.manager [None req-eac5e2c3-4f96-48ae-8731-97ff3a5fbf96 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.019 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.0187206, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.020 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Started (Lifecycle Event)
Oct 07 14:22:53 compute-0 ceph-mon[74295]: pgmap v1776: 305 pgs: 305 active+clean; 323 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 190 op/s
Oct 07 14:22:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2875037884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.038 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.043 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.020317, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.043 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Paused (Lifecycle Event)
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.059 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.064 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.993 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for c8c2d410-01f0-4ef2-9ce3-232347c32e46 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.994 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.9932034, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.995 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Resumed (Lifecycle Event)
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.998 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:53 compute-0 nova_compute[259550]: 2025-10-07 14:22:53.999 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.003 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance spawned successfully.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.004 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.025 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.036 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.043 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.044 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.045 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.045 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.046 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.046 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.080 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.997373, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Started (Lifecycle Event)
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.112 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.116 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.123 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.146 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.181 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.182 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.182 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.264 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.567 2 DEBUG nova.compute.manager [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG nova.compute.manager [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Processing event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.569 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846974.5724573, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.586 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Resumed (Lifecycle Event)
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.591 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.595 2 INFO nova.virt.libvirt.driver [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance spawned successfully.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.595 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.618 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.621 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.622 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.622 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.647 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.672 2 INFO nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 8.22 seconds to spawn the instance on the hypervisor.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.673 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.730 2 INFO nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 9.19 seconds to build instance.
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.755 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.922 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846959.9219847, a888e66f-9992-460a-ab15-79ac0261c4e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.923 2 INFO nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Stopped (Lifecycle Event)
Oct 07 14:22:54 compute-0 nova_compute[259550]: 2025-10-07 14:22:54.945 2 DEBUG nova.compute.manager [None req-46139455-f0f1-4bba-96d1-b0df361f91ac - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:22:55 compute-0 ceph-mon[74295]: pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 07 14:22:55 compute-0 nova_compute[259550]: 2025-10-07 14:22:55.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:55 compute-0 ovn_controller[151684]: 2025-10-07T14:22:55Z|00963|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:22:55 compute-0 nova_compute[259550]: 2025-10-07 14:22:55.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 07 14:22:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.335 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.337 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.338 2 INFO nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Terminating instance
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.338 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.338 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquired lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.339 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.500 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.682 2 DEBUG nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.682 2 WARNING nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received unexpected event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with vm_state active and task_state None.
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.833 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.859 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Releasing lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:22:56 compute-0 nova_compute[259550]: 2025-10-07 14:22:56.859 2 DEBUG nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:22:56 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 07 14:22:56 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Consumed 3.990s CPU time.
Oct 07 14:22:56 compute-0 systemd-machined[214580]: Machine qemu-117-instance-0000005b terminated.
Oct 07 14:22:57 compute-0 ceph-mon[74295]: pgmap v1778: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.077 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.078 2 DEBUG nova.objects.instance [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.493 2 INFO nova.virt.libvirt.driver [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting instance files /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.493 2 INFO nova.virt.libvirt.driver [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deletion of /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del complete
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.582 2 INFO nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.582 2 DEBUG oslo.service.loopingcall [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.583 2 DEBUG nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.583 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.723 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.737 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.751 2 INFO nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 0.17 seconds to deallocate network for instance.
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.792 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.792 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:57 compute-0 nova_compute[259550]: 2025-10-07 14:22:57.906 2 DEBUG oslo_concurrency.processutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 3.6 MiB/s wr, 161 op/s
Oct 07 14:22:58 compute-0 ceph-mon[74295]: pgmap v1779: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 3.6 MiB/s wr, 161 op/s
Oct 07 14:22:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745086220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.388 2 DEBUG oslo_concurrency.processutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.395 2 DEBUG nova.compute.provider_tree [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.409 2 DEBUG nova.scheduler.client.report [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.429 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.471 2 INFO nova.scheduler.client.report [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Deleted allocations for instance c8c2d410-01f0-4ef2-9ce3-232347c32e46
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.520 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.652 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.653 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.674 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.754 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.755 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.762 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.763 2 INFO nova.compute.claims [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:22:58 compute-0 nova_compute[259550]: 2025-10-07 14:22:58.924 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3745086220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:22:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711068996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.367 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.373 2 DEBUG nova.compute.provider_tree [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.396 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.396 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.399 2 INFO nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Terminating instance
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.400 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.400 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquired lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.401 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.403 2 DEBUG nova.scheduler.client.report [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.428 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.429 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.502 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.503 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.539 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.566 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.664 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.665 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.666 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating image(s)
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.690 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.717 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.741 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.746 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.789 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.831 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.832 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.833 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.833 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.855 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.859 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:22:59 compute-0 nova_compute[259550]: 2025-10-07 14:22:59.905 2 DEBUG nova.policy [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:22:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 267 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 247 op/s
Oct 07 14:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.054 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.054 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.098 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.113 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Releasing lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.113 2 DEBUG nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:23:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3711068996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:00 compute-0 ceph-mon[74295]: pgmap v1780: 305 pgs: 305 active+clean; 267 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 247 op/s
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.183 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:00 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 07 14:23:00 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 14.098s CPU time.
Oct 07 14:23:00 compute-0 systemd-machined[214580]: Machine qemu-111-instance-0000005a terminated.
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.261 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.356 2 DEBUG nova.objects.instance [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.358 2 INFO nova.virt.libvirt.driver [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance destroyed successfully.
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.358 2 DEBUG nova.objects.instance [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.375 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.376 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Ensure instance console log exists: /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.472 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Successfully created port: 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.796 2 INFO nova.virt.libvirt.driver [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deleting instance files /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171_del
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.796 2 INFO nova.virt.libvirt.driver [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deletion of /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171_del complete
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.864 2 INFO nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.865 2 DEBUG oslo.service.loopingcall [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.866 2 DEBUG nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:23:00 compute-0 nova_compute[259550]: 2025-10-07 14:23:00.866 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:23:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.437 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.522 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.655 2 INFO nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 0.79 seconds to deallocate network for instance.
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.840 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.841 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:01 compute-0 nova_compute[259550]: 2025-10-07 14:23:01.950 2 DEBUG oslo_concurrency.processutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 263 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 283 op/s
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.109 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Successfully updated port: 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.136 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.137 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.137 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.240 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846967.2391968, 059cdf38-dead-4636-8397-0037c0c4ced3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.241 2 INFO nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Stopped (Lifecycle Event)
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.267 2 DEBUG nova.compute.manager [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-changed-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.267 2 DEBUG nova.compute.manager [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Refreshing instance network info cache due to event network-changed-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.268 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.315 2 DEBUG nova.compute.manager [None req-e8ca5208-0b03-45bb-b481-9828779556a4 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030206704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.400 2 DEBUG oslo_concurrency.processutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.407 2 DEBUG nova.compute.provider_tree [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.537 2 DEBUG nova.scheduler.client.report [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.601 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.603 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.648 2 INFO nova.scheduler.client.report [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Deleted allocations for instance fdd84566-c63e-469a-9173-55b845d32171
Oct 07 14:23:02 compute-0 nova_compute[259550]: 2025-10-07 14:23:02.746 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:03 compute-0 ceph-mon[74295]: pgmap v1781: 305 pgs: 305 active+clean; 263 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 283 op/s
Oct 07 14:23:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3030206704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.689 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.919 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.920 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance network_info: |[{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.921 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.921 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Refreshing network info cache for port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.925 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start _get_guest_xml network_info=[{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.931 2 WARNING nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.935 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.936 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.940 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.940 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.944 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:23:03 compute-0 nova_compute[259550]: 2025-10-07 14:23:03.947 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 241 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 284 op/s
Oct 07 14:23:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473774807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.412 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.433 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.437 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687310136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.898 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.900 2 DEBUG nova.virt.libvirt.vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:59Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.901 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.902 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.903 2 DEBUG nova.objects.instance [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.925 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <uuid>569d484f-0c0f-4b2c-aca5-e88654f9501f</uuid>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <name>instance-0000005f</name>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-316574553</nova:name>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:23:03</nova:creationTime>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <nova:port uuid="03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd">
Oct 07 14:23:04 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <system>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="serial">569d484f-0c0f-4b2c-aca5-e88654f9501f</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="uuid">569d484f-0c0f-4b2c-aca5-e88654f9501f</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </system>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <os>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </os>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <features>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </features>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/569d484f-0c0f-4b2c-aca5-e88654f9501f_disk">
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config">
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:04 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:de:12:c0"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <target dev="tap03cd7a1d-2b"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/console.log" append="off"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <video>
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </video>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:23:04 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:23:04 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:23:04 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:23:04 compute-0 nova_compute[259550]: </domain>
Oct 07 14:23:04 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.927 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Preparing to wait for external event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.928 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.928 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.929 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.930 2 DEBUG nova.virt.libvirt.vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:59Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.930 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.931 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.932 2 DEBUG os_vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03cd7a1d-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03cd7a1d-2b, col_values=(('external_ids', {'iface-id': '03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:12:c0', 'vm-uuid': '569d484f-0c0f-4b2c-aca5-e88654f9501f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:04 compute-0 NetworkManager[44949]: <info>  [1759846984.9411] manager: (tap03cd7a1d-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:04 compute-0 nova_compute[259550]: 2025-10-07 14:23:04.948 2 INFO os_vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b')
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.019 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.020 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.020 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:de:12:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.020 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Using config drive
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.041 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:05 compute-0 ceph-mon[74295]: pgmap v1782: 305 pgs: 305 active+clean; 241 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 284 op/s
Oct 07 14:23:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/473774807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/687310136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.663 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating config drive at /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.671 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlbhn4y5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.837 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlbhn4y5" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.862 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:05 compute-0 nova_compute[259550]: 2025-10-07 14:23:05.866 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 242 op/s
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.109 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.111 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deleting local config drive /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config because it was imported into RBD.
Oct 07 14:23:06 compute-0 kernel: tap03cd7a1d-2b: entered promiscuous mode
Oct 07 14:23:06 compute-0 NetworkManager[44949]: <info>  [1759846986.1725] manager: (tap03cd7a1d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct 07 14:23:06 compute-0 ovn_controller[151684]: 2025-10-07T14:23:06Z|00964|binding|INFO|Claiming lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for this chassis.
Oct 07 14:23:06 compute-0 ovn_controller[151684]: 2025-10-07T14:23:06Z|00965|binding|INFO|03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd: Claiming fa:16:3e:de:12:c0 10.100.0.5
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.185 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.187 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.188 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:06 compute-0 ovn_controller[151684]: 2025-10-07T14:23:06Z|00966|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd ovn-installed in OVS
Oct 07 14:23:06 compute-0 ovn_controller[151684]: 2025-10-07T14:23:06Z|00967|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd up in Southbound
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:06 compute-0 systemd-udevd[353491]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d324244-691b-4617-bf95-7ae5a85b7369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 systemd-machined[214580]: New machine qemu-118-instance-0000005f.
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.216 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updated VIF entry in instance network info cache for port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.216 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:06 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Oct 07 14:23:06 compute-0 NetworkManager[44949]: <info>  [1759846986.2234] device (tap03cd7a1d-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:23:06 compute-0 NetworkManager[44949]: <info>  [1759846986.2243] device (tap03cd7a1d-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.238 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1afe6b-3392-417b-b365-68f133e050b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.240 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.241 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5825eca8-aa8f-462b-a82e-bd391ddf5506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.268 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e2275889-4a18-4e1c-881e-c0d5fd9a7c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a60a0ecd-f326-4f58-b9d3-089f34b56ee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353503, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7a0b1a-606d-4dfc-81f1-43e39fd37fb4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353505, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353505, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.313 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:06 compute-0 nova_compute[259550]: 2025-10-07 14:23:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:07 compute-0 ceph-mon[74295]: pgmap v1783: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 242 op/s
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.059 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.0593636, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.060 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Started (Lifecycle Event)
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.083 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.087 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.061973, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.087 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Paused (Lifecycle Event)
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.112 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.136 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.358 2 DEBUG nova.compute.manager [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.360 2 DEBUG nova.compute.manager [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Processing event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.360 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.366 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.367 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.366076, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.367 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Resumed (Lifecycle Event)
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.373 2 INFO nova.virt.libvirt.driver [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance spawned successfully.
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.374 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:23:07 compute-0 ovn_controller[151684]: 2025-10-07T14:23:07Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:7c:47 10.100.0.14
Oct 07 14:23:07 compute-0 ovn_controller[151684]: 2025-10-07T14:23:07Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:7c:47 10.100.0.14
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.401 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.408 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.412 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.413 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.413 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.414 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.414 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.415 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.442 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.476 2 INFO nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 7.81 seconds to spawn the instance on the hypervisor.
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.476 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.572 2 INFO nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 8.84 seconds to build instance.
Oct 07 14:23:07 compute-0 nova_compute[259550]: 2025-10-07 14:23:07.595 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Oct 07 14:23:08 compute-0 podman[353549]: 2025-10-07 14:23:08.090099134 +0000 UTC m=+0.070847734 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 07 14:23:08 compute-0 podman[353548]: 2025-10-07 14:23:08.116172091 +0000 UTC m=+0.100739093 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:23:08 compute-0 ceph-mon[74295]: pgmap v1784: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.520 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.520 2 DEBUG nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] No waiting events found dispatching network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.520 2 WARNING nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received unexpected event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for instance with vm_state active and task_state None.
Oct 07 14:23:09 compute-0 nova_compute[259550]: 2025-10-07 14:23:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 235 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 249 op/s
Oct 07 14:23:11 compute-0 ceph-mon[74295]: pgmap v1785: 305 pgs: 305 active+clean; 235 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 249 op/s
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.050 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.052 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.053 2 INFO nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Terminating instance
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.054 2 DEBUG nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:23:11 compute-0 kernel: tap03cd7a1d-2b (unregistering): left promiscuous mode
Oct 07 14:23:11 compute-0 NetworkManager[44949]: <info>  [1759846991.1080] device (tap03cd7a1d-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00968|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=0)
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00969|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down in Southbound
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00970|binding|INFO|Removing iface tap03cd7a1d-2b ovn-installed in OVS
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.126 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.128 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.129 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24271953-256d-4e90-bc16-30ac5f732c47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 07 14:23:11 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 4.503s CPU time.
Oct 07 14:23:11 compute-0 systemd-machined[214580]: Machine qemu-118-instance-0000005f terminated.
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.178 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[111d226d-d38d-4d27-a2d5-b88ac449cdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.183 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d4142d-1073-4cd6-a4cd-56883a59fa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.212 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff24d67-0595-42e2-b890-6dc23dc56a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.229 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[35a462bd-4f14-4da1-8914-5435ffffb37b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 916, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 916, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353597, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.243 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[087981bd-5aaf-46ab-96cf-762b960c0af4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353598, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353598, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.245 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.250 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.250 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:11 compute-0 kernel: tap03cd7a1d-2b: entered promiscuous mode
Oct 07 14:23:11 compute-0 NetworkManager[44949]: <info>  [1759846991.2697] manager: (tap03cd7a1d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00971|binding|INFO|Claiming lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for this chassis.
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00972|binding|INFO|03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd: Claiming fa:16:3e:de:12:c0 10.100.0.5
Oct 07 14:23:11 compute-0 kernel: tap03cd7a1d-2b (unregistering): left promiscuous mode
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.279 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.281 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.293 2 INFO nova.virt.libvirt.driver [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance destroyed successfully.
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.293 2 DEBUG nova.objects.instance [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.298 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8644c197-d994-4f76-8790-d21ac89ec0a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00973|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd ovn-installed in OVS
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00974|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd up in Southbound
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00975|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=1)
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00976|if_status|INFO|Dropped 2 log messages in last 113 seconds (most recently, 113 seconds ago) due to excessive rate
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00977|if_status|INFO|Not setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down as sb is readonly
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00978|binding|INFO|Removing iface tap03cd7a1d-2b ovn-installed in OVS
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00979|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=0)
Oct 07 14:23:11 compute-0 ovn_controller[151684]: 2025-10-07T14:23:11Z|00980|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down in Southbound
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.312 2 DEBUG nova.virt.libvirt.vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:07Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.313 2 DEBUG nova.network.os_vif_util [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.314 2 DEBUG nova.network.os_vif_util [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.314 2 DEBUG os_vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03cd7a1d-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.318 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.324 2 INFO os_vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b')
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.328 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e905ad09-fe0c-493d-868f-8cd64cb6523d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.330 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aab12013-1dae-4ca8-a43e-d317fe32b4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.363 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35752ebd-8bff-4f8d-93ed-711de3b5c17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf855b3-73b5-4c66-b70f-3fb5eea9164c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 916, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 916, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353631, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1b7f10-6674-42f3-8942-5cc4008e9581]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353632, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353632, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.403 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.407 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.408 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.409 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f2c339-ebc5-4a91-8eb2-86ed63fa60d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.454 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7320cb-dec9-425c-b7ae-ae1b0f4801ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.458 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e7343633-fe70-4294-a89c-a141b7d560b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.487 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5207d359-00e0-4fea-978f-55b8e4ba3a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[572a62b4-e03e-41ee-93dd-59d1d025db8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 916, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 916, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353638, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0009b2dd-acb9-44ac-98e1-3d91e97416b9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353639, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353639, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.525 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.527 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.854 2 INFO nova.virt.libvirt.driver [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deleting instance files /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f_del
Oct 07 14:23:11 compute-0 nova_compute[259550]: 2025-10-07 14:23:11.855 2 INFO nova.virt.libvirt.driver [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deletion of /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f_del complete
Oct 07 14:23:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.033 2 INFO nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.033 2 DEBUG oslo.service.loopingcall [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.033 2 DEBUG nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.034 2 DEBUG nova.network.neutron [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.076 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846977.0753396, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.077 2 INFO nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Stopped (Lifecycle Event)
Oct 07 14:23:12 compute-0 nova_compute[259550]: 2025-10-07 14:23:12.097 2 DEBUG nova.compute.manager [None req-4ed58ada-2a2a-4ed1-bdb7-7f1ebc9cb3b0 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.015 2 DEBUG nova.network.neutron [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.035 2 INFO nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 1.00 seconds to deallocate network for instance.
Oct 07 14:23:13 compute-0 ceph-mon[74295]: pgmap v1786: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.076 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.077 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:13 compute-0 sudo[353641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:13 compute-0 sudo[353641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.169 2 DEBUG oslo_concurrency.processutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:13 compute-0 sudo[353641]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:13 compute-0 sudo[353666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:23:13 compute-0 sudo[353666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:13 compute-0 sudo[353666]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:13 compute-0 sudo[353692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:13 compute-0 sudo[353692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:13 compute-0 sudo[353692]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:13 compute-0 sudo[353736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:23:13 compute-0 sudo[353736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.383 2 DEBUG nova.compute.manager [req-177ece94-f21f-45fe-a939-7d94a8a008e4 req-2d64d94b-9a7d-4bc3-8b00-0723778dbab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-deleted-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051669186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.622 2 DEBUG oslo_concurrency.processutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.629 2 DEBUG nova.compute.provider_tree [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.650 2 DEBUG nova.scheduler.client.report [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.675 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.695 2 INFO nova.scheduler.client.report [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 569d484f-0c0f-4b2c-aca5-e88654f9501f
Oct 07 14:23:13 compute-0 nova_compute[259550]: 2025-10-07 14:23:13.765 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:13 compute-0 sudo[353736]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d7711c8f-cd6b-493b-860e-1cf8387973ac does not exist
Oct 07 14:23:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 89ca9e05-c5ec-4acf-b4e7-66c0801a5e52 does not exist
Oct 07 14:23:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 41c8a871-5009-41b3-bb44-f03176687eb5 does not exist
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:23:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:23:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:23:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 216 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Oct 07 14:23:13 compute-0 sudo[353795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:13 compute-0 sudo[353795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:13 compute-0 sudo[353795]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:14 compute-0 sudo[353820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:23:14 compute-0 sudo[353820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:14 compute-0 sudo[353820]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3051669186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:23:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:23:14 compute-0 sudo[353845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:14 compute-0 sudo[353845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:14 compute-0 sudo[353845]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:14 compute-0 sudo[353870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:23:14 compute-0 sudo[353870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.374 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.375 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.394 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.462 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.462 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.471 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.471 2 INFO nova.compute.claims [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.516700485 +0000 UTC m=+0.042058295 container create ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:23:14 compute-0 systemd[1]: Started libpod-conmon-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope.
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.49705072 +0000 UTC m=+0.022408570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.595 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.634164824 +0000 UTC m=+0.159522694 container init ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.646946085 +0000 UTC m=+0.172303915 container start ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.651158568 +0000 UTC m=+0.176516418 container attach ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 14:23:14 compute-0 gifted_brown[353952]: 167 167
Oct 07 14:23:14 compute-0 systemd[1]: libpod-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope: Deactivated successfully.
Oct 07 14:23:14 compute-0 conmon[353952]: conmon ea424bb0b6efdfea74e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope/container/memory.events
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.655560996 +0000 UTC m=+0.180918836 container died ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca1273accc457cb54595a9b777d678e03e445dfc4b7536e5b3394e3caf3e40d-merged.mount: Deactivated successfully.
Oct 07 14:23:14 compute-0 podman[353935]: 2025-10-07 14:23:14.709571799 +0000 UTC m=+0.234929599 container remove ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:23:14 compute-0 systemd[1]: libpod-conmon-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope: Deactivated successfully.
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.749 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.751 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.751 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.752 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.752 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.754 2 INFO nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Terminating instance
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.755 2 DEBUG nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:23:14 compute-0 kernel: tap132f4c57-4e (unregistering): left promiscuous mode
Oct 07 14:23:14 compute-0 NetworkManager[44949]: <info>  [1759846994.8334] device (tap132f4c57-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:14 compute-0 ovn_controller[151684]: 2025-10-07T14:23:14Z|00981|binding|INFO|Releasing lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d from this chassis (sb_readonly=0)
Oct 07 14:23:14 compute-0 ovn_controller[151684]: 2025-10-07T14:23:14Z|00982|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d down in Southbound
Oct 07 14:23:14 compute-0 ovn_controller[151684]: 2025-10-07T14:23:14Z|00983|binding|INFO|Removing iface tap132f4c57-4e ovn-installed in OVS
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.854 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7c:47 10.100.0.14'], port_security=['fa:16:3e:06:7c:47 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.856 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.879 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc546f45-4c07-4ba8-8521-46e4169918c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:14 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 07 14:23:14 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 13.519s CPU time.
Oct 07 14:23:14 compute-0 systemd-machined[214580]: Machine qemu-116-instance-0000005e terminated.
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.928 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c315229a-ebf9-400e-ab00-0513f7832d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.931 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[22fa438b-d499-44dc-a1b5-d8a684e61f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:14 compute-0 podman[353999]: 2025-10-07 14:23:14.938755453 +0000 UTC m=+0.062084910 container create dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.963 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaab0a5-d199-45dc-84d2-25da55987880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.995 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c36c7bd7-5e4d-4824-8153-317c039a22f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 24, 'rx_bytes': 958, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 24, 'rx_bytes': 958, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354024, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:14 compute-0 nova_compute[259550]: 2025-10-07 14:23:14.996 2 INFO nova.virt.libvirt.driver [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance destroyed successfully.
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.003 2 DEBUG nova.objects.instance [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:15 compute-0 systemd[1]: Started libpod-conmon-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope.
Oct 07 14:23:15 compute-0 podman[353999]: 2025-10-07 14:23:14.917572057 +0000 UTC m=+0.040901564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.019 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6244f8-5bd6-443b-84c1-258d5536e023]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354035, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354035, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.019 2 DEBUG nova.virt.libvirt.vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:54Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.020 2 DEBUG nova.network.os_vif_util [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.020 2 DEBUG nova.network.os_vif_util [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.021 2 DEBUG os_vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132f4c57-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.026 2 INFO os_vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e')
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.027 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.028 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.029 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.029 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:15 compute-0 ceph-mon[74295]: pgmap v1787: 305 pgs: 305 active+clean; 216 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Oct 07 14:23:15 compute-0 podman[353999]: 2025-10-07 14:23:15.074704776 +0000 UTC m=+0.198034243 container init dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:23:15 compute-0 podman[353999]: 2025-10-07 14:23:15.082061002 +0000 UTC m=+0.205390469 container start dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:23:15 compute-0 podman[353999]: 2025-10-07 14:23:15.085654498 +0000 UTC m=+0.208984005 container attach dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:23:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756524447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.131 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.138 2 DEBUG nova.compute.provider_tree [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.158 2 DEBUG nova.scheduler.client.report [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.182 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.183 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.227 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.227 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.245 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.261 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.353 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846980.3300617, fdd84566-c63e-469a-9173-55b845d32171 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.353 2 INFO nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Stopped (Lifecycle Event)
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.373 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.374 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.375 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating image(s)
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.396 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.425 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.447 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.449 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.482 2 DEBUG nova.compute.manager [None req-a8dbbb45-bdee-466f-9d80-b494890cc6a6 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.489 2 INFO nova.virt.libvirt.driver [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deleting instance files /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a_del
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.490 2 INFO nova.virt.libvirt.driver [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deletion of /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a_del complete
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.520 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.521 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.522 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.522 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.538 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.541 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.581 2 INFO nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.581 2 DEBUG oslo.service.loopingcall [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.582 2 DEBUG nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.582 2 DEBUG nova.network.neutron [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.618 2 DEBUG nova.policy [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef4a6b36d606485ea7b0334d25dd23bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9fd5efb5e7c240a997805db53864ecfc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.857 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.904 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] resizing rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:23:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Oct 07 14:23:15 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.986 2 DEBUG nova.objects.instance [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'migration_context' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:15.999 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Ensure instance console log exists: /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3756524447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:16 compute-0 serene_allen[354036]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:23:16 compute-0 serene_allen[354036]: --> relative data size: 1.0
Oct 07 14:23:16 compute-0 serene_allen[354036]: --> All data devices are unavailable
Oct 07 14:23:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:16 compute-0 systemd[1]: libpod-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Deactivated successfully.
Oct 07 14:23:16 compute-0 podman[353999]: 2025-10-07 14:23:16.270090047 +0000 UTC m=+1.393419544 container died dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:23:16 compute-0 systemd[1]: libpod-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Consumed 1.108s CPU time.
Oct 07 14:23:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c-merged.mount: Deactivated successfully.
Oct 07 14:23:16 compute-0 podman[353999]: 2025-10-07 14:23:16.336130692 +0000 UTC m=+1.459460149 container remove dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:23:16 compute-0 systemd[1]: libpod-conmon-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Deactivated successfully.
Oct 07 14:23:16 compute-0 sudo[353870]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:16 compute-0 sudo[354266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:16 compute-0 sudo[354266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:16 compute-0 sudo[354266]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:16 compute-0 sudo[354291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:23:16 compute-0 sudo[354291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:16 compute-0 sudo[354291]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.549 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:23:16 compute-0 sudo[354316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:16 compute-0 sudo[354316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:16 compute-0 sudo[354316]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:16 compute-0 sudo[354341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:23:16 compute-0 sudo[354341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:16.694 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:16.695 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.867 2 DEBUG nova.network.neutron [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.880 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Successfully created port: cef2a5c2-ea82-48c7-8da3-0028586c49aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.885 2 INFO nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 1.30 seconds to deallocate network for instance.
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.922 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.923 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:16 compute-0 podman[354403]: 2025-10-07 14:23:16.958980274 +0000 UTC m=+0.039028384 container create bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:23:16 compute-0 nova_compute[259550]: 2025-10-07 14:23:16.993 2 DEBUG oslo_concurrency.processutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:17 compute-0 systemd[1]: Started libpod-conmon-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope.
Oct 07 14:23:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:16.944479256 +0000 UTC m=+0.024527386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:17.04753628 +0000 UTC m=+0.127584420 container init bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:17.059058518 +0000 UTC m=+0.139106628 container start bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:23:17 compute-0 youthful_sinoussi[354419]: 167 167
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:17.063093666 +0000 UTC m=+0.143141776 container attach bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:23:17 compute-0 systemd[1]: libpod-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope: Deactivated successfully.
Oct 07 14:23:17 compute-0 conmon[354419]: conmon bc6dffe67d3b80a98747 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope/container/memory.events
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:17.064515334 +0000 UTC m=+0.144563444 container died bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:23:17 compute-0 ceph-mon[74295]: pgmap v1788: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Oct 07 14:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7363ad5e3a7a612ea6181f51a63c844c9af041d3dd83c97fb3e0e21dd1f9dadd-merged.mount: Deactivated successfully.
Oct 07 14:23:17 compute-0 podman[354403]: 2025-10-07 14:23:17.111615633 +0000 UTC m=+0.191663743 container remove bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:23:17 compute-0 systemd[1]: libpod-conmon-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope: Deactivated successfully.
Oct 07 14:23:17 compute-0 podman[354463]: 2025-10-07 14:23:17.365401574 +0000 UTC m=+0.106220070 container create c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:23:17 compute-0 podman[354463]: 2025-10-07 14:23:17.284454131 +0000 UTC m=+0.025272627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140728377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.475 2 DEBUG oslo_concurrency.processutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.482 2 DEBUG nova.compute.provider_tree [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.498 2 DEBUG nova.scheduler.client.report [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.520 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:17 compute-0 systemd[1]: Started libpod-conmon-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope.
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.548 2 INFO nova.scheduler.client.report [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance c53d76d2-4525-4798-bcf4-ad2e6b18071a
Oct 07 14:23:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:17 compute-0 podman[354463]: 2025-10-07 14:23:17.600517596 +0000 UTC m=+0.341336092 container init c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.602 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:17 compute-0 podman[354463]: 2025-10-07 14:23:17.606959249 +0000 UTC m=+0.347777735 container start c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:23:17 compute-0 podman[354463]: 2025-10-07 14:23:17.646744222 +0000 UTC m=+0.387562728 container attach c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.794 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Successfully updated port: cef2a5c2-ea82-48c7-8da3-0028586c49aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.830 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.831 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquired lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:17 compute-0 nova_compute[259550]: 2025-10-07 14:23:17.832 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:23:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Oct 07 14:23:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1140728377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.188 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]: {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     "0": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "devices": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "/dev/loop3"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             ],
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_name": "ceph_lv0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_size": "21470642176",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "name": "ceph_lv0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "tags": {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_name": "ceph",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.crush_device_class": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.encrypted": "0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_id": "0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.vdo": "0"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             },
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "vg_name": "ceph_vg0"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         }
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     ],
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     "1": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "devices": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "/dev/loop4"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             ],
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_name": "ceph_lv1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_size": "21470642176",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "name": "ceph_lv1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "tags": {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_name": "ceph",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.crush_device_class": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.encrypted": "0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_id": "1",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.vdo": "0"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             },
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "vg_name": "ceph_vg1"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         }
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     ],
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     "2": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "devices": [
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "/dev/loop5"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             ],
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_name": "ceph_lv2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_size": "21470642176",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "name": "ceph_lv2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "tags": {
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.cluster_name": "ceph",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.crush_device_class": "",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.encrypted": "0",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osd_id": "2",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:                 "ceph.vdo": "0"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             },
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "type": "block",
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:             "vg_name": "ceph_vg2"
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:         }
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]:     ]
Oct 07 14:23:18 compute-0 crazy_stonebraker[354481]: }
Oct 07 14:23:18 compute-0 systemd[1]: libpod-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope: Deactivated successfully.
Oct 07 14:23:18 compute-0 podman[354463]: 2025-10-07 14:23:18.400304057 +0000 UTC m=+1.141122633 container died c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4-merged.mount: Deactivated successfully.
Oct 07 14:23:18 compute-0 podman[354463]: 2025-10-07 14:23:18.511641732 +0000 UTC m=+1.252460218 container remove c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:23:18 compute-0 systemd[1]: libpod-conmon-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope: Deactivated successfully.
Oct 07 14:23:18 compute-0 sudo[354341]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:18 compute-0 sudo[354502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:18 compute-0 sudo[354502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:18 compute-0 sudo[354502]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.689 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.692 2 WARNING nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received unexpected event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with vm_state deleted and task_state None.
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.692 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-deleted-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-changed-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Refreshing instance network info cache due to event network-changed-cef2a5c2-ea82-48c7-8da3-0028586c49aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:23:18 compute-0 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:18 compute-0 sudo[354527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:23:18 compute-0 sudo[354527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:18 compute-0 sudo[354527]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:18 compute-0 sudo[354552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:18 compute-0 sudo[354552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:18 compute-0 sudo[354552]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:18 compute-0 sudo[354577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:23:18 compute-0 sudo[354577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:19 compute-0 ceph-mon[74295]: pgmap v1789: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.241909865 +0000 UTC m=+0.053764988 container create a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:23:19 compute-0 systemd[1]: Started libpod-conmon-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope.
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.211004779 +0000 UTC m=+0.022859892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.394488192 +0000 UTC m=+0.206343305 container init a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.402817584 +0000 UTC m=+0.214672677 container start a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:23:19 compute-0 pensive_mendel[354658]: 167 167
Oct 07 14:23:19 compute-0 systemd[1]: libpod-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope: Deactivated successfully.
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.43934101 +0000 UTC m=+0.251196163 container attach a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.44005028 +0000 UTC m=+0.251905383 container died a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1f57d7050a010d76742f039d9cc2f135ac9abb8e115dacca7d5d32d228d734c-merged.mount: Deactivated successfully.
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.581 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.604 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Releasing lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance network_info: |[{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Refreshing network info cache for port cef2a5c2-ea82-48c7-8da3-0028586c49aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:23:19 compute-0 podman[354642]: 2025-10-07 14:23:19.608463759 +0000 UTC m=+0.420319042 container remove a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.609 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start _get_guest_xml network_info=[{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.616 2 WARNING nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:23:19 compute-0 systemd[1]: libpod-conmon-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope: Deactivated successfully.
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.625 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.626 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.632 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.632 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.633 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.633 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.637 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:23:19 compute-0 nova_compute[259550]: 2025-10-07 14:23:19.640 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:19 compute-0 podman[354683]: 2025-10-07 14:23:19.764795387 +0000 UTC m=+0.023131830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:23:19 compute-0 podman[354683]: 2025-10-07 14:23:19.877349124 +0000 UTC m=+0.135685537 container create 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:23:19 compute-0 systemd[1]: Started libpod-conmon-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope.
Oct 07 14:23:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 141 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 216 op/s
Oct 07 14:23:19 compute-0 podman[354683]: 2025-10-07 14:23:19.980192722 +0000 UTC m=+0.238529155 container init 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:23:19 compute-0 podman[354683]: 2025-10-07 14:23:19.988085363 +0000 UTC m=+0.246421786 container start 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:23:19 compute-0 podman[354683]: 2025-10-07 14:23:19.999039016 +0000 UTC m=+0.257375709 container attach 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4940471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.131 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:20 compute-0 ceph-mon[74295]: pgmap v1790: 305 pgs: 305 active+clean; 141 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 216 op/s
Oct 07 14:23:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4940471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.160 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.165 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.166545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000166662, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 254, "total_data_size": 1355732, "memory_usage": 1378944, "flush_reason": "Manual Compaction"}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000214554, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1340073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36657, "largest_seqno": 37736, "table_properties": {"data_size": 1334880, "index_size": 2589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12272, "raw_average_key_size": 20, "raw_value_size": 1324070, "raw_average_value_size": 2206, "num_data_blocks": 114, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846922, "oldest_key_time": 1759846922, "file_creation_time": 1759847000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 48012 microseconds, and 8112 cpu microseconds.
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.214594) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1340073 bytes OK
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.214616) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216789) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216803) EVENT_LOG_v1 {"time_micros": 1759847000216799, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1350546, prev total WAL file size 1350546, number of live WAL files 2.
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.217455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1308KB)], [80(8186KB)]
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000217483, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9723476, "oldest_snapshot_seqno": -1}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6244 keys, 8100393 bytes, temperature: kUnknown
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000277028, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8100393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8059860, "index_size": 23847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157964, "raw_average_key_size": 25, "raw_value_size": 7949119, "raw_average_value_size": 1273, "num_data_blocks": 960, "num_entries": 6244, "num_filter_entries": 6244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.277293) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8100393 bytes
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.278973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 135.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.3) write-amplify(6.0) OK, records in: 6767, records dropped: 523 output_compression: NoCompression
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.279008) EVENT_LOG_v1 {"time_micros": 1759847000278995, "job": 46, "event": "compaction_finished", "compaction_time_micros": 59657, "compaction_time_cpu_micros": 17926, "output_level": 6, "num_output_files": 1, "total_output_size": 8100393, "num_input_records": 6767, "num_output_records": 6244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000279591, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000280873, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.217376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166003175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.635 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.637 2 DEBUG nova.virt.libvirt.vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPasswordTestJSON-452511673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:15Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.637 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.638 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.640 2 DEBUG nova.objects.instance [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'pci_devices' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.708 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <uuid>ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</uuid>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <name>instance-00000060</name>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerPasswordTestJSON-server-426897571</nova:name>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:23:19</nova:creationTime>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:user uuid="ef4a6b36d606485ea7b0334d25dd23bd">tempest-ServerPasswordTestJSON-452511673-project-member</nova:user>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:project uuid="9fd5efb5e7c240a997805db53864ecfc">tempest-ServerPasswordTestJSON-452511673</nova:project>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <nova:port uuid="cef2a5c2-ea82-48c7-8da3-0028586c49aa">
Oct 07 14:23:20 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <system>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="serial">ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="uuid">ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </system>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <os>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </os>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <features>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </features>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk">
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config">
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:20 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:53:69:ab"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <target dev="tapcef2a5c2-ea"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/console.log" append="off"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <video>
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </video>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:23:20 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:23:20 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:23:20 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:23:20 compute-0 nova_compute[259550]: </domain>
Oct 07 14:23:20 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.709 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Preparing to wait for external event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.711 2 DEBUG nova.virt.libvirt.vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPasswordTes
tJSON-452511673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:15Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.711 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.712 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.712 2 DEBUG os_vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.714 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcef2a5c2-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcef2a5c2-ea, col_values=(('external_ids', {'iface-id': 'cef2a5c2-ea82-48c7-8da3-0028586c49aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:69:ab', 'vm-uuid': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:20 compute-0 NetworkManager[44949]: <info>  [1759847000.7218] manager: (tapcef2a5c2-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.732 2 INFO os_vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea')
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.810 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.810 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.811 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No VIF found with MAC fa:16:3e:53:69:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.811 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Using config drive
Oct 07 14:23:20 compute-0 nova_compute[259550]: 2025-10-07 14:23:20.829 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:20 compute-0 adoring_franklin[354718]: {
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_id": 2,
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "type": "bluestore"
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:     },
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_id": 1,
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "type": "bluestore"
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:     },
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_id": 0,
Oct 07 14:23:20 compute-0 adoring_franklin[354718]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:23:21 compute-0 adoring_franklin[354718]:         "type": "bluestore"
Oct 07 14:23:21 compute-0 adoring_franklin[354718]:     }
Oct 07 14:23:21 compute-0 adoring_franklin[354718]: }
Oct 07 14:23:21 compute-0 systemd[1]: libpod-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Deactivated successfully.
Oct 07 14:23:21 compute-0 systemd[1]: libpod-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Consumed 1.031s CPU time.
Oct 07 14:23:21 compute-0 podman[354826]: 2025-10-07 14:23:21.071172314 +0000 UTC m=+0.026785147 container died 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:23:21 compute-0 podman[354815]: 2025-10-07 14:23:21.126409229 +0000 UTC m=+0.105632923 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:23:21 compute-0 podman[354812]: 2025-10-07 14:23:21.16347819 +0000 UTC m=+0.143447964 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964-merged.mount: Deactivated successfully.
Oct 07 14:23:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3166003175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:21 compute-0 podman[354826]: 2025-10-07 14:23:21.352280074 +0000 UTC m=+0.307892877 container remove 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:23:21 compute-0 systemd[1]: libpod-conmon-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Deactivated successfully.
Oct 07 14:23:21 compute-0 sudo[354577]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:23:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:23:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5876a73c-09af-4e8b-9bea-dabb34e2214a does not exist
Oct 07 14:23:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev fa78878c-5e66-487f-b9d4-20af59f868ba does not exist
Oct 07 14:23:21 compute-0 sudo[354870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:23:21 compute-0 sudo[354870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:21 compute-0 sudo[354870]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:21 compute-0 sudo[354895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:23:21 compute-0 sudo[354895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:23:21 compute-0 sudo[354895]: pam_unix(sudo:session): session closed for user root
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.635 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating config drive at /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.639 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3evq2bn5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:21.697 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.782 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3evq2bn5" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.804 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.807 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.945 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.946 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:21 compute-0 nova_compute[259550]: 2025-10-07 14:23:21.970 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:23:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.067 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.068 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.075 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.075 2 INFO nova.compute.claims [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.207 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updated VIF entry in instance network info cache for port cef2a5c2-ea82-48c7-8da3-0028586c49aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.208 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.216 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.256 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.258 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.259 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deleting local config drive /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config because it was imported into RBD.
Oct 07 14:23:22 compute-0 kernel: tapcef2a5c2-ea: entered promiscuous mode
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.3192] manager: (tapcef2a5c2-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_controller[151684]: 2025-10-07T14:23:22Z|00984|binding|INFO|Claiming lport cef2a5c2-ea82-48c7-8da3-0028586c49aa for this chassis.
Oct 07 14:23:22 compute-0 ovn_controller[151684]: 2025-10-07T14:23:22Z|00985|binding|INFO|cef2a5c2-ea82-48c7-8da3-0028586c49aa: Claiming fa:16:3e:53:69:ab 10.100.0.7
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.334 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:69:ab 10.100.0.7'], port_security=['fa:16:3e:53:69:ab 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fd5efb5e7c240a997805db53864ecfc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a65ebf45-161a-4778-bc19-6a690dbd2208', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efcc1312-35a2-444f-b47e-40e2e08c733d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cef2a5c2-ea82-48c7-8da3-0028586c49aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.335 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cef2a5c2-ea82-48c7-8da3-0028586c49aa in datapath ac82160b-5221-4bd6-bab0-b5d08c95e428 bound to our chassis
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.336 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac82160b-5221-4bd6-bab0-b5d08c95e428
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0cde73-3841-48c2-b55d-5fd026b22465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.353 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac82160b-51 in ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.356 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac82160b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.356 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa6ebee-f4fd-4c28-b1f6-7fdb3ecc83e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.357 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[675cf287-79d7-48e5-9a04-2135b2c3d6ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 systemd-udevd[354985]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.3735] device (tapcef2a5c2-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.3750] device (tapcef2a5c2-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:23:22 compute-0 systemd-machined[214580]: New machine qemu-119-instance-00000060.
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.377 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cc040ba1-42bb-4d2a-81ff-0151b691089f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.406 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97365a04-96a3-475c-a979-cc98b5bb00cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_controller[151684]: 2025-10-07T14:23:22Z|00986|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa ovn-installed in OVS
Oct 07 14:23:22 compute-0 ovn_controller[151684]: 2025-10-07T14:23:22Z|00987|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa up in Southbound
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.439 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a80c30-fdc1-45bf-bddc-1ed590cbaad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.445 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8480ae70-c328-433c-8368-82015b8587d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.4467] manager: (tapac82160b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Oct 07 14:23:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:23:22 compute-0 ceph-mon[74295]: pgmap v1791: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.490 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb117214-e494-4174-a033-0dbb7a1c7645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.493 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4c23c9-8ad6-499b-8bfc-e57b6e901a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.5150] device (tapac82160b-50): carrier: link connected
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.521 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab1e8db-06da-4a89-ba4e-fd4592e7483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b92c05-e420-4a27-bb81-35b0189360fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac82160b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:87:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758607, 'reachable_time': 19310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355026, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.558 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8cb42-0885-4696-8c4f-2742eb3b14ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:87a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758607, 'tstamp': 758607}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355027, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.577 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05dae76e-7ff3-48be-b0cc-ee102cc3f918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac82160b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:87:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758607, 'reachable_time': 19310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355028, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d284d36e-4978-408b-8143-f646a195fe4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:23:22
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root']
Oct 07 14:23:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557800978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[360399e5-f941-4133-ae48-6ed02b142520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.674 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac82160b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.674 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.675 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac82160b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 kernel: tapac82160b-50: entered promiscuous mode
Oct 07 14:23:22 compute-0 NetworkManager[44949]: <info>  [1759847002.6780] manager: (tapac82160b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.680 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac82160b-50, col_values=(('external_ids', {'iface-id': '5eb8784d-43ef-482a-a6dc-a2613b333a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_controller[151684]: 2025-10-07T14:23:22Z|00988|binding|INFO|Releasing lport 5eb8784d-43ef-482a-a6dc-a2613b333a0f from this chassis (sb_readonly=0)
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.685 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.712 2 DEBUG nova.compute.provider_tree [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.713 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.714 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30e27fb9-5c94-4b05-a0d3-7dc3f7df661f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.715 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-ac82160b-5221-4bd6-bab0-b5d08c95e428
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID ac82160b-5221-4bd6-bab0-b5d08c95e428
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:23:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.717 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'env', 'PROCESS_TAG=haproxy-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac82160b-5221-4bd6-bab0-b5d08c95e428.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.730 2 DEBUG nova.scheduler.client.report [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.751 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.752 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.821 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.822 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.846 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.867 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.899 2 DEBUG nova.compute.manager [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.901 2 DEBUG nova.compute.manager [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Processing event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:23:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.981 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.983 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:23:22 compute-0 nova_compute[259550]: 2025-10-07 14:23:22.983 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating image(s)
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.004 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.029 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.053 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.057 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:23 compute-0 podman[355116]: 2025-10-07 14:23:23.110056373 +0000 UTC m=+0.063042065 container create 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.137 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.140 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.140 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.141 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.168 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:23 compute-0 podman[355116]: 2025-10-07 14:23:23.075078929 +0000 UTC m=+0.028064681 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:23:23 compute-0 systemd[1]: Started libpod-conmon-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.177 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 49805eb1-6f40-48f8-bcdc-de12de83733b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45254348c58ccf37cde64c4f9e929821eb371c9ded4236404aedd3233f9a4d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:23:23 compute-0 podman[355116]: 2025-10-07 14:23:23.212916921 +0000 UTC m=+0.165902633 container init 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:23:23 compute-0 podman[355116]: 2025-10-07 14:23:23.21881571 +0000 UTC m=+0.171801402 container start 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:23:23 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : New worker (355207) forked
Oct 07 14:23:23 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : Loading success.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.354 2 DEBUG nova.policy [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.485 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 49805eb1-6f40-48f8-bcdc-de12de83733b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3557800978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.549 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.598 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.5980296, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.599 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Started (Lifecycle Event)
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.601 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.607 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.637 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.639 2 INFO nova.virt.libvirt.driver [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance spawned successfully.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.639 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.644 2 DEBUG nova.objects.instance [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.647 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.667 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.668 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Ensure instance console log exists: /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.668 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.669 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.669 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.670 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.671 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.5981593, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.671 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Paused (Lifecycle Event)
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.675 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.675 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.676 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.676 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.677 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.677 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.705 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.605789, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Resumed (Lifecycle Event)
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.734 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.737 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.760 2 INFO nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 8.39 seconds to spawn the instance on the hypervisor.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.761 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.768 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.819 2 INFO nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 9.37 seconds to build instance.
Oct 07 14:23:23 compute-0 nova_compute[259550]: 2025-10-07 14:23:23.839 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 187 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct 07 14:23:24 compute-0 nova_compute[259550]: 2025-10-07 14:23:24.035 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Successfully created port: 853f22d9-282d-4428-a55e-f35727720eb3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:23:24 compute-0 ceph-mon[74295]: pgmap v1792: 305 pgs: 305 active+clean; 187 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct 07 14:23:24 compute-0 nova_compute[259550]: 2025-10-07 14:23:24.961 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Successfully updated port: 853f22d9-282d-4428-a55e-f35727720eb3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:23:24 compute-0 nova_compute[259550]: 2025-10-07 14:23:24.978 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:24 compute-0 nova_compute[259550]: 2025-10-07 14:23:24.979 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:24 compute-0 nova_compute[259550]: 2025-10-07 14:23:24.979 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.091 2 DEBUG nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.091 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.092 2 WARNING nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received unexpected event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with vm_state active and task_state None.
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.204 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.283 2 DEBUG nova.compute.manager [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-changed-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.284 2 DEBUG nova.compute.manager [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Refreshing instance network info cache due to event network-changed-853f22d9-282d-4428-a55e-f35727720eb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.284 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:25 compute-0 nova_compute[259550]: 2025-10-07 14:23:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846991.290687, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.292 2 INFO nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Stopped (Lifecycle Event)
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.318 2 DEBUG nova.compute.manager [None req-b74eccc1-ac91-4be5-8d21-4307a3edc69a - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.797 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.799 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.800 2 INFO nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Terminating instance
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.801 2 DEBUG nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:23:26 compute-0 kernel: tapcef2a5c2-ea (unregistering): left promiscuous mode
Oct 07 14:23:26 compute-0 NetworkManager[44949]: <info>  [1759847006.8567] device (tapcef2a5c2-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:23:26 compute-0 ovn_controller[151684]: 2025-10-07T14:23:26Z|00989|binding|INFO|Releasing lport cef2a5c2-ea82-48c7-8da3-0028586c49aa from this chassis (sb_readonly=0)
Oct 07 14:23:26 compute-0 ovn_controller[151684]: 2025-10-07T14:23:26Z|00990|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa down in Southbound
Oct 07 14:23:26 compute-0 ovn_controller[151684]: 2025-10-07T14:23:26Z|00991|binding|INFO|Removing iface tapcef2a5c2-ea ovn-installed in OVS
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.881 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:69:ab 10.100.0.7'], port_security=['fa:16:3e:53:69:ab 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fd5efb5e7c240a997805db53864ecfc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a65ebf45-161a-4778-bc19-6a690dbd2208', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efcc1312-35a2-444f-b47e-40e2e08c733d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cef2a5c2-ea82-48c7-8da3-0028586c49aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.882 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cef2a5c2-ea82-48c7-8da3-0028586c49aa in datapath ac82160b-5221-4bd6-bab0-b5d08c95e428 unbound from our chassis
Oct 07 14:23:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac82160b-5221-4bd6-bab0-b5d08c95e428, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:23:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.885 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f227a45-4ea4-449d-932d-228dc96907b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.885 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 namespace which is not needed anymore
Oct 07 14:23:26 compute-0 nova_compute[259550]: 2025-10-07 14:23:26.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:26 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 07 14:23:26 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 4.368s CPU time.
Oct 07 14:23:26 compute-0 systemd-machined[214580]: Machine qemu-119-instance-00000060 terminated.
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : haproxy version is 2.8.14-c23fe91
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : path to executable is /usr/sbin/haproxy
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [WARNING]  (355198) : Exiting Master process...
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [WARNING]  (355198) : Exiting Master process...
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [ALERT]    (355198) : Current worker (355207) exited with code 143 (Terminated)
Oct 07 14:23:27 compute-0 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [WARNING]  (355198) : All workers exited. Exiting... (0)
Oct 07 14:23:27 compute-0 systemd[1]: libpod-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope: Deactivated successfully.
Oct 07 14:23:27 compute-0 conmon[355193]: conmon 3a30c5a21e14b7bd568c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope/container/memory.events
Oct 07 14:23:27 compute-0 podman[355320]: 2025-10-07 14:23:27.036637883 +0000 UTC m=+0.052126613 container died 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:23:27 compute-0 ceph-mon[74295]: pgmap v1793: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.043 2 INFO nova.virt.libvirt.driver [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance destroyed successfully.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.044 2 DEBUG nova.objects.instance [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'resources' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.058 2 DEBUG nova.virt.libvirt.vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPasswordTestJSON-452511673-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:26Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.059 2 DEBUG nova.network.os_vif_util [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.059 2 DEBUG nova.network.os_vif_util [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.060 2 DEBUG os_vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9-userdata-shm.mount: Deactivated successfully.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcef2a5c2-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45254348c58ccf37cde64c4f9e929821eb371c9ded4236404aedd3233f9a4d2-merged.mount: Deactivated successfully.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.072 2 INFO os_vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea')
Oct 07 14:23:27 compute-0 podman[355320]: 2025-10-07 14:23:27.079005006 +0000 UTC m=+0.094493736 container cleanup 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:23:27 compute-0 systemd[1]: libpod-conmon-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope: Deactivated successfully.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.114 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:27 compute-0 podman[355367]: 2025-10-07 14:23:27.150745863 +0000 UTC m=+0.048038915 container remove 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.158 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6427d2-b5ab-4875-a67d-0e92a8e4f068]: (4, ('Tue Oct  7 02:23:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 (3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9)\n3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9\nTue Oct  7 02:23:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 (3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9)\n3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eab685a4-c3f0-41c6-9576-2347b51628e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.161 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac82160b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:27 compute-0 kernel: tapac82160b-50: left promiscuous mode
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.172 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.172 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance network_info: |[{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.173 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.173 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Refreshing network info cache for port 853f22d9-282d-4428-a55e-f35727720eb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.175 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start _get_guest_xml network_info=[{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.185 2 WARNING nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.187 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[221873bc-1145-4738-9de6-bf663feb53cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.198 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.199 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.205 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.205 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.208 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.208 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.212 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.218 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28e4f4f2-c640-4d21-9c14-9edc06ea4ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.219 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c972abf-d47f-4095-9b0b-e297b266b4a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.238 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24a64403-b589-4a57-b34f-2cbc791b539c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758599, 'reachable_time': 24248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355393, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.240 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:23:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.240 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd204d0-2bea-4331-a521-259cd825ddad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dac82160b\x2d5221\x2d4bd6\x2dbab0\x2db5d08c95e428.mount: Deactivated successfully.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.556 2 INFO nova.virt.libvirt.driver [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deleting instance files /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_del
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.557 2 INFO nova.virt.libvirt.driver [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deletion of /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_del complete
Oct 07 14:23:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716028951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.684 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.701 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.704 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.750 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.750 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.751 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.751 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.752 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.752 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.753 2 INFO nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.753 2 DEBUG oslo.service.loopingcall [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.755 2 DEBUG nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:23:27 compute-0 nova_compute[259550]: 2025-10-07 14:23:27.755 2 DEBUG nova.network.neutron [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:23:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 07 14:23:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/716028951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979835019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.168 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.169 2 DEBUG nova.virt.libvirt.vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:22Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.169 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.170 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.171 2 DEBUG nova.objects.instance [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.184 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <uuid>49805eb1-6f40-48f8-bcdc-de12de83733b</uuid>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <name>instance-00000061</name>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-1173658225</nova:name>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:23:27</nova:creationTime>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <nova:port uuid="853f22d9-282d-4428-a55e-f35727720eb3">
Oct 07 14:23:28 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <system>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="serial">49805eb1-6f40-48f8-bcdc-de12de83733b</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="uuid">49805eb1-6f40-48f8-bcdc-de12de83733b</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </system>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <os>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </os>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <features>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </features>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/49805eb1-6f40-48f8-bcdc-de12de83733b_disk">
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config">
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:28 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:fe:c7:be"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <target dev="tap853f22d9-28"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/console.log" append="off"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <video>
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </video>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:23:28 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:23:28 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:23:28 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:23:28 compute-0 nova_compute[259550]: </domain>
Oct 07 14:23:28 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.184 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Preparing to wait for external event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG nova.virt.libvirt.vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:22Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG os_vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap853f22d9-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap853f22d9-28, col_values=(('external_ids', {'iface-id': '853f22d9-282d-4428-a55e-f35727720eb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c7:be', 'vm-uuid': '49805eb1-6f40-48f8-bcdc-de12de83733b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:28 compute-0 NetworkManager[44949]: <info>  [1759847008.1945] manager: (tap853f22d9-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.200 2 INFO os_vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28')
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:fe:c7:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.258 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Using config drive
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.276 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.453 2 DEBUG nova.network.neutron [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.477 2 INFO nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 0.72 seconds to deallocate network for instance.
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.541 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.542 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:28 compute-0 nova_compute[259550]: 2025-10-07 14:23:28.638 2 DEBUG oslo_concurrency.processutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.008 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating config drive at /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.016 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfh96fy2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.059 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updated VIF entry in instance network info cache for port 853f22d9-282d-4428-a55e-f35727720eb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.060 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:29 compute-0 ceph-mon[74295]: pgmap v1794: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 07 14:23:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2979835019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.078 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481142845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.121 2 DEBUG oslo_concurrency.processutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.128 2 DEBUG nova.compute.provider_tree [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.145 2 DEBUG nova.scheduler.client.report [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.167 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.171 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfh96fy2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.201 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.205 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.261 2 INFO nova.scheduler.client.report [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Deleted allocations for instance ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.324 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.401 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.402 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deleting local config drive /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config because it was imported into RBD.
Oct 07 14:23:29 compute-0 kernel: tap853f22d9-28: entered promiscuous mode
Oct 07 14:23:29 compute-0 NetworkManager[44949]: <info>  [1759847009.4497] manager: (tap853f22d9-28): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Oct 07 14:23:29 compute-0 systemd-udevd[355301]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:29 compute-0 ovn_controller[151684]: 2025-10-07T14:23:29Z|00992|binding|INFO|Claiming lport 853f22d9-282d-4428-a55e-f35727720eb3 for this chassis.
Oct 07 14:23:29 compute-0 ovn_controller[151684]: 2025-10-07T14:23:29Z|00993|binding|INFO|853f22d9-282d-4428-a55e-f35727720eb3: Claiming fa:16:3e:fe:c7:be 10.100.0.14
Oct 07 14:23:29 compute-0 NetworkManager[44949]: <info>  [1759847009.4615] device (tap853f22d9-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:23:29 compute-0 NetworkManager[44949]: <info>  [1759847009.4635] device (tap853f22d9-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.461 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c7:be 10.100.0.14'], port_security=['fa:16:3e:fe:c7:be 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49805eb1-6f40-48f8-bcdc-de12de83733b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=853f22d9-282d-4428-a55e-f35727720eb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.465 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 853f22d9-282d-4428-a55e-f35727720eb3 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.467 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:29 compute-0 ovn_controller[151684]: 2025-10-07T14:23:29Z|00994|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 ovn-installed in OVS
Oct 07 14:23:29 compute-0 ovn_controller[151684]: 2025-10-07T14:23:29Z|00995|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 up in Southbound
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[231b6963-808e-4986-8e4d-c9145e4fc11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 systemd-machined[214580]: New machine qemu-120-instance-00000061.
Oct 07 14:23:29 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000061.
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.534 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a6196-f744-4d70-b4d9-b7e496d9c7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[850d45a5-b9ca-418c-a46e-4056b49d173a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.571 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b51e44e2-d413-4675-befe-7ced4286667b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3698b8-cc4e-41fe-a6d6-43da711f7253]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 26, 'rx_bytes': 958, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 26, 'rx_bytes': 958, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355563, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.614 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38582979-0192-431a-bd4c-b6c7ac44d02d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355564, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355564, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.617 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.620 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.728 2 DEBUG nova.compute.manager [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.728 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.729 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.729 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.730 2 DEBUG nova.compute.manager [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Processing event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.830 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.832 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.832 2 WARNING nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received unexpected event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with vm_state deleted and task_state None.
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.832 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-deleted-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 193 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.990 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846994.989657, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:29 compute-0 nova_compute[259550]: 2025-10-07 14:23:29.991 2 INFO nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Stopped (Lifecycle Event)
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.010 2 DEBUG nova.compute.manager [None req-2718f535-f24e-41e9-bdc6-f6041534af05 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/481142845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.776 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7754915, 49805eb1-6f40-48f8-bcdc-de12de83733b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.776 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Started (Lifecycle Event)
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.779 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.783 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.788 2 INFO nova.virt.libvirt.driver [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance spawned successfully.
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.789 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.815 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.823 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.824 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.825 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.825 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.826 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.826 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.850 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.851 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7757099, 49805eb1-6f40-48f8-bcdc-de12de83733b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Paused (Lifecycle Event)
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.880 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.885 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7825274, 49805eb1-6f40-48f8-bcdc-de12de83733b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.886 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Resumed (Lifecycle Event)
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.889 2 INFO nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 7.91 seconds to spawn the instance on the hypervisor.
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.890 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.928 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.952 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.962 2 INFO nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 8.92 seconds to build instance.
Oct 07 14:23:30 compute-0 nova_compute[259550]: 2025-10-07 14:23:30.979 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:31 compute-0 ceph-mon[74295]: pgmap v1795: 305 pgs: 305 active+clean; 193 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Oct 07 14:23:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.840 2 DEBUG nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:31 compute-0 nova_compute[259550]: 2025-10-07 14:23:31.842 2 WARNING nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received unexpected event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with vm_state active and task_state None.
Oct 07 14:23:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106732678908641 of space, bias 1.0, pg target 0.33201980367259226 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:23:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:23:32 compute-0 ovn_controller[151684]: 2025-10-07T14:23:32Z|00996|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:23:32 compute-0 nova_compute[259550]: 2025-10-07 14:23:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:23:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:23:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:23:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:23:33 compute-0 ceph-mon[74295]: pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct 07 14:23:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:23:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:23:33 compute-0 nova_compute[259550]: 2025-10-07 14:23:33.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Oct 07 14:23:34 compute-0 ceph-mon[74295]: pgmap v1797: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.969 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.970 2 INFO nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Terminating instance
Oct 07 14:23:34 compute-0 nova_compute[259550]: 2025-10-07 14:23:34.970 2 DEBUG nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:23:35 compute-0 kernel: tap853f22d9-28 (unregistering): left promiscuous mode
Oct 07 14:23:35 compute-0 NetworkManager[44949]: <info>  [1759847015.0044] device (tap853f22d9-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:23:35 compute-0 ovn_controller[151684]: 2025-10-07T14:23:35Z|00997|binding|INFO|Releasing lport 853f22d9-282d-4428-a55e-f35727720eb3 from this chassis (sb_readonly=0)
Oct 07 14:23:35 compute-0 ovn_controller[151684]: 2025-10-07T14:23:35Z|00998|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 down in Southbound
Oct 07 14:23:35 compute-0 ovn_controller[151684]: 2025-10-07T14:23:35Z|00999|binding|INFO|Removing iface tap853f22d9-28 ovn-installed in OVS
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.025 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c7:be 10.100.0.14'], port_security=['fa:16:3e:fe:c7:be 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49805eb1-6f40-48f8-bcdc-de12de83733b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=853f22d9-282d-4428-a55e-f35727720eb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.026 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 853f22d9-282d-4428-a55e-f35727720eb3 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.027 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.050 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0db35d-7ba2-484e-9f55-a856fe0a16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 07 14:23:35 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Consumed 5.450s CPU time.
Oct 07 14:23:35 compute-0 systemd-machined[214580]: Machine qemu-120-instance-00000061 terminated.
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.079 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[903e6d7c-e7e2-4036-9a23-593c6d6e2821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.082 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[59192d7e-a169-4d18-b767-db5a1011e37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.107 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77ebead3-4484-4ddb-ac1f-ae4b07f6f84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.123 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae081852-b359-4e43-be4f-77da3a03ffc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 28, 'rx_bytes': 958, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 28, 'rx_bytes': 958, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355620, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.140 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed48e92-a2d2-4030-8c5e-1be1e51338d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355621, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355621, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.141 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.211 2 INFO nova.virt.libvirt.driver [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance destroyed successfully.
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.212 2 DEBUG nova.objects.instance [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.236 2 DEBUG nova.virt.libvirt.vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:33Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.237 2 DEBUG nova.network.os_vif_util [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.237 2 DEBUG nova.network.os_vif_util [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.238 2 DEBUG os_vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap853f22d9-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.245 2 INFO os_vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28')
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.262 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.264 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.641 2 INFO nova.virt.libvirt.driver [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deleting instance files /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b_del
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.641 2 INFO nova.virt.libvirt.driver [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deletion of /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b_del complete
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.690 2 INFO nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG oslo.service.loopingcall [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG nova.network.neutron [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:35 compute-0 nova_compute[259550]: 2025-10-07 14:23:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 984 KiB/s wr, 248 op/s
Oct 07 14:23:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.767 2 DEBUG nova.network.neutron [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.788 2 INFO nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 1.10 seconds to deallocate network for instance.
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.836 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.837 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.889 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.926 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.927 2 DEBUG nova.compute.provider_tree [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.947 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:23:36 compute-0 nova_compute[259550]: 2025-10-07 14:23:36.968 2 DEBUG nova.compute.manager [req-170d5577-c9a5-47e6-8ae6-a66098fe630f req-9e15b1ff-3cc9-410e-ac6f-8a7a4baf5bb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-deleted-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:37 compute-0 ceph-mon[74295]: pgmap v1798: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 984 KiB/s wr, 248 op/s
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.052 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.112 2 DEBUG oslo_concurrency.processutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.430 2 DEBUG nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.431 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.432 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.432 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.433 2 DEBUG nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.433 2 WARNING nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received unexpected event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with vm_state deleted and task_state None.
Oct 07 14:23:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2693103898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.615 2 DEBUG oslo_concurrency.processutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.623 2 DEBUG nova.compute.provider_tree [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.774 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.798 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.837 2 INFO nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 49805eb1-6f40-48f8-bcdc-de12de83733b
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.896 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:37 compute-0 nova_compute[259550]: 2025-10-07 14:23:37.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:23:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 192 op/s
Oct 07 14:23:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2693103898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:39 compute-0 ceph-mon[74295]: pgmap v1799: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 192 op/s
Oct 07 14:23:39 compute-0 podman[355676]: 2025-10-07 14:23:39.084999861 +0000 UTC m=+0.067767191 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct 07 14:23:39 compute-0 podman[355675]: 2025-10-07 14:23:39.089716408 +0000 UTC m=+0.076946327 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:23:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 209 op/s
Oct 07 14:23:40 compute-0 nova_compute[259550]: 2025-10-07 14:23:40.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:40 compute-0 nova_compute[259550]: 2025-10-07 14:23:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.017 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.017 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:41 compute-0 ceph-mon[74295]: pgmap v1800: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 209 op/s
Oct 07 14:23:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334021485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.565 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.566 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.752 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.94264221191406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.806 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 2f0de516-cf33-49b6-b036-aee8c2f72943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.806 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.807 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:23:41 compute-0 nova_compute[259550]: 2025-10-07 14:23:41.845 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 121 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.027 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.028 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.040 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847007.039891, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.041 2 INFO nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Stopped (Lifecycle Event)
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.048 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:23:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/334021485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.071 2 DEBUG nova.compute.manager [None req-b74820bf-1d1c-480a-b4a0-adda141784c1 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.121 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079182429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.367 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.375 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.416 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.446 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.447 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.447 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.454 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.455 2 INFO nova.compute.claims [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:23:42 compute-0 nova_compute[259550]: 2025-10-07 14:23:42.602 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380457896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.066 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.071 2 DEBUG nova.compute.provider_tree [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:43 compute-0 ceph-mon[74295]: pgmap v1801: 305 pgs: 305 active+clean; 121 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct 07 14:23:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4079182429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1380457896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.086 2 DEBUG nova.scheduler.client.report [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.106 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.107 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.150 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.150 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.167 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.188 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.283 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.284 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.285 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating image(s)
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.306 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.329 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.351 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.354 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.430 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.432 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.432 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.433 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.453 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.457 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2ec4d149-b57c-4821-9852-5485f6279472_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.494 2 DEBUG nova.policy [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.498 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.499 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.732 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2ec4d149-b57c-4821-9852-5485f6279472_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.794 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.894 2 DEBUG nova.objects.instance [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.909 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.909 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Ensure instance console log exists: /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:43 compute-0 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 133 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 329 KiB/s wr, 138 op/s
Oct 07 14:23:44 compute-0 ceph-mon[74295]: pgmap v1802: 305 pgs: 305 active+clean; 133 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 329 KiB/s wr, 138 op/s
Oct 07 14:23:44 compute-0 nova_compute[259550]: 2025-10-07 14:23:44.182 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Successfully created port: 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:23:44 compute-0 nova_compute[259550]: 2025-10-07 14:23:44.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.533 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Successfully updated port: 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.700 2 DEBUG nova.compute.manager [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-changed-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.701 2 DEBUG nova.compute.manager [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Refreshing instance network info cache due to event network-changed-3cf557fc-1ada-4cc7-9d31-e123f70742b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.701 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:45 compute-0 nova_compute[259550]: 2025-10-07 14:23:45.790 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:23:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Oct 07 14:23:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.888 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.911 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.911 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance network_info: |[{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.912 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.912 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Refreshing network info cache for port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.918 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start _get_guest_xml network_info=[{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.926 2 WARNING nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.931 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.932 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.936 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.937 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.937 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:23:46 compute-0 nova_compute[259550]: 2025-10-07 14:23:46.944 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:47 compute-0 ceph-mon[74295]: pgmap v1803: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Oct 07 14:23:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386131731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.448 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.475 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.479 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:23:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425748012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.969 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.970 2 DEBUG nova.virt.libvirt.vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-ServersTestJSON-server-1268944220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:43Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.971 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.972 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.973 2 DEBUG nova.objects.instance [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:23:47 compute-0 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:23:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct 07 14:23:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/386131731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3425748012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.176 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <uuid>2ec4d149-b57c-4821-9852-5485f6279472</uuid>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <name>instance-00000062</name>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersTestJSON-server-1268944220</nova:name>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:23:46</nova:creationTime>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <nova:port uuid="3cf557fc-1ada-4cc7-9d31-e123f70742b7">
Oct 07 14:23:48 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <system>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="serial">2ec4d149-b57c-4821-9852-5485f6279472</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="uuid">2ec4d149-b57c-4821-9852-5485f6279472</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </system>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <os>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </os>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <features>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </features>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2ec4d149-b57c-4821-9852-5485f6279472_disk">
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/2ec4d149-b57c-4821-9852-5485f6279472_disk.config">
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </source>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:23:48 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8c:b5:9a"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <target dev="tap3cf557fc-1a"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/console.log" append="off"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <video>
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </video>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:23:48 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:23:48 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:23:48 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:23:48 compute-0 nova_compute[259550]: </domain>
Oct 07 14:23:48 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Preparing to wait for external event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.178 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.178 2 DEBUG nova.virt.libvirt.vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-ServersTestJSON-server-1268944220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:43Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.179 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.179 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.180 2 DEBUG os_vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cf557fc-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cf557fc-1a, col_values=(('external_ids', {'iface-id': '3cf557fc-1ada-4cc7-9d31-e123f70742b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:b5:9a', 'vm-uuid': '2ec4d149-b57c-4821-9852-5485f6279472'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:48 compute-0 NetworkManager[44949]: <info>  [1759847028.1869] manager: (tap3cf557fc-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.192 2 INFO os_vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a')
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.426 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.444 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.445 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.445 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:8c:b5:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.446 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Using config drive
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.476 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:23:48 compute-0 nova_compute[259550]: 2025-10-07 14:23:48.801 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:49 compute-0 ceph-mon[74295]: pgmap v1804: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.270 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating config drive at /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.277 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufr3q68x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.421 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufr3q68x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.448 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.452 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config 2ec4d149-b57c-4821-9852-5485f6279472_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.619 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config 2ec4d149-b57c-4821-9852-5485f6279472_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.620 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deleting local config drive /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config because it was imported into RBD.
Oct 07 14:23:49 compute-0 kernel: tap3cf557fc-1a: entered promiscuous mode
Oct 07 14:23:49 compute-0 NetworkManager[44949]: <info>  [1759847029.6656] manager: (tap3cf557fc-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 07 14:23:49 compute-0 ovn_controller[151684]: 2025-10-07T14:23:49Z|01000|binding|INFO|Claiming lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 for this chassis.
Oct 07 14:23:49 compute-0 ovn_controller[151684]: 2025-10-07T14:23:49Z|01001|binding|INFO|3cf557fc-1ada-4cc7-9d31-e123f70742b7: Claiming fa:16:3e:8c:b5:9a 10.100.0.3
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.674 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b5:9a 10.100.0.3'], port_security=['fa:16:3e:8c:b5:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2ec4d149-b57c-4821-9852-5485f6279472', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3cf557fc-1ada-4cc7-9d31-e123f70742b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.675 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.676 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:23:49 compute-0 ovn_controller[151684]: 2025-10-07T14:23:49Z|01002|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 ovn-installed in OVS
Oct 07 14:23:49 compute-0 ovn_controller[151684]: 2025-10-07T14:23:49Z|01003|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 up in Southbound
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[996c63c3-fce5-4123-96a0-61d222128f2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 systemd-udevd[356084]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:23:49 compute-0 systemd-machined[214580]: New machine qemu-121-instance-00000062.
Oct 07 14:23:49 compute-0 NetworkManager[44949]: <info>  [1759847029.7087] device (tap3cf557fc-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:23:49 compute-0 NetworkManager[44949]: <info>  [1759847029.7098] device (tap3cf557fc-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:23:49 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.719 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[762b96ba-5655-4ca1-9fd8-92c4d4d84953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.722 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a99584-167b-4303-b8bc-05323854a8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.748 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d35285-90de-4f50-b749-95dd40902924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90f011dd-63f8-43b2-ac41-e58daff9ad2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 30, 'rx_bytes': 958, 'tx_bytes': 1452, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 30, 'rx_bytes': 958, 'tx_bytes': 1452, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356093, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[248f60a5-2561-4042-a144-9e885d45d1d2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356097, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356097, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.784 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:49 compute-0 nova_compute[259550]: 2025-10-07 14:23:49.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.788 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:23:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:23:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.035 2 DEBUG nova.compute.manager [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.037 2 DEBUG nova.compute.manager [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Processing event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:23:50 compute-0 ceph-mon[74295]: pgmap v1805: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847015.2094228, 49805eb1-6f40-48f8-bcdc-de12de83733b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.210 2 INFO nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Stopped (Lifecycle Event)
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.240 2 DEBUG nova.compute.manager [None req-0a51322a-d7b4-4ddd-8d52-941605354967 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.460 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updated VIF entry in instance network info cache for port 3cf557fc-1ada-4cc7-9d31-e123f70742b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.461 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.594 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5936222, 2ec4d149-b57c-4821-9852-5485f6279472 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.594 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Started (Lifecycle Event)
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.596 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.599 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.602 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance spawned successfully.
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.602 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.633 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.741 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.747 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.750 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.751 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.751 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.849 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.849 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5937285, 2ec4d149-b57c-4821-9852-5485f6279472 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.850 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Paused (Lifecycle Event)
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.878 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.881 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5989666, 2ec4d149-b57c-4821-9852-5485f6279472 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.881 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Resumed (Lifecycle Event)
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.898 2 INFO nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 7.61 seconds to spawn the instance on the hypervisor.
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.898 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.906 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.909 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.949 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.969 2 INFO nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 8.87 seconds to build instance.
Oct 07 14:23:50 compute-0 nova_compute[259550]: 2025-10-07 14:23:50.987 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:51 compute-0 nova_compute[259550]: 2025-10-07 14:23:51.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.053 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:23:52 compute-0 podman[356141]: 2025-10-07 14:23:52.069711929 +0000 UTC m=+0.055905734 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.073 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.074 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.074 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:23:52 compute-0 podman[356142]: 2025-10-07 14:23:52.104844488 +0000 UTC m=+0.088807053 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.873 2 DEBUG nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:23:52 compute-0 nova_compute[259550]: 2025-10-07 14:23:52.875 2 WARNING nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state active and task_state None.
Oct 07 14:23:53 compute-0 ceph-mon[74295]: pgmap v1806: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:23:53 compute-0 nova_compute[259550]: 2025-10-07 14:23:53.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 07 14:23:55 compute-0 ceph-mon[74295]: pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.681 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.681 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.682 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.687 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.688 2 DEBUG nova.objects.instance [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'flavor' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.716 2 DEBUG nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.764 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.765 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.783 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.862 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.862 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.871 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:23:55 compute-0 nova_compute[259550]: 2025-10-07 14:23:55.871 2 INFO nova.compute.claims [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:23:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.026 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.336153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036336198, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 255, "total_data_size": 579062, "memory_usage": 590048, "flush_reason": "Manual Compaction"}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036342740, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 573604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37737, "largest_seqno": 38314, "table_properties": {"data_size": 570471, "index_size": 1041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7225, "raw_average_key_size": 18, "raw_value_size": 564172, "raw_average_value_size": 1454, "num_data_blocks": 46, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847001, "oldest_key_time": 1759847001, "file_creation_time": 1759847036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 6617 microseconds, and 2374 cpu microseconds.
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.342771) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 573604 bytes OK
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.342785) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344566) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344577) EVENT_LOG_v1 {"time_micros": 1759847036344573, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 575841, prev total WAL file size 575841, number of live WAL files 2.
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(560KB)], [83(7910KB)]
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036345029, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8673997, "oldest_snapshot_seqno": -1}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6111 keys, 8542906 bytes, temperature: kUnknown
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036386519, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8542906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502251, "index_size": 24289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156130, "raw_average_key_size": 25, "raw_value_size": 8392838, "raw_average_value_size": 1373, "num_data_blocks": 976, "num_entries": 6111, "num_filter_entries": 6111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.386769) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8542906 bytes
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.388125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.7 rd, 205.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(30.0) write-amplify(14.9) OK, records in: 6632, records dropped: 521 output_compression: NoCompression
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.388140) EVENT_LOG_v1 {"time_micros": 1759847036388133, "job": 48, "event": "compaction_finished", "compaction_time_micros": 41559, "compaction_time_cpu_micros": 18836, "output_level": 6, "num_output_files": 1, "total_output_size": 8542906, "num_input_records": 6632, "num_output_records": 6111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036388353, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036389498, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:23:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:23:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903994116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.447 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.453 2 DEBUG nova.compute.provider_tree [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.474 2 DEBUG nova.scheduler.client.report [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.493 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.494 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.540 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.541 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.558 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.572 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.661 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.662 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.663 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating image(s)
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.683 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.706 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.727 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.730 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.804 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.805 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.806 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.806 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.826 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:23:56 compute-0 nova_compute[259550]: 2025-10-07 14:23:56.829 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.026 2 DEBUG nova.policy [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b84ccbe54bc04b6e9b1e9a3ec69cae9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:23:57 compute-0 ceph-mon[74295]: pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct 07 14:23:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2903994116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.105 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.159 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] resizing rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.249 2 DEBUG nova.objects.instance [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.264 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.264 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Ensure instance console log exists: /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:23:57 compute-0 nova_compute[259550]: 2025-10-07 14:23:57.853 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Successfully created port: 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:23:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 778 KiB/s wr, 84 op/s
Oct 07 14:23:58 compute-0 nova_compute[259550]: 2025-10-07 14:23:58.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:23:59 compute-0 ceph-mon[74295]: pgmap v1809: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 778 KiB/s wr, 84 op/s
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.332 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Successfully updated port: 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.346 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.347 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquired lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.347 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.449 2 DEBUG nova.compute.manager [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-changed-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.450 2 DEBUG nova.compute.manager [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Refreshing instance network info cache due to event network-changed-691fda3f-3a00-4049-9e23-b7ce0a4d3d24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:23:59 compute-0 nova_compute[259550]: 2025-10-07 14:23:59.451 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:23:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 07 14:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.056 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:00 compute-0 nova_compute[259550]: 2025-10-07 14:24:00.090 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:24:00 compute-0 ceph-mon[74295]: pgmap v1810: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 07 14:24:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.398 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.420 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Releasing lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance network_info: |[{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Refreshing network info cache for port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.424 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start _get_guest_xml network_info=[{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.428 2 WARNING nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.437 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.439 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.443 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.443 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.450 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437245736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.896 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.919 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:01 compute-0 nova_compute[259550]: 2025-10-07 14:24:01.923 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1437245736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 213 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:24:02 compute-0 ovn_controller[151684]: 2025-10-07T14:24:02Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:b5:9a 10.100.0.3
Oct 07 14:24:02 compute-0 ovn_controller[151684]: 2025-10-07T14:24:02Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:b5:9a 10.100.0.3
Oct 07 14:24:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909468088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.363 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.365 2 DEBUG nova.virt.libvirt.vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:56Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.366 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.367 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.368 2 DEBUG nova.objects.instance [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.385 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <uuid>77a1261a-cfc4-44f8-8353-fce48dff65e6</uuid>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <name>instance-00000063</name>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-359470232</nova:name>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:24:01</nova:creationTime>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:user uuid="b84ccbe54bc04b6e9b1e9a3ec69cae9c">tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member</nova:user>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:project uuid="85bd6ccdfa5f4d8b8afbb83b034f15f7">tempest-ServersNegativeTestMultiTenantJSON-1133687348</nova:project>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <nova:port uuid="691fda3f-3a00-4049-9e23-b7ce0a4d3d24">
Oct 07 14:24:02 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <system>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="serial">77a1261a-cfc4-44f8-8353-fce48dff65e6</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="uuid">77a1261a-cfc4-44f8-8353-fce48dff65e6</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </system>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <os>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </os>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <features>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </features>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/77a1261a-cfc4-44f8-8353-fce48dff65e6_disk">
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config">
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:7a:ba:dc"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <target dev="tap691fda3f-3a"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/console.log" append="off"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <video>
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </video>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:24:02 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:24:02 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:24:02 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:24:02 compute-0 nova_compute[259550]: </domain>
Oct 07 14:24:02 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.386 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Preparing to wait for external event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.388 2 DEBUG nova.virt.libvirt.vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:56Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.388 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.389 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.389 2 DEBUG os_vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691fda3f-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap691fda3f-3a, col_values=(('external_ids', {'iface-id': '691fda3f-3a00-4049-9e23-b7ce0a4d3d24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:ba:dc', 'vm-uuid': '77a1261a-cfc4-44f8-8353-fce48dff65e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:02 compute-0 NetworkManager[44949]: <info>  [1759847042.3964] manager: (tap691fda3f-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.406 2 INFO os_vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a')
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.462 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.463 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.463 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No VIF found with MAC fa:16:3e:7a:ba:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.464 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Using config drive
Oct 07 14:24:02 compute-0 nova_compute[259550]: 2025-10-07 14:24:02.491 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:02 compute-0 ceph-mon[74295]: pgmap v1811: 305 pgs: 305 active+clean; 213 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:24:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1909468088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.028 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating config drive at /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.034 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0hylty0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.189 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0hylty0" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.229 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.233 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.389 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.390 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deleting local config drive /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config because it was imported into RBD.
Oct 07 14:24:03 compute-0 kernel: tap691fda3f-3a: entered promiscuous mode
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.4381] manager: (tap691fda3f-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Oct 07 14:24:03 compute-0 ovn_controller[151684]: 2025-10-07T14:24:03Z|01004|binding|INFO|Claiming lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for this chassis.
Oct 07 14:24:03 compute-0 ovn_controller[151684]: 2025-10-07T14:24:03Z|01005|binding|INFO|691fda3f-3a00-4049-9e23-b7ce0a4d3d24: Claiming fa:16:3e:7a:ba:dc 10.100.0.12
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.453 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ba:dc 10.100.0.12'], port_security=['fa:16:3e:7a:ba:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '77a1261a-cfc4-44f8-8353-fce48dff65e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '616fd0c8-90f8-4e77-b261-e8ac5fb3da26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5297e51f-b3a6-4510-b470-81311044f72c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=691fda3f-3a00-4049-9e23-b7ce0a4d3d24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.454 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 in datapath 9229a572-88fa-42e8-a77f-f7ab29bba83d bound to our chassis
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.455 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9229a572-88fa-42e8-a77f-f7ab29bba83d
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.471 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93c9602d-f2ed-4f69-81b4-48d5227b0462]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.472 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9229a572-81 in ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:24:03 compute-0 systemd-machined[214580]: New machine qemu-122-instance-00000063.
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.475 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9229a572-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e59aafbb-985b-44dc-abce-74309a4f7d72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.476 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9a3090-c17e-4404-a109-216286afd136]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 systemd-udevd[356510]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:24:03 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000063.
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.493 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a5e48e-4e5b-4bba-9add-0725fb2804e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.4973] device (tap691fda3f-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.5002] device (tap691fda3f-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.501 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updated VIF entry in instance network info cache for port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.502 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.517 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a15cd46f-4cd8-48de-8ccd-54f8de1fd976]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.547 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5905524d-0def-4358-97bb-e609f2751fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_controller[151684]: 2025-10-07T14:24:03Z|01006|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 ovn-installed in OVS
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 ovn_controller[151684]: 2025-10-07T14:24:03Z|01007|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 up in Southbound
Oct 07 14:24:03 compute-0 systemd-udevd[356513]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.5568] manager: (tap9229a572-80): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.556 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1f25cb-c2d9-49d5-93cc-6921992a87a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.587 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a61c34e3-bcaf-4245-aeb8-01026d9d3d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.592 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[09d0f9d2-1247-4c4b-b25f-ba4ff0ac35ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.6127] device (tap9229a572-80): carrier: link connected
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a188e53-cac1-4596-827e-046f74f8d858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d472c97e-1a5b-4747-a305-21036fe2ac4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9229a572-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:f2:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762717, 'reachable_time': 33902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356542, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad050048-5864-4ae9-aa05-6ef5bdc3d0e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:f2aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 762717, 'tstamp': 762717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356543, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c25fd2ca-301f-4a83-a335-dec368e21133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9229a572-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:f2:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762717, 'reachable_time': 33902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356551, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6a4023-1aba-4146-8b80-d40bb9e77fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba3df04-8ef2-414b-b226-235955f99db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9229a572-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9229a572-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:03 compute-0 NetworkManager[44949]: <info>  [1759847043.7607] manager: (tap9229a572-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 07 14:24:03 compute-0 kernel: tap9229a572-80: entered promiscuous mode
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9229a572-80, col_values=(('external_ids', {'iface-id': '467816a9-10f3-4a54-9b50-5cd9f2a12df0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:03 compute-0 ovn_controller[151684]: 2025-10-07T14:24:03Z|01008|binding|INFO|Releasing lport 467816a9-10f3-4a54-9b50-5cd9f2a12df0 from this chassis (sb_readonly=0)
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.770 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.771 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd09470-f222-424c-9df0-ffac9dc8d0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.772 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-9229a572-88fa-42e8-a77f-f7ab29bba83d
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 9229a572-88fa-42e8-a77f-f7ab29bba83d
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:24:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.773 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'env', 'PROCESS_TAG=haproxy-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9229a572-88fa-42e8-a77f-f7ab29bba83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG nova.compute.manager [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.836 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:03 compute-0 nova_compute[259550]: 2025-10-07 14:24:03.836 2 DEBUG nova.compute.manager [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Processing event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:24:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 230 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct 07 14:24:04 compute-0 podman[356618]: 2025-10-07 14:24:04.185467427 +0000 UTC m=+0.065937463 container create 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:24:04 compute-0 systemd[1]: Started libpod-conmon-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope.
Oct 07 14:24:04 compute-0 podman[356618]: 2025-10-07 14:24:04.146741333 +0000 UTC m=+0.027211399 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:24:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5be1eceb21f86cca23ad9b2f755736db45630c487afa93af14943591b01768/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.263 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.264 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.263533, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.265 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Started (Lifecycle Event)
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.267 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.270 2 INFO nova.virt.libvirt.driver [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance spawned successfully.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.270 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:24:04 compute-0 podman[356618]: 2025-10-07 14:24:04.273019417 +0000 UTC m=+0.153489473 container init 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:24:04 compute-0 podman[356618]: 2025-10-07 14:24:04.280477736 +0000 UTC m=+0.160947772 container start 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.293 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.298 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.298 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.299 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.299 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.300 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.300 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:04 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : New worker (356640) forked
Oct 07 14:24:04 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : Loading success.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.322 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.323 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.2638097, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.323 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Paused (Lifecycle Event)
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.361 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.365 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.2667992, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.365 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Resumed (Lifecycle Event)
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.386 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.389 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.396 2 INFO nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 7.73 seconds to spawn the instance on the hypervisor.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.396 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.403 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.453 2 INFO nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 8.62 seconds to build instance.
Oct 07 14:24:04 compute-0 nova_compute[259550]: 2025-10-07 14:24:04.477 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:05 compute-0 ceph-mon[74295]: pgmap v1812: 305 pgs: 305 active+clean; 230 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct 07 14:24:05 compute-0 nova_compute[259550]: 2025-10-07 14:24:05.799 2 DEBUG nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:24:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.260 2 DEBUG nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.261 2 WARNING nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received unexpected event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with vm_state active and task_state None.
Oct 07 14:24:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:06 compute-0 nova_compute[259550]: 2025-10-07 14:24:06.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:07 compute-0 ceph-mon[74295]: pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Oct 07 14:24:07 compute-0 nova_compute[259550]: 2025-10-07 14:24:07.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct 07 14:24:08 compute-0 kernel: tap3cf557fc-1a (unregistering): left promiscuous mode
Oct 07 14:24:08 compute-0 NetworkManager[44949]: <info>  [1759847048.0560] device (tap3cf557fc-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01009|binding|INFO|Releasing lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 from this chassis (sb_readonly=0)
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01010|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 down in Southbound
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01011|binding|INFO|Removing iface tap3cf557fc-1a ovn-installed in OVS
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b5:9a 10.100.0.3'], port_security=['fa:16:3e:8c:b5:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2ec4d149-b57c-4821-9852-5485f6279472', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3cf557fc-1ada-4cc7-9d31-e123f70742b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.075 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.077 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.094 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8370c25-7e44-4378-9a7b-f560bcb2a603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.127 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[617461e0-20a3-4b4c-8bdf-9cd3f4cbfdab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.129 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca97522-1415-4b20-a27c-930e2c2a9592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 07 14:24:08 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 13.258s CPU time.
Oct 07 14:24:08 compute-0 systemd-machined[214580]: Machine qemu-121-instance-00000062 terminated.
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.158 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[97f04613-9840-426a-816d-289843c5694a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.177 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7429b08a-bf83-4d86-b709-7c871b5617db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 32, 'rx_bytes': 958, 'tx_bytes': 1536, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 32, 'rx_bytes': 958, 'tx_bytes': 1536, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356660, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.197 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4142ba-4627-4bc2-87bc-3a6c79d7fe74]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356661, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356661, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.199 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.206 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.335 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.335 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.336 2 INFO nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Terminating instance
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.337 2 DEBUG nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:24:08 compute-0 kernel: tap691fda3f-3a (unregistering): left promiscuous mode
Oct 07 14:24:08 compute-0 NetworkManager[44949]: <info>  [1759847048.3869] device (tap691fda3f-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01012|binding|INFO|Releasing lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 from this chassis (sb_readonly=0)
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01013|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 down in Southbound
Oct 07 14:24:08 compute-0 ovn_controller[151684]: 2025-10-07T14:24:08Z|01014|binding|INFO|Removing iface tap691fda3f-3a ovn-installed in OVS
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.405 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ba:dc 10.100.0.12'], port_security=['fa:16:3e:7a:ba:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '77a1261a-cfc4-44f8-8353-fce48dff65e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '616fd0c8-90f8-4e77-b261-e8ac5fb3da26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5297e51f-b3a6-4510-b470-81311044f72c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=691fda3f-3a00-4049-9e23-b7ce0a4d3d24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.406 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 in datapath 9229a572-88fa-42e8-a77f-f7ab29bba83d unbound from our chassis
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.407 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9229a572-88fa-42e8-a77f-f7ab29bba83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.408 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[856ae252-ed0f-4987-b769-326775ee18e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.408 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d namespace which is not needed anymore
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.418 2 DEBUG nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.418 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.419 2 WARNING nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state active and task_state powering-off.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 07 14:24:08 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Consumed 4.820s CPU time.
Oct 07 14:24:08 compute-0 systemd-machined[214580]: Machine qemu-122-instance-00000063 terminated.
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : haproxy version is 2.8.14-c23fe91
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : path to executable is /usr/sbin/haproxy
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : Exiting Master process...
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : Exiting Master process...
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [ALERT]    (356638) : Current worker (356640) exited with code 143 (Terminated)
Oct 07 14:24:08 compute-0 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : All workers exited. Exiting... (0)
Oct 07 14:24:08 compute-0 systemd[1]: libpod-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope: Deactivated successfully.
Oct 07 14:24:08 compute-0 conmon[356634]: conmon 3e19f4b541108fbdec24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope/container/memory.events
Oct 07 14:24:08 compute-0 podman[356696]: 2025-10-07 14:24:08.529457551 +0000 UTC m=+0.038996013 container died 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11-userdata-shm.mount: Deactivated successfully.
Oct 07 14:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5be1eceb21f86cca23ad9b2f755736db45630c487afa93af14943591b01768-merged.mount: Deactivated successfully.
Oct 07 14:24:08 compute-0 podman[356696]: 2025-10-07 14:24:08.572356718 +0000 UTC m=+0.081895190 container cleanup 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.574 2 INFO nova.virt.libvirt.driver [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance destroyed successfully.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.575 2 DEBUG nova.objects.instance [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'resources' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:08 compute-0 systemd[1]: libpod-conmon-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope: Deactivated successfully.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.596 2 DEBUG nova.virt.libvirt.vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:24:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:24:04Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.598 2 DEBUG nova.network.os_vif_util [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.599 2 DEBUG nova.network.os_vif_util [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.599 2 DEBUG os_vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.603 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691fda3f-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.611 2 INFO os_vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a')
Oct 07 14:24:08 compute-0 podman[356737]: 2025-10-07 14:24:08.639059439 +0000 UTC m=+0.044728096 container remove 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b19c0f83-2288-491c-90c9-648f7248a076]: (4, ('Tue Oct  7 02:24:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d (3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11)\n3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11\nTue Oct  7 02:24:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d (3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11)\n3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0770a22-2f3e-4168-8c1d-3f3cee7961c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9229a572-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 kernel: tap9229a572-80: left promiscuous mode
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.667 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e19c6f12-1d15-4730-91d8-092283e2fff6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fba4794a-3290-4708-a799-6fed94dd22e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13359dd2-a8d9-46d5-ba8a-674a4d46ec3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b03c20db-76da-4e14-b3a1-7f228bb86972]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762710, 'reachable_time': 30592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356774, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.726 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:24:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.726 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fba3a72a-7788-4453-bd8f-4f21b8e3b595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d9229a572\x2d88fa\x2d42e8\x2da77f\x2df7ab29bba83d.mount: Deactivated successfully.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.816 2 INFO nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance shutdown successfully after 13 seconds.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.823 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance destroyed successfully.
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.824 2 DEBUG nova.objects.instance [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.837 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.878 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.982 2 INFO nova.virt.libvirt.driver [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deleting instance files /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6_del
Oct 07 14:24:08 compute-0 nova_compute[259550]: 2025-10-07 14:24:08.983 2 INFO nova.virt.libvirt.driver [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deletion of /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6_del complete
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.067 2 INFO nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.067 2 DEBUG oslo.service.loopingcall [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.068 2 DEBUG nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.068 2 DEBUG nova.network.neutron [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:24:09 compute-0 ceph-mon[74295]: pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.629 2 DEBUG nova.network.neutron [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.650 2 INFO nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 0.58 seconds to deallocate network for instance.
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.699 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.700 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:09 compute-0 nova_compute[259550]: 2025-10-07 14:24:09.778 2 DEBUG oslo_concurrency.processutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 07 14:24:10 compute-0 podman[356797]: 2025-10-07 14:24:10.085899401 +0000 UTC m=+0.069825897 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:24:10 compute-0 podman[356796]: 2025-10-07 14:24:10.08623834 +0000 UTC m=+0.071998866 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:24:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873977040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.228 2 DEBUG oslo_concurrency.processutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.238 2 DEBUG nova.compute.provider_tree [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.257 2 DEBUG nova.scheduler.client.report [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.283 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.305 2 INFO nova.scheduler.client.report [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Deleted allocations for instance 77a1261a-cfc4-44f8-8353-fce48dff65e6
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.391 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.678 2 WARNING nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state stopped and task_state None.
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-deleted-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.952 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.952 2 DEBUG nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:10 compute-0 nova_compute[259550]: 2025-10-07 14:24:10.952 2 WARNING nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received unexpected event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with vm_state deleted and task_state None.
Oct 07 14:24:11 compute-0 ceph-mon[74295]: pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 07 14:24:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1873977040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.394 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.395 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.396 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.396 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.397 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.398 2 INFO nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Terminating instance
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.400 2 DEBUG nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.407 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance destroyed successfully.
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.408 2 DEBUG nova.objects.instance [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.424 2 DEBUG nova.virt.libvirt.vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-Íñstáñcé-1224174263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:24:09Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.425 2 DEBUG nova.network.os_vif_util [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.425 2 DEBUG nova.network.os_vif_util [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.426 2 DEBUG os_vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cf557fc-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.433 2 INFO os_vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a')
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.814 2 INFO nova.virt.libvirt.driver [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deleting instance files /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472_del
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.815 2 INFO nova.virt.libvirt.driver [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deletion of /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472_del complete
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.870 2 INFO nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 0.47 seconds to destroy the instance on the hypervisor.
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.871 2 DEBUG oslo.service.loopingcall [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.871 2 DEBUG nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:24:11 compute-0 nova_compute[259550]: 2025-10-07 14:24:11.872 2 DEBUG nova.network.neutron [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:24:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 233 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 187 op/s
Oct 07 14:24:13 compute-0 ceph-mon[74295]: pgmap v1816: 305 pgs: 305 active+clean; 233 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 187 op/s
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.323 2 DEBUG nova.network.neutron [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.338 2 INFO nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 1.47 seconds to deallocate network for instance.
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.392 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.392 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.479 2 DEBUG oslo_concurrency.processutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.528 2 DEBUG nova.compute.manager [req-631e8046-769f-41b6-a01f-13fb2d639a79 req-5eff4733-95bb-4e7b-b3d4-716c5c63b22c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-deleted-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947667113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.951 2 DEBUG oslo_concurrency.processutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.956 2 DEBUG nova.compute.provider_tree [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.970 2 DEBUG nova.scheduler.client.report [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:13 compute-0 nova_compute[259550]: 2025-10-07 14:24:13.988 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 156 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct 07 14:24:14 compute-0 nova_compute[259550]: 2025-10-07 14:24:14.013 2 INFO nova.scheduler.client.report [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 2ec4d149-b57c-4821-9852-5485f6279472
Oct 07 14:24:14 compute-0 nova_compute[259550]: 2025-10-07 14:24:14.073 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3947667113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:14 compute-0 ovn_controller[151684]: 2025-10-07T14:24:14Z|01015|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct 07 14:24:14 compute-0 nova_compute[259550]: 2025-10-07 14:24:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:15 compute-0 ceph-mon[74295]: pgmap v1817: 305 pgs: 305 active+clean; 156 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct 07 14:24:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 156 op/s
Oct 07 14:24:16 compute-0 ceph-mon[74295]: pgmap v1818: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 156 op/s
Oct 07 14:24:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.570 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.573 2 INFO nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Terminating instance
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.574 2 DEBUG nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:24:16 compute-0 kernel: tapd1f34f39-08 (unregistering): left promiscuous mode
Oct 07 14:24:16 compute-0 NetworkManager[44949]: <info>  [1759847056.6355] device (tapd1f34f39-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 ovn_controller[151684]: 2025-10-07T14:24:16Z|01016|binding|INFO|Releasing lport d1f34f39-0808-4a53-bf40-fff477dee819 from this chassis (sb_readonly=0)
Oct 07 14:24:16 compute-0 ovn_controller[151684]: 2025-10-07T14:24:16Z|01017|binding|INFO|Setting lport d1f34f39-0808-4a53-bf40-fff477dee819 down in Southbound
Oct 07 14:24:16 compute-0 ovn_controller[151684]: 2025-10-07T14:24:16Z|01018|binding|INFO|Removing iface tapd1f34f39-08 ovn-installed in OVS
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.650 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:9a:2b 10.100.0.4'], port_security=['fa:16:3e:f6:9a:2b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f0de516-cf33-49b6-b036-aee8c2f72943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d1f34f39-0808-4a53-bf40-fff477dee819) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d1f34f39-0808-4a53-bf40-fff477dee819 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.652 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e17b2921-aeb8-4554-be29-a22caac87f3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.653 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 namespace which is not needed anymore
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 07 14:24:16 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000058.scope: Consumed 18.436s CPU time.
Oct 07 14:24:16 compute-0 systemd-machined[214580]: Machine qemu-107-instance-00000058 terminated.
Oct 07 14:24:16 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : haproxy version is 2.8.14-c23fe91
Oct 07 14:24:16 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : path to executable is /usr/sbin/haproxy
Oct 07 14:24:16 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [WARNING]  (347512) : Exiting Master process...
Oct 07 14:24:16 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [ALERT]    (347512) : Current worker (347514) exited with code 143 (Terminated)
Oct 07 14:24:16 compute-0 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [WARNING]  (347512) : All workers exited. Exiting... (0)
Oct 07 14:24:16 compute-0 systemd[1]: libpod-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb.scope: Deactivated successfully.
Oct 07 14:24:16 compute-0 podman[356901]: 2025-10-07 14:24:16.786101753 +0000 UTC m=+0.043679398 container died b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.813 2 INFO nova.virt.libvirt.driver [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance destroyed successfully.
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.813 2 DEBUG nova.objects.instance [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb-userdata-shm.mount: Deactivated successfully.
Oct 07 14:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3a13a26d565d27b55a2c061c5f559046e2a2ef2ed17aeaf90edd70b5935596-merged.mount: Deactivated successfully.
Oct 07 14:24:16 compute-0 podman[356901]: 2025-10-07 14:24:16.825131336 +0000 UTC m=+0.082708971 container cleanup b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.832 2 DEBUG nova.virt.libvirt.vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-500502864',display_name='tempest-₡-500502864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--500502864',id=88,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-ann565o7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:01Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2f0de516-cf33-49b6-b036-aee8c2f72943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG nova.network.os_vif_util [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG nova.network.os_vif_util [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG os_vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1f34f39-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:24:16 compute-0 systemd[1]: libpod-conmon-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb.scope: Deactivated successfully.
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.842 2 INFO os_vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08')
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.893 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:16 compute-0 podman[356943]: 2025-10-07 14:24:16.894500249 +0000 UTC m=+0.043477473 container remove b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.894 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.896 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.896 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.897 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] No waiting events found dispatching network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.897 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[822f08c6-a5e9-40e8-b3ab-eb5878b95666]: (4, ('Tue Oct  7 02:24:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 (b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb)\nb69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb\nTue Oct  7 02:24:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 (b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb)\nb69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.912 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9bcb70-1667-4a60-8cb0-50cfd7917982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.913 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:16 compute-0 kernel: tap55c52758-90: left promiscuous mode
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.923 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6befe7-db76-4648-99fc-4519e4184f95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 nova_compute[259550]: 2025-10-07 14:24:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.948 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75aeb9a0-0e5a-47d0-9eb7-1e23f8a1c477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.950 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1115326d-58dd-41b2-b120-c5fbcbe2d818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[592bd875-83a9-47a1-841b-069bd17ceca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749518, 'reachable_time': 23845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356976, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.967 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:24:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.967 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cca92823-4e27-4700-bc96-939a61ca980f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:24:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d55c52758\x2d97c9\x2d4a7e\x2db735\x2d6c70d1ca75a7.mount: Deactivated successfully.
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:17.184 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:24:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:17.186 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.251 2 INFO nova.virt.libvirt.driver [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deleting instance files /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943_del
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.252 2 INFO nova.virt.libvirt.driver [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deletion of /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943_del complete
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.305 2 INFO nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.306 2 DEBUG oslo.service.loopingcall [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.307 2 DEBUG nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:24:17 compute-0 nova_compute[259550]: 2025-10-07 14:24:17.307 2 DEBUG nova.network.neutron [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:24:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 100 op/s
Oct 07 14:24:18 compute-0 nova_compute[259550]: 2025-10-07 14:24:18.542 2 DEBUG nova.network.neutron [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:18 compute-0 nova_compute[259550]: 2025-10-07 14:24:18.564 2 INFO nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 1.26 seconds to deallocate network for instance.
Oct 07 14:24:18 compute-0 nova_compute[259550]: 2025-10-07 14:24:18.613 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:18 compute-0 nova_compute[259550]: 2025-10-07 14:24:18.613 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:18 compute-0 nova_compute[259550]: 2025-10-07 14:24:18.665 2 DEBUG oslo_concurrency.processutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:19 compute-0 ceph-mon[74295]: pgmap v1819: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 100 op/s
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.086 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.088 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] No waiting events found dispatching network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.088 2 WARNING nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received unexpected event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 for instance with vm_state deleted and task_state None.
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.088 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-deleted-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:24:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967518062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.112 2 DEBUG oslo_concurrency.processutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.120 2 DEBUG nova.compute.provider_tree [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.138 2 DEBUG nova.scheduler.client.report [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.170 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.192 2 INFO nova.scheduler.client.report [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 2f0de516-cf33-49b6-b036-aee8c2f72943
Oct 07 14:24:19 compute-0 nova_compute[259550]: 2025-10-07 14:24:19.249 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 57 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 123 op/s
Oct 07 14:24:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1967518062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:21 compute-0 ceph-mon[74295]: pgmap v1820: 305 pgs: 305 active+clean; 57 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 123 op/s
Oct 07 14:24:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:21 compute-0 sudo[357000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:21 compute-0 nova_compute[259550]: 2025-10-07 14:24:21.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:21 compute-0 sudo[357000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:21 compute-0 sudo[357000]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:21 compute-0 sudo[357025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:24:21 compute-0 sudo[357025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:21 compute-0 sudo[357025]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:21 compute-0 sudo[357050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:21 compute-0 sudo[357050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:21 compute-0 sudo[357050]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:21 compute-0 nova_compute[259550]: 2025-10-07 14:24:21.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:21 compute-0 sudo[357075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:24:21 compute-0 sudo[357075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 5.5 KiB/s wr, 81 op/s
Oct 07 14:24:22 compute-0 sudo[357075]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:24:22 compute-0 podman[357117]: 2025-10-07 14:24:22.172163911 +0000 UTC m=+0.058524825 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:22 compute-0 sudo[357156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:22 compute-0 podman[357125]: 2025-10-07 14:24:22.226152733 +0000 UTC m=+0.093738266 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct 07 14:24:22 compute-0 sudo[357156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 sudo[357156]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 sudo[357189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:24:22 compute-0 sudo[357189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 sudo[357189]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 sudo[357214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:22 compute-0 sudo[357214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 sudo[357214]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 sudo[357239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:24:22 compute-0 sudo[357239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:24:22
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'images', '.mgr', 'default.rgw.log']
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:24:22 compute-0 sudo[357239]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4eb21dfb-6933-43e1-bb80-75f890d4a1a7 does not exist
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 51ae20cf-4707-4b43-854f-f31b5308f7cc does not exist
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cb3cf96e-c751-42b2-a082-55275bf3f77e does not exist
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:24:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:24:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:24:22 compute-0 sudo[357295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:24:22 compute-0 sudo[357295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:24:22 compute-0 sudo[357295]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:24:23 compute-0 sudo[357320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:24:23 compute-0 sudo[357320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:23 compute-0 sudo[357320]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:23 compute-0 ceph-mon[74295]: pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 5.5 KiB/s wr, 81 op/s
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:23 compute-0 sudo[357345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:24:23 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:24:23 compute-0 sudo[357345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:23 compute-0 sudo[357345]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:23 compute-0 sudo[357370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:24:23 compute-0 sudo[357370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.322 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847048.3220978, 2ec4d149-b57c-4821-9852-5485f6279472 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.323 2 INFO nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Stopped (Lifecycle Event)
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.347 2 DEBUG nova.compute.manager [None req-2c3413dc-7203-4602-bfa7-a798afdff589 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.482881953 +0000 UTC m=+0.039908028 container create eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:24:23 compute-0 systemd[1]: Started libpod-conmon-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope.
Oct 07 14:24:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.464016159 +0000 UTC m=+0.021042254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.571 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847048.5701125, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.571 2 INFO nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Stopped (Lifecycle Event)
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.576621828 +0000 UTC m=+0.133647923 container init eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.583722818 +0000 UTC m=+0.140748893 container start eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.587419997 +0000 UTC m=+0.144446122 container attach eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:24:23 compute-0 flamboyant_swartz[357451]: 167 167
Oct 07 14:24:23 compute-0 systemd[1]: libpod-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope: Deactivated successfully.
Oct 07 14:24:23 compute-0 conmon[357451]: conmon eea661b64b2b997ff795 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope/container/memory.events
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.591212318 +0000 UTC m=+0.148238393 container died eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:23 compute-0 nova_compute[259550]: 2025-10-07 14:24:23.595 2 DEBUG nova.compute.manager [None req-24c51562-a703-4474-98c1-35332a7665bb - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59a0d2a75db7180e2be4fd4dd20e268a6bcd6e7ed8575329ba67e64f1151cae-merged.mount: Deactivated successfully.
Oct 07 14:24:23 compute-0 podman[357435]: 2025-10-07 14:24:23.631508285 +0000 UTC m=+0.188534360 container remove eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:24:23 compute-0 systemd[1]: libpod-conmon-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope: Deactivated successfully.
Oct 07 14:24:23 compute-0 podman[357476]: 2025-10-07 14:24:23.823239668 +0000 UTC m=+0.049005891 container create e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:24:23 compute-0 systemd[1]: Started libpod-conmon-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope.
Oct 07 14:24:23 compute-0 podman[357476]: 2025-10-07 14:24:23.803238194 +0000 UTC m=+0.029004687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:23 compute-0 podman[357476]: 2025-10-07 14:24:23.926294642 +0000 UTC m=+0.152060865 container init e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:24:23 compute-0 podman[357476]: 2025-10-07 14:24:23.935790015 +0000 UTC m=+0.161556228 container start e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:23 compute-0 podman[357476]: 2025-10-07 14:24:23.940522632 +0000 UTC m=+0.166288835 container attach e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:24:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 58 op/s
Oct 07 14:24:24 compute-0 ceph-mon[74295]: pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 58 op/s
Oct 07 14:24:24 compute-0 jolly_wilson[357492]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:24:24 compute-0 jolly_wilson[357492]: --> relative data size: 1.0
Oct 07 14:24:24 compute-0 jolly_wilson[357492]: --> All data devices are unavailable
Oct 07 14:24:24 compute-0 systemd[1]: libpod-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope: Deactivated successfully.
Oct 07 14:24:24 compute-0 podman[357476]: 2025-10-07 14:24:24.977320355 +0000 UTC m=+1.203086568 container died e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f-merged.mount: Deactivated successfully.
Oct 07 14:24:25 compute-0 podman[357476]: 2025-10-07 14:24:25.033223899 +0000 UTC m=+1.258990142 container remove e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:24:25 compute-0 systemd[1]: libpod-conmon-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope: Deactivated successfully.
Oct 07 14:24:25 compute-0 sudo[357370]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:25 compute-0 sudo[357532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:25 compute-0 sudo[357532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:25 compute-0 sudo[357532]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:24:25.188 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:24:25 compute-0 sudo[357557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:24:25 compute-0 sudo[357557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:25 compute-0 sudo[357557]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:25 compute-0 sudo[357582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:25 compute-0 sudo[357582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:25 compute-0 sudo[357582]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:25 compute-0 sudo[357607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:24:25 compute-0 sudo[357607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.604576186 +0000 UTC m=+0.042302312 container create d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:24:25 compute-0 systemd[1]: Started libpod-conmon-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope.
Oct 07 14:24:25 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.584032477 +0000 UTC m=+0.021758623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.683869605 +0000 UTC m=+0.121595751 container init d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.689378962 +0000 UTC m=+0.127105088 container start d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.692764542 +0000 UTC m=+0.130497118 container attach d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:25 compute-0 modest_euler[357689]: 167 167
Oct 07 14:24:25 compute-0 systemd[1]: libpod-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope: Deactivated successfully.
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.695123845 +0000 UTC m=+0.132849991 container died d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-466e84dd01f330ecfaf29f24aa56d8cd608cfcaa1303b176ce7a865dffcf6e8f-merged.mount: Deactivated successfully.
Oct 07 14:24:25 compute-0 podman[357673]: 2025-10-07 14:24:25.733968464 +0000 UTC m=+0.171694590 container remove d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:24:25 compute-0 systemd[1]: libpod-conmon-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope: Deactivated successfully.
Oct 07 14:24:25 compute-0 podman[357713]: 2025-10-07 14:24:25.961211925 +0000 UTC m=+0.043563185 container create 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:26 compute-0 systemd[1]: Started libpod-conmon-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope.
Oct 07 14:24:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct 07 14:24:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:25.941901419 +0000 UTC m=+0.024252709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:26.055364461 +0000 UTC m=+0.137715741 container init 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:26.066616761 +0000 UTC m=+0.148968021 container start 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:26.071538523 +0000 UTC m=+0.153889823 container attach 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:24:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:26 compute-0 nova_compute[259550]: 2025-10-07 14:24:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:26 compute-0 silly_diffie[357730]: {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     "0": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "devices": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "/dev/loop3"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             ],
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_name": "ceph_lv0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_size": "21470642176",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "name": "ceph_lv0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "tags": {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_name": "ceph",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.crush_device_class": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.encrypted": "0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_id": "0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.vdo": "0"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             },
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "vg_name": "ceph_vg0"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         }
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     ],
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     "1": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "devices": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "/dev/loop4"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             ],
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_name": "ceph_lv1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_size": "21470642176",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "name": "ceph_lv1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "tags": {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_name": "ceph",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.crush_device_class": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.encrypted": "0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_id": "1",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.vdo": "0"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             },
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "vg_name": "ceph_vg1"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         }
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     ],
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     "2": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "devices": [
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "/dev/loop5"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             ],
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_name": "ceph_lv2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_size": "21470642176",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "name": "ceph_lv2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "tags": {
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.cluster_name": "ceph",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.crush_device_class": "",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.encrypted": "0",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osd_id": "2",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:                 "ceph.vdo": "0"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             },
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "type": "block",
Oct 07 14:24:26 compute-0 silly_diffie[357730]:             "vg_name": "ceph_vg2"
Oct 07 14:24:26 compute-0 silly_diffie[357730]:         }
Oct 07 14:24:26 compute-0 silly_diffie[357730]:     ]
Oct 07 14:24:26 compute-0 silly_diffie[357730]: }
Oct 07 14:24:26 compute-0 nova_compute[259550]: 2025-10-07 14:24:26.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:26 compute-0 systemd[1]: libpod-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope: Deactivated successfully.
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:26.859891438 +0000 UTC m=+0.942242698 container died 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345-merged.mount: Deactivated successfully.
Oct 07 14:24:26 compute-0 podman[357713]: 2025-10-07 14:24:26.91980615 +0000 UTC m=+1.002157400 container remove 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:24:26 compute-0 systemd[1]: libpod-conmon-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope: Deactivated successfully.
Oct 07 14:24:26 compute-0 sudo[357607]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:27 compute-0 sudo[357753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:27 compute-0 sudo[357753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:27 compute-0 sudo[357753]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:27 compute-0 ceph-mon[74295]: pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct 07 14:24:27 compute-0 sudo[357778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:24:27 compute-0 sudo[357778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:27 compute-0 sudo[357778]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:27 compute-0 sudo[357803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:27 compute-0 sudo[357803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:27 compute-0 sudo[357803]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:27 compute-0 sudo[357828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:24:27 compute-0 sudo[357828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.53548393 +0000 UTC m=+0.047938992 container create bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:24:27 compute-0 systemd[1]: Started libpod-conmon-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope.
Oct 07 14:24:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.509211779 +0000 UTC m=+0.021666891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.614101891 +0000 UTC m=+0.126556923 container init bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.623515012 +0000 UTC m=+0.135970034 container start bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.627611442 +0000 UTC m=+0.140066514 container attach bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:24:27 compute-0 happy_chaum[357912]: 167 167
Oct 07 14:24:27 compute-0 systemd[1]: libpod-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope: Deactivated successfully.
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.631877416 +0000 UTC m=+0.144332438 container died bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-68714eb975e59e2b96273d8a7c3d3facbd957a8c578261777b894df8d3978c33-merged.mount: Deactivated successfully.
Oct 07 14:24:27 compute-0 podman[357895]: 2025-10-07 14:24:27.676375575 +0000 UTC m=+0.188830597 container remove bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:24:27 compute-0 systemd[1]: libpod-conmon-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope: Deactivated successfully.
Oct 07 14:24:27 compute-0 podman[357936]: 2025-10-07 14:24:27.901534562 +0000 UTC m=+0.077764020 container create 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:24:27 compute-0 systemd[1]: Started libpod-conmon-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope.
Oct 07 14:24:27 compute-0 podman[357936]: 2025-10-07 14:24:27.869462925 +0000 UTC m=+0.045692443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:24:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:24:28 compute-0 podman[357936]: 2025-10-07 14:24:28.002591962 +0000 UTC m=+0.178821460 container init 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:24:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:24:28 compute-0 podman[357936]: 2025-10-07 14:24:28.016408231 +0000 UTC m=+0.192637649 container start 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:24:28 compute-0 podman[357936]: 2025-10-07 14:24:28.020337836 +0000 UTC m=+0.196567264 container attach 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:24:28 compute-0 trusting_golick[357952]: {
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_id": 2,
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "type": "bluestore"
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     },
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_id": 1,
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "type": "bluestore"
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     },
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_id": 0,
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:24:28 compute-0 trusting_golick[357952]:         "type": "bluestore"
Oct 07 14:24:28 compute-0 trusting_golick[357952]:     }
Oct 07 14:24:28 compute-0 trusting_golick[357952]: }
Oct 07 14:24:29 compute-0 systemd[1]: libpod-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Deactivated successfully.
Oct 07 14:24:29 compute-0 systemd[1]: libpod-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Consumed 1.019s CPU time.
Oct 07 14:24:29 compute-0 podman[357936]: 2025-10-07 14:24:29.024149478 +0000 UTC m=+1.200378936 container died 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d-merged.mount: Deactivated successfully.
Oct 07 14:24:29 compute-0 ceph-mon[74295]: pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:24:29 compute-0 podman[357936]: 2025-10-07 14:24:29.087504821 +0000 UTC m=+1.263734239 container remove 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:24:29 compute-0 systemd[1]: libpod-conmon-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Deactivated successfully.
Oct 07 14:24:29 compute-0 sudo[357828]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:24:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:24:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 11fd2aa3-21da-4d73-abce-09b8f09237a7 does not exist
Oct 07 14:24:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b769475e-7a33-401b-888b-2ceb51b02eb0 does not exist
Oct 07 14:24:29 compute-0 sudo[357997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:24:29 compute-0 sudo[357997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:29 compute-0 sudo[357997]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:29 compute-0 sudo[358022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:24:29 compute-0 sudo[358022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:24:29 compute-0 sudo[358022]: pam_unix(sudo:session): session closed for user root
Oct 07 14:24:29 compute-0 nova_compute[259550]: 2025-10-07 14:24:29.901 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:29 compute-0 nova_compute[259550]: 2025-10-07 14:24:29.903 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:29 compute-0 nova_compute[259550]: 2025-10-07 14:24:29.923 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.003 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.004 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.012 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.013 2 INFO nova.compute.claims [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.106 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:24:30 compute-0 ceph-mon[74295]: pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:24:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611791755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.567 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.574 2 DEBUG nova.compute.provider_tree [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.602 2 DEBUG nova.scheduler.client.report [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.653 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.654 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.718 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.736 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.753 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.836 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.838 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.839 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating image(s)
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.865 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.896 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.927 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:30 compute-0 nova_compute[259550]: 2025-10-07 14:24:30.933 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.029 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.031 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.033 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.033 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.073 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.077 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1611791755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.352 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.411 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] resizing rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.505 2 DEBUG nova.objects.instance [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'migration_context' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.521 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.521 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ensure instance console log exists: /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.522 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.522 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.523 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.524 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.529 2 WARNING nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.535 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.536 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.539 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.546 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.809 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847056.8084297, 2f0de516-cf33-49b6-b036-aee8c2f72943 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.810 2 INFO nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] VM Stopped (Lifecycle Event)
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.832 2 DEBUG nova.compute.manager [None req-c8057cfc-2785-4af9-a6ed-37cbb6e2917a - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:31 compute-0 nova_compute[259550]: 2025-10-07 14:24:31.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297341986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.002 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.032 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.038 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3297341986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:32 compute-0 ceph-mon[74295]: pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Oct 07 14:24:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155769748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:24:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.482 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.484 2 DEBUG nova.objects.instance [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.500 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <uuid>4fa0acc3-6fd9-4af0-8126-abba381ba4ad</uuid>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <name>instance-00000064</name>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV254Test-server-543993218</nova:name>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:24:31</nova:creationTime>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:user uuid="cb586f3a9a014ce38f71d2a40873fa67">tempest-ServerShowV254Test-2120464215-project-member</nova:user>
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <nova:project uuid="e73ba37bce884458b471aeda5370963f">tempest-ServerShowV254Test-2120464215</nova:project>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <system>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="serial">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="uuid">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </system>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <os>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </os>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <features>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </features>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk">
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config">
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log" append="off"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <video>
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </video>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:24:32 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:24:32 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:24:32 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:24:32 compute-0 nova_compute[259550]: </domain>
Oct 07 14:24:32 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.551 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Using config drive
Oct 07 14:24:32 compute-0 nova_compute[259550]: 2025-10-07 14:24:32.577 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:24:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:24:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:24:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:24:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3155769748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:24:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.340 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating config drive at /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.345 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwt_tmdgy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.486 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwt_tmdgy" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.513 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.517 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.686 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:33 compute-0 nova_compute[259550]: 2025-10-07 14:24:33.688 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting local config drive /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config because it was imported into RBD.
Oct 07 14:24:33 compute-0 systemd-machined[214580]: New machine qemu-123-instance-00000064.
Oct 07 14:24:33 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Oct 07 14:24:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 66 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.0 MiB/s wr, 17 op/s
Oct 07 14:24:34 compute-0 ceph-mon[74295]: pgmap v1827: 305 pgs: 305 active+clean; 66 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.0 MiB/s wr, 17 op/s
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.621 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847074.621026, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.623 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Resumed (Lifecycle Event)
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.626 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.626 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.629 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance spawned successfully.
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.630 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.648 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.654 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.659 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.661 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.661 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.681 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847074.6228948, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.682 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Started (Lifecycle Event)
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.722 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.729 2 INFO nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 3.89 seconds to spawn the instance on the hypervisor.
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.729 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.743 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.784 2 INFO nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 4.81 seconds to build instance.
Oct 07 14:24:34 compute-0 nova_compute[259550]: 2025-10-07 14:24:34.807 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 07 14:24:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:36 compute-0 nova_compute[259550]: 2025-10-07 14:24:36.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:36 compute-0 nova_compute[259550]: 2025-10-07 14:24:36.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:36 compute-0 nova_compute[259550]: 2025-10-07 14:24:36.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:36 compute-0 nova_compute[259550]: 2025-10-07 14:24:36.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:37 compute-0 ceph-mon[74295]: pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 07 14:24:37 compute-0 nova_compute[259550]: 2025-10-07 14:24:37.743 2 INFO nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Rebuilding instance
Oct 07 14:24:37 compute-0 nova_compute[259550]: 2025-10-07 14:24:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.174 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.191 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.242 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_requests' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.255 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.271 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'resources' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.283 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'migration_context' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.296 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.299 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:38 compute-0 nova_compute[259550]: 2025-10-07 14:24:38.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:24:39 compute-0 ceph-mon[74295]: pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 07 14:24:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Oct 07 14:24:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 07 14:24:41 compute-0 ceph-mon[74295]: pgmap v1830: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Oct 07 14:24:41 compute-0 podman[358414]: 2025-10-07 14:24:41.084754011 +0000 UTC m=+0.071479221 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:24:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 07 14:24:41 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 07 14:24:41 compute-0 podman[358415]: 2025-10-07 14:24:41.113913491 +0000 UTC m=+0.089322398 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:24:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:41 compute-0 nova_compute[259550]: 2025-10-07 14:24:41.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:41 compute-0 nova_compute[259550]: 2025-10-07 14:24:41.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:41 compute-0 nova_compute[259550]: 2025-10-07 14:24:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 07 14:24:42 compute-0 ceph-mon[74295]: osdmap e242: 3 total, 3 up, 3 in
Oct 07 14:24:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481067306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.475 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.534 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.535 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.685 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.687 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3719MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4fa0acc3-6fd9-4af0-8126-abba381ba4ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:24:42 compute-0 nova_compute[259550]: 2025-10-07 14:24:42.790 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:43 compute-0 ceph-mon[74295]: pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 07 14:24:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1481067306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512057660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:43 compute-0 nova_compute[259550]: 2025-10-07 14:24:43.306 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:43 compute-0 nova_compute[259550]: 2025-10-07 14:24:43.311 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:43 compute-0 nova_compute[259550]: 2025-10-07 14:24:43.328 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:43 compute-0 nova_compute[259550]: 2025-10-07 14:24:43.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:24:43 compute-0 nova_compute[259550]: 2025-10-07 14:24:43.351 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 114 op/s
Oct 07 14:24:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 07 14:24:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3512057660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:44 compute-0 ceph-mon[74295]: pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 114 op/s
Oct 07 14:24:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 07 14:24:44 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 07 14:24:45 compute-0 ceph-mon[74295]: osdmap e243: 3 total, 3 up, 3 in
Oct 07 14:24:45 compute-0 nova_compute[259550]: 2025-10-07 14:24:45.352 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:45 compute-0 nova_compute[259550]: 2025-10-07 14:24:45.353 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:45 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 07 14:24:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.0 MiB/s wr, 139 op/s
Oct 07 14:24:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 07 14:24:46 compute-0 ceph-mon[74295]: pgmap v1835: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.0 MiB/s wr, 139 op/s
Oct 07 14:24:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 07 14:24:46 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 07 14:24:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:46 compute-0 nova_compute[259550]: 2025-10-07 14:24:46.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:46 compute-0 nova_compute[259550]: 2025-10-07 14:24:46.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 07 14:24:47 compute-0 ceph-mon[74295]: osdmap e244: 3 total, 3 up, 3 in
Oct 07 14:24:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 07 14:24:47 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 07 14:24:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 MiB/s wr, 70 op/s
Oct 07 14:24:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 07 14:24:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 07 14:24:48 compute-0 ceph-mon[74295]: osdmap e245: 3 total, 3 up, 3 in
Oct 07 14:24:48 compute-0 ceph-mon[74295]: pgmap v1838: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 MiB/s wr, 70 op/s
Oct 07 14:24:48 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 07 14:24:48 compute-0 nova_compute[259550]: 2025-10-07 14:24:48.339 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:24:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 07 14:24:49 compute-0 ceph-mon[74295]: osdmap e246: 3 total, 3 up, 3 in
Oct 07 14:24:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 07 14:24:49 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 07 14:24:49 compute-0 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:49 compute-0 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:24:49 compute-0 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.002 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 151 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 631 KiB/s rd, 22 MiB/s wr, 222 op/s
Oct 07 14:24:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 07 14:24:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 07 14:24:50 compute-0 ceph-mon[74295]: osdmap e247: 3 total, 3 up, 3 in
Oct 07 14:24:50 compute-0 ceph-mon[74295]: pgmap v1841: 305 pgs: 305 active+clean; 151 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 631 KiB/s rd, 22 MiB/s wr, 222 op/s
Oct 07 14:24:50 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.366 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:24:50 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 07 14:24:50 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 13.158s CPU time.
Oct 07 14:24:50 compute-0 systemd-machined[214580]: Machine qemu-123-instance-00000064 terminated.
Oct 07 14:24:50 compute-0 nova_compute[259550]: 2025-10-07 14:24:50.951 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.091 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.092 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:24:51 compute-0 ceph-mon[74295]: osdmap e248: 3 total, 3 up, 3 in
Oct 07 14:24:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.354 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance shutdown successfully after 13 seconds.
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.360 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.365 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.769 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting instance files /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.771 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deletion of /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del complete
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.942 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.942 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating image(s)
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.962 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:51 compute-0 nova_compute[259550]: 2025-10-07 14:24:51.985 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.008 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.012 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 121 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 960 KiB/s rd, 20 MiB/s wr, 407 op/s
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.048 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.088 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.088 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.089 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.089 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.109 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.112 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:52 compute-0 ceph-mon[74295]: pgmap v1843: 305 pgs: 305 active+clean; 121 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 960 KiB/s rd, 20 MiB/s wr, 407 op/s
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.375 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.442 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] resizing rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.532 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ensure instance console log exists: /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.534 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.535 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.540 2 WARNING nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.545 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.545 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.553 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:52 compute-0 nova_compute[259550]: 2025-10-07 14:24:52.572 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:24:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457808243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.021 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.049 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.052 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:53 compute-0 podman[358704]: 2025-10-07 14:24:53.088964122 +0000 UTC m=+0.079184127 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:24:53 compute-0 podman[358706]: 2025-10-07 14:24:53.091986793 +0000 UTC m=+0.079554957 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:24:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/457808243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:24:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012545423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.497 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.500 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <uuid>4fa0acc3-6fd9-4af0-8126-abba381ba4ad</uuid>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <name>instance-00000064</name>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV254Test-server-543993218</nova:name>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:24:52</nova:creationTime>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:user uuid="cb586f3a9a014ce38f71d2a40873fa67">tempest-ServerShowV254Test-2120464215-project-member</nova:user>
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <nova:project uuid="e73ba37bce884458b471aeda5370963f">tempest-ServerShowV254Test-2120464215</nova:project>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <system>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="serial">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="uuid">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </system>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <os>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </os>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <features>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </features>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk">
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config">
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:24:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log" append="off"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <video>
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </video>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:24:53 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:24:53 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:24:53 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:24:53 compute-0 nova_compute[259550]: </domain>
Oct 07 14:24:53 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.550 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.551 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.551 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Using config drive
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.575 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.592 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.953 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating config drive at /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config
Oct 07 14:24:53 compute-0 nova_compute[259550]: 2025-10-07 14:24:53.959 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37aogcok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 108 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 879 KiB/s rd, 19 MiB/s wr, 473 op/s
Oct 07 14:24:54 compute-0 nova_compute[259550]: 2025-10-07 14:24:54.116 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37aogcok" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:54 compute-0 nova_compute[259550]: 2025-10-07 14:24:54.145 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:24:54 compute-0 nova_compute[259550]: 2025-10-07 14:24:54.149 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 07 14:24:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 07 14:24:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1012545423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:24:54 compute-0 ceph-mon[74295]: pgmap v1844: 305 pgs: 305 active+clean; 108 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 879 KiB/s rd, 19 MiB/s wr, 473 op/s
Oct 07 14:24:54 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 07 14:24:54 compute-0 nova_compute[259550]: 2025-10-07 14:24:54.320 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:54 compute-0 nova_compute[259550]: 2025-10-07 14:24:54.321 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting local config drive /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config because it was imported into RBD.
Oct 07 14:24:54 compute-0 systemd-machined[214580]: New machine qemu-124-instance-00000064.
Oct 07 14:24:54 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000064.
Oct 07 14:24:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 07 14:24:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 07 14:24:55 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 07 14:24:55 compute-0 ceph-mon[74295]: osdmap e249: 3 total, 3 up, 3 in
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.249 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.250 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.251 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4fa0acc3-6fd9-4af0-8126-abba381ba4ad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.252 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847095.2481906, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.252 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Resumed (Lifecycle Event)
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.261 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance spawned successfully.
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.262 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.282 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.289 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.294 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.295 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.295 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.296 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.296 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.297 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.329 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847095.2500045, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.329 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Started (Lifecycle Event)
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.360 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.364 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.367 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.428 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.428 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.429 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:24:55 compute-0 nova_compute[259550]: 2025-10-07 14:24:55.485 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:56 compute-0 ovn_controller[151684]: 2025-10-07T14:24:56Z|01019|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 07 14:24:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 471 KiB/s rd, 5.1 MiB/s wr, 355 op/s
Oct 07 14:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 07 14:24:56 compute-0 ceph-mon[74295]: osdmap e250: 3 total, 3 up, 3 in
Oct 07 14:24:56 compute-0 ceph-mon[74295]: pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 471 KiB/s rd, 5.1 MiB/s wr, 355 op/s
Oct 07 14:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 07 14:24:56 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 07 14:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 07 14:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 07 14:24:56 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.707 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.708 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.711 2 INFO nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Terminating instance
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.711 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.712 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquired lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.712 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:24:56 compute-0 nova_compute[259550]: 2025-10-07 14:24:56.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.233 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:24:57 compute-0 ceph-mon[74295]: osdmap e251: 3 total, 3 up, 3 in
Oct 07 14:24:57 compute-0 ceph-mon[74295]: osdmap e252: 3 total, 3 up, 3 in
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.539 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.558 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Releasing lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.559 2 DEBUG nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:24:57 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 07 14:24:57 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Consumed 3.126s CPU time.
Oct 07 14:24:57 compute-0 systemd-machined[214580]: Machine qemu-124-instance-00000064 terminated.
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.780 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.
Oct 07 14:24:57 compute-0 nova_compute[259550]: 2025-10-07 14:24:57.780 2 DEBUG nova.objects.instance [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'resources' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:24:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.183 2 INFO nova.virt.libvirt.driver [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting instance files /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.184 2 INFO nova.virt.libvirt.driver [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deletion of /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del complete
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.235 2 INFO nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 0.68 seconds to destroy the instance on the hypervisor.
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.236 2 DEBUG oslo.service.loopingcall [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.236 2 DEBUG nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.237 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:24:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 07 14:24:58 compute-0 ceph-mon[74295]: pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 07 14:24:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 07 14:24:58 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.442 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.467 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.487 2 INFO nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 0.25 seconds to deallocate network for instance.
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.533 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.534 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:24:58 compute-0 nova_compute[259550]: 2025-10-07 14:24:58.590 2 DEBUG oslo_concurrency.processutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:24:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:24:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076374420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.059 2 DEBUG oslo_concurrency.processutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.065 2 DEBUG nova.compute.provider_tree [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.083 2 DEBUG nova.scheduler.client.report [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.117 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.164 2 INFO nova.scheduler.client.report [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Deleted allocations for instance 4fa0acc3-6fd9-4af0-8126-abba381ba4ad
Oct 07 14:24:59 compute-0 nova_compute[259550]: 2025-10-07 14:24:59.265 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:24:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 07 14:24:59 compute-0 ceph-mon[74295]: osdmap e253: 3 total, 3 up, 3 in
Oct 07 14:24:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4076374420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:24:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 07 14:24:59 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 07 14:25:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 52 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 47 KiB/s wr, 416 op/s
Oct 07 14:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.056 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.057 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.057 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:00 compute-0 ceph-mon[74295]: osdmap e254: 3 total, 3 up, 3 in
Oct 07 14:25:00 compute-0 ceph-mon[74295]: pgmap v1853: 305 pgs: 305 active+clean; 52 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 47 KiB/s wr, 416 op/s
Oct 07 14:25:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 07 14:25:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 07 14:25:01 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 07 14:25:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 07 14:25:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 07 14:25:01 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 07 14:25:01 compute-0 nova_compute[259550]: 2025-10-07 14:25:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:01 compute-0 nova_compute[259550]: 2025-10-07 14:25:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 56 KiB/s wr, 607 op/s
Oct 07 14:25:02 compute-0 ceph-mon[74295]: osdmap e255: 3 total, 3 up, 3 in
Oct 07 14:25:02 compute-0 ceph-mon[74295]: osdmap e256: 3 total, 3 up, 3 in
Oct 07 14:25:02 compute-0 ceph-mon[74295]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 56 KiB/s wr, 607 op/s
Oct 07 14:25:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 07 14:25:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 07 14:25:03 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 07 14:25:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 272 op/s
Oct 07 14:25:04 compute-0 ceph-mon[74295]: osdmap e257: 3 total, 3 up, 3 in
Oct 07 14:25:04 compute-0 ceph-mon[74295]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 272 op/s
Oct 07 14:25:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 10 KiB/s wr, 216 op/s
Oct 07 14:25:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 07 14:25:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 07 14:25:06 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 07 14:25:06 compute-0 nova_compute[259550]: 2025-10-07 14:25:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:06 compute-0 nova_compute[259550]: 2025-10-07 14:25:06.851 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:06 compute-0 nova_compute[259550]: 2025-10-07 14:25:06.852 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:06 compute-0 nova_compute[259550]: 2025-10-07 14:25:06.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:07 compute-0 ceph-mon[74295]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 10 KiB/s wr, 216 op/s
Oct 07 14:25:07 compute-0 ceph-mon[74295]: osdmap e258: 3 total, 3 up, 3 in
Oct 07 14:25:07 compute-0 nova_compute[259550]: 2025-10-07 14:25:07.198 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:25:07 compute-0 nova_compute[259550]: 2025-10-07 14:25:07.406 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:07 compute-0 nova_compute[259550]: 2025-10-07 14:25:07.407 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:07 compute-0 nova_compute[259550]: 2025-10-07 14:25:07.414 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:25:07 compute-0 nova_compute[259550]: 2025-10-07 14:25:07.414 2 INFO nova.compute.claims [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:25:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 80 op/s
Oct 07 14:25:08 compute-0 nova_compute[259550]: 2025-10-07 14:25:08.440 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:25:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329822703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:08 compute-0 nova_compute[259550]: 2025-10-07 14:25:08.994 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:09 compute-0 nova_compute[259550]: 2025-10-07 14:25:09.004 2 DEBUG nova.compute.provider_tree [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:25:09 compute-0 ceph-mon[74295]: pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 80 op/s
Oct 07 14:25:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2329822703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:09 compute-0 nova_compute[259550]: 2025-10-07 14:25:09.207 2 DEBUG nova.scheduler.client.report [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:25:09 compute-0 nova_compute[259550]: 2025-10-07 14:25:09.863 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:09 compute-0 nova_compute[259550]: 2025-10-07 14:25:09.865 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:25:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.1 KiB/s wr, 69 op/s
Oct 07 14:25:10 compute-0 nova_compute[259550]: 2025-10-07 14:25:10.163 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 07 14:25:10 compute-0 nova_compute[259550]: 2025-10-07 14:25:10.335 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:25:11 compute-0 nova_compute[259550]: 2025-10-07 14:25:11.040 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:25:11 compute-0 ceph-mon[74295]: pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.1 KiB/s wr, 69 op/s
Oct 07 14:25:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:11 compute-0 nova_compute[259550]: 2025-10-07 14:25:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:11 compute-0 nova_compute[259550]: 2025-10-07 14:25:11.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.8 KiB/s wr, 64 op/s
Oct 07 14:25:12 compute-0 podman[358973]: 2025-10-07 14:25:12.065262018 +0000 UTC m=+0.055840633 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:25:12 compute-0 podman[358974]: 2025-10-07 14:25:12.065799192 +0000 UTC m=+0.054957529 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.605 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.608 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.609 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating image(s)
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.639 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.660 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.680 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.684 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.761 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.762 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.763 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.763 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.784 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.787 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.834 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847097.7777388, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:12 compute-0 nova_compute[259550]: 2025-10-07 14:25:12.835 2 INFO nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Stopped (Lifecycle Event)
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.064 2 DEBUG nova.compute.manager [None req-6626d0d3-06c2-40a9-8cfb-7e2bd1ce5fd4 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.079 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:13 compute-0 ceph-mon[74295]: pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.8 KiB/s wr, 64 op/s
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.144 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] resizing rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.223 2 DEBUG nova.objects.instance [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'migration_context' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.341 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.342 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ensure instance console log exists: /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.343 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.344 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.344 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.347 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.355 2 WARNING nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.362 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.364 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.368 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.369 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.370 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.371 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.372 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.373 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.373 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.374 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.375 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.375 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.376 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.376 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.377 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.378 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.383 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:25:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891196183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.838 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.866 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:13 compute-0 nova_compute[259550]: 2025-10-07 14:25:13.871 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 64 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 20 op/s
Oct 07 14:25:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2891196183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:25:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986770265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:14 compute-0 nova_compute[259550]: 2025-10-07 14:25:14.325 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:14 compute-0 nova_compute[259550]: 2025-10-07 14:25:14.328 2 DEBUG nova.objects.instance [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:14 compute-0 nova_compute[259550]: 2025-10-07 14:25:14.505 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <uuid>d3b23591-1e36-4309-b361-7763dfc43021</uuid>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <name>instance-00000065</name>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV257Test-server-1426187763</nova:name>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:25:13</nova:creationTime>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:user uuid="b13de0bc55134c1a9a69b90ed0ce42dd">tempest-ServerShowV257Test-1404222754-project-member</nova:user>
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <nova:project uuid="9a9c9f0aca2445a3996c55f880cebe99">tempest-ServerShowV257Test-1404222754</nova:project>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <system>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="serial">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="uuid">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </system>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <os>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </os>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <features>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </features>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk">
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk.config">
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </source>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:25:14 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log" append="off"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <video>
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </video>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:25:14 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:25:14 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:25:14 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:25:14 compute-0 nova_compute[259550]: </domain>
Oct 07 14:25:14 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:25:15 compute-0 ceph-mon[74295]: pgmap v1864: 305 pgs: 305 active+clean; 64 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 20 op/s
Oct 07 14:25:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/986770265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:15 compute-0 nova_compute[259550]: 2025-10-07 14:25:15.175 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:25:15 compute-0 nova_compute[259550]: 2025-10-07 14:25:15.176 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:25:15 compute-0 nova_compute[259550]: 2025-10-07 14:25:15.176 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Using config drive
Oct 07 14:25:15 compute-0 nova_compute[259550]: 2025-10-07 14:25:15.202 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Oct 07 14:25:16 compute-0 ceph-mon[74295]: pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Oct 07 14:25:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:16 compute-0 nova_compute[259550]: 2025-10-07 14:25:16.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:16 compute-0 nova_compute[259550]: 2025-10-07 14:25:16.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.248 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating config drive at /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.255 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0jn384h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.418 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0jn384h" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.447 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.450 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:17.513 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:17.515 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.636 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:17 compute-0 nova_compute[259550]: 2025-10-07 14:25:17.637 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting local config drive /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config because it was imported into RBD.
Oct 07 14:25:17 compute-0 systemd-machined[214580]: New machine qemu-125-instance-00000065.
Oct 07 14:25:17 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Oct 07 14:25:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.548 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847118.5474482, d3b23591-1e36-4309-b361-7763dfc43021 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.548 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Resumed (Lifecycle Event)
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.552 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.552 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.556 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance spawned successfully.
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.556 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.620 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.623 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.710 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.711 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.712 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.713 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.713 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.714 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.916 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.917 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847118.5485191, d3b23591-1e36-4309-b361-7763dfc43021 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:18 compute-0 nova_compute[259550]: 2025-10-07 14:25:18.917 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Started (Lifecycle Event)
Oct 07 14:25:19 compute-0 ceph-mon[74295]: pgmap v1866: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.358 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.361 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.447 2 INFO nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 6.84 seconds to spawn the instance on the hypervisor.
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.447 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:25:19 compute-0 nova_compute[259550]: 2025-10-07 14:25:19.996 2 INFO nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 12.61 seconds to build instance.
Oct 07 14:25:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 07 14:25:20 compute-0 nova_compute[259550]: 2025-10-07 14:25:20.146 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:21 compute-0 ceph-mon[74295]: pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 07 14:25:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:21 compute-0 nova_compute[259550]: 2025-10-07 14:25:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:21 compute-0 nova_compute[259550]: 2025-10-07 14:25:21.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:25:22
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:25:22 compute-0 nova_compute[259550]: 2025-10-07 14:25:22.878 2 INFO nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Rebuilding instance
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:25:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:25:23 compute-0 ceph-mon[74295]: pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.593 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.642 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.751 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_requests' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.847 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.885 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'resources' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:23 compute-0 nova_compute[259550]: 2025-10-07 14:25:23.921 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'migration_context' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:24 compute-0 nova_compute[259550]: 2025-10-07 14:25:24.004 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:25:24 compute-0 nova_compute[259550]: 2025-10-07 14:25:24.008 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:25:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:25:24 compute-0 podman[359355]: 2025-10-07 14:25:24.105691964 +0000 UTC m=+0.092156203 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 07 14:25:24 compute-0 podman[359356]: 2025-10-07 14:25:24.124750113 +0000 UTC m=+0.101323198 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 07 14:25:24 compute-0 ceph-mon[74295]: pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:25:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:25:25.517 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:25:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 759 KiB/s wr, 87 op/s
Oct 07 14:25:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:26 compute-0 nova_compute[259550]: 2025-10-07 14:25:26.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:26 compute-0 nova_compute[259550]: 2025-10-07 14:25:26.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:27 compute-0 ceph-mon[74295]: pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 759 KiB/s wr, 87 op/s
Oct 07 14:25:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:25:29 compute-0 ceph-mon[74295]: pgmap v1871: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:25:29 compute-0 sudo[359401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:29 compute-0 sudo[359401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:29 compute-0 sudo[359401]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:29 compute-0 sudo[359426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:25:29 compute-0 sudo[359426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:29 compute-0 sudo[359426]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:29 compute-0 sudo[359451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:29 compute-0 sudo[359451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:29 compute-0 sudo[359451]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:29 compute-0 sudo[359476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:25:29 compute-0 sudo[359476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:25:30 compute-0 podman[359575]: 2025-10-07 14:25:30.044768148 +0000 UTC m=+0.072404516 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:25:30 compute-0 podman[359575]: 2025-10-07 14:25:30.158330383 +0000 UTC m=+0.185966781 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 14:25:30 compute-0 sudo[359476]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:25:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:25:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:30 compute-0 sudo[359732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:30 compute-0 sudo[359732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:30 compute-0 sudo[359732]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:30 compute-0 sudo[359757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:25:30 compute-0 sudo[359757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:30 compute-0 sudo[359757]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 sudo[359782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:31 compute-0 sudo[359782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 sudo[359782]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 sudo[359807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:25:31 compute-0 sudo[359807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 ceph-mon[74295]: pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:25:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:31 compute-0 sudo[359807]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev dbcf6e1a-58af-411b-af4d-2f69938ea94e does not exist
Oct 07 14:25:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 72a21181-83b1-4e1d-8651-833c5da29288 does not exist
Oct 07 14:25:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 871038ac-43c2-4c96-8c87-2fdf432f8e2d does not exist
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:25:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:25:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:25:31 compute-0 sudo[359863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:31 compute-0 sudo[359863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 sudo[359863]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 sudo[359888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:25:31 compute-0 sudo[359888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 sudo[359888]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 nova_compute[259550]: 2025-10-07 14:25:31.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:31 compute-0 sudo[359913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:31 compute-0 sudo[359913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 sudo[359913]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:31 compute-0 sudo[359938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:25:31 compute-0 sudo[359938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:31 compute-0 nova_compute[259550]: 2025-10-07 14:25:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 95 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 836 KiB/s wr, 92 op/s
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:25:32 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.171505446 +0000 UTC m=+0.059508571 container create 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:25:32 compute-0 systemd[1]: Started libpod-conmon-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope.
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.140098537 +0000 UTC m=+0.028101732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.265865618 +0000 UTC m=+0.153868763 container init 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.273254035 +0000 UTC m=+0.161257150 container start 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.277349294 +0000 UTC m=+0.165352419 container attach 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:25:32 compute-0 gallant_haslett[360020]: 167 167
Oct 07 14:25:32 compute-0 systemd[1]: libpod-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope: Deactivated successfully.
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.27834017 +0000 UTC m=+0.166343295 container died 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-313d9a37118157265147cf7e8c38140c1d487c45321349669442d75faa6ef7ad-merged.mount: Deactivated successfully.
Oct 07 14:25:32 compute-0 podman[360004]: 2025-10-07 14:25:32.319463609 +0000 UTC m=+0.207466724 container remove 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:25:32 compute-0 systemd[1]: libpod-conmon-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope: Deactivated successfully.
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005071358948687895 of space, bias 1.0, pg target 0.15214076846063684 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:25:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:25:32 compute-0 podman[360044]: 2025-10-07 14:25:32.507413181 +0000 UTC m=+0.061827913 container create 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:25:32 compute-0 systemd[1]: Started libpod-conmon-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope.
Oct 07 14:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:32 compute-0 podman[360044]: 2025-10-07 14:25:32.488875866 +0000 UTC m=+0.043290688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:32 compute-0 podman[360044]: 2025-10-07 14:25:32.583864595 +0000 UTC m=+0.138279347 container init 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:25:32 compute-0 podman[360044]: 2025-10-07 14:25:32.594753875 +0000 UTC m=+0.149168607 container start 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:25:32 compute-0 podman[360044]: 2025-10-07 14:25:32.598073304 +0000 UTC m=+0.152488036 container attach 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:25:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:25:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:25:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:25:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:25:33 compute-0 ceph-mon[74295]: pgmap v1873: 305 pgs: 305 active+clean; 95 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 836 KiB/s wr, 92 op/s
Oct 07 14:25:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:25:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:25:33 compute-0 stoic_elion[360061]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:25:33 compute-0 stoic_elion[360061]: --> relative data size: 1.0
Oct 07 14:25:33 compute-0 stoic_elion[360061]: --> All data devices are unavailable
Oct 07 14:25:33 compute-0 systemd[1]: libpod-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope: Deactivated successfully.
Oct 07 14:25:33 compute-0 podman[360044]: 2025-10-07 14:25:33.621173662 +0000 UTC m=+1.175588414 container died 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f-merged.mount: Deactivated successfully.
Oct 07 14:25:33 compute-0 podman[360044]: 2025-10-07 14:25:33.713519659 +0000 UTC m=+1.267934391 container remove 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:25:33 compute-0 systemd[1]: libpod-conmon-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope: Deactivated successfully.
Oct 07 14:25:33 compute-0 sudo[359938]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:33 compute-0 sudo[360102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:33 compute-0 sudo[360102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:33 compute-0 sudo[360102]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:33 compute-0 sudo[360127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:25:33 compute-0 sudo[360127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:33 compute-0 sudo[360127]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:33 compute-0 sudo[360152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:33 compute-0 sudo[360152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:33 compute-0 sudo[360152]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:33 compute-0 sudo[360177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:25:33 compute-0 sudo[360177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct 07 14:25:34 compute-0 nova_compute[259550]: 2025-10-07 14:25:34.058 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 07 14:25:34 compute-0 ceph-mon[74295]: pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.297483153 +0000 UTC m=+0.056379868 container create 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:25:34 compute-0 systemd[1]: Started libpod-conmon-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope.
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.269369991 +0000 UTC m=+0.028266806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.381167229 +0000 UTC m=+0.140063954 container init 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.3894549 +0000 UTC m=+0.148351625 container start 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.393422136 +0000 UTC m=+0.152318861 container attach 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:25:34 compute-0 determined_austin[360261]: 167 167
Oct 07 14:25:34 compute-0 systemd[1]: libpod-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope: Deactivated successfully.
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.396311873 +0000 UTC m=+0.155208608 container died 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c5e765a40594d6c795f0611a25a2aea054abd7c19ab6f571a30d9c30f3a078e-merged.mount: Deactivated successfully.
Oct 07 14:25:34 compute-0 podman[360244]: 2025-10-07 14:25:34.442880987 +0000 UTC m=+0.201777702 container remove 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:25:34 compute-0 systemd[1]: libpod-conmon-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope: Deactivated successfully.
Oct 07 14:25:34 compute-0 podman[360287]: 2025-10-07 14:25:34.683144868 +0000 UTC m=+0.059767319 container create 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:25:34 compute-0 systemd[1]: Started libpod-conmon-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope.
Oct 07 14:25:34 compute-0 podman[360287]: 2025-10-07 14:25:34.662415434 +0000 UTC m=+0.039037895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:34 compute-0 podman[360287]: 2025-10-07 14:25:34.791309768 +0000 UTC m=+0.167932279 container init 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:25:34 compute-0 podman[360287]: 2025-10-07 14:25:34.800639107 +0000 UTC m=+0.177261538 container start 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:25:34 compute-0 podman[360287]: 2025-10-07 14:25:34.80635393 +0000 UTC m=+0.182976441 container attach 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:25:35 compute-0 nifty_kepler[360303]: {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     "0": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "devices": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "/dev/loop3"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             ],
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_name": "ceph_lv0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_size": "21470642176",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "name": "ceph_lv0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "tags": {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_name": "ceph",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.crush_device_class": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.encrypted": "0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_id": "0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.vdo": "0"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             },
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "vg_name": "ceph_vg0"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         }
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     ],
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     "1": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "devices": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "/dev/loop4"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             ],
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_name": "ceph_lv1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_size": "21470642176",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "name": "ceph_lv1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "tags": {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_name": "ceph",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.crush_device_class": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.encrypted": "0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_id": "1",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.vdo": "0"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             },
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "vg_name": "ceph_vg1"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         }
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     ],
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     "2": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "devices": [
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "/dev/loop5"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             ],
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_name": "ceph_lv2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_size": "21470642176",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "name": "ceph_lv2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "tags": {
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.cluster_name": "ceph",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.crush_device_class": "",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.encrypted": "0",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osd_id": "2",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:                 "ceph.vdo": "0"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             },
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "type": "block",
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:             "vg_name": "ceph_vg2"
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:         }
Oct 07 14:25:35 compute-0 nifty_kepler[360303]:     ]
Oct 07 14:25:35 compute-0 nifty_kepler[360303]: }
Oct 07 14:25:35 compute-0 systemd[1]: libpod-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope: Deactivated successfully.
Oct 07 14:25:35 compute-0 podman[360287]: 2025-10-07 14:25:35.556987017 +0000 UTC m=+0.933609478 container died 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b-merged.mount: Deactivated successfully.
Oct 07 14:25:35 compute-0 podman[360287]: 2025-10-07 14:25:35.618573813 +0000 UTC m=+0.995196254 container remove 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:25:35 compute-0 systemd[1]: libpod-conmon-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope: Deactivated successfully.
Oct 07 14:25:35 compute-0 sudo[360177]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:35 compute-0 sudo[360323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:35 compute-0 sudo[360323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:35 compute-0 sudo[360323]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:35 compute-0 sudo[360348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:25:35 compute-0 sudo[360348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:35 compute-0 sudo[360348]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:35 compute-0 sudo[360373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:35 compute-0 sudo[360373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:35 compute-0 sudo[360373]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:35 compute-0 sudo[360398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:25:35 compute-0 sudo[360398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.237168232 +0000 UTC m=+0.040067412 container create 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:25:36 compute-0 systemd[1]: Started libpod-conmon-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope.
Oct 07 14:25:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.218717589 +0000 UTC m=+0.021616799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.320876789 +0000 UTC m=+0.123775969 container init 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:25:36 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 07 14:25:36 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 12.683s CPU time.
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.32954639 +0000 UTC m=+0.132445570 container start 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:25:36 compute-0 systemd-machined[214580]: Machine qemu-125-instance-00000065 terminated.
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.33292592 +0000 UTC m=+0.135825120 container attach 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:25:36 compute-0 eager_banach[360480]: 167 167
Oct 07 14:25:36 compute-0 systemd[1]: libpod-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope: Deactivated successfully.
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.335825488 +0000 UTC m=+0.138724668 container died 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d0e8c8554e64e843c051f8e5e9b263f8b5a1841830a5d365a6afc1216148a6-merged.mount: Deactivated successfully.
Oct 07 14:25:36 compute-0 podman[360463]: 2025-10-07 14:25:36.371949773 +0000 UTC m=+0.174848953 container remove 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 07 14:25:36 compute-0 systemd[1]: libpod-conmon-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope: Deactivated successfully.
Oct 07 14:25:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:36 compute-0 podman[360503]: 2025-10-07 14:25:36.553903535 +0000 UTC m=+0.053505380 container create a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:25:36 compute-0 systemd[1]: Started libpod-conmon-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope.
Oct 07 14:25:36 compute-0 podman[360503]: 2025-10-07 14:25:36.524985822 +0000 UTC m=+0.024587737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:25:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:25:36 compute-0 podman[360503]: 2025-10-07 14:25:36.650698131 +0000 UTC m=+0.150299996 container init a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:25:36 compute-0 podman[360503]: 2025-10-07 14:25:36.665262741 +0000 UTC m=+0.164864616 container start a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:25:36 compute-0 podman[360503]: 2025-10-07 14:25:36.669569716 +0000 UTC m=+0.169171611 container attach a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:25:36 compute-0 nova_compute[259550]: 2025-10-07 14:25:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:36 compute-0 nova_compute[259550]: 2025-10-07 14:25:36.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:36 compute-0 nova_compute[259550]: 2025-10-07 14:25:36.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.074 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance shutdown successfully after 13 seconds.
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.080 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.086 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.
Oct 07 14:25:37 compute-0 ceph-mon[74295]: pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.505 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting instance files /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.507 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deletion of /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del complete
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.661 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.661 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating image(s)
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.679 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.698 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:37 compute-0 nifty_haslett[360522]: {
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_id": 2,
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "type": "bluestore"
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     },
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_id": 1,
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "type": "bluestore"
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     },
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_id": 0,
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:         "type": "bluestore"
Oct 07 14:25:37 compute-0 nifty_haslett[360522]:     }
Oct 07 14:25:37 compute-0 nifty_haslett[360522]: }
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.720 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.724 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:37 compute-0 systemd[1]: libpod-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Deactivated successfully.
Oct 07 14:25:37 compute-0 podman[360503]: 2025-10-07 14:25:37.74342548 +0000 UTC m=+1.243027315 container died a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:25:37 compute-0 systemd[1]: libpod-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Consumed 1.067s CPU time.
Oct 07 14:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032-merged.mount: Deactivated successfully.
Oct 07 14:25:37 compute-0 podman[360503]: 2025-10-07 14:25:37.798638815 +0000 UTC m=+1.298240670 container remove a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.802 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.803 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:37 compute-0 systemd[1]: libpod-conmon-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Deactivated successfully.
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.805 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.805 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.823 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:37 compute-0 sudo[360398]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.827 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:25:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:25:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 706a9c6a-8fc4-4706-8ae6-4c23c20542d2 does not exist
Oct 07 14:25:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8cccc1ca-ba21-44b7-9ef9-9113b8cfa9e9 does not exist
Oct 07 14:25:37 compute-0 sudo[360663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:25:37 compute-0 sudo[360663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:37 compute-0 sudo[360663]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:37 compute-0 sudo[360703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:25:37 compute-0 sudo[360703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:25:37 compute-0 sudo[360703]: pam_unix(sudo:session): session closed for user root
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:37 compute-0 nova_compute[259550]: 2025-10-07 14:25:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.096 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.165 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] resizing rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.268 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.269 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ensure instance console log exists: /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.272 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.277 2 WARNING nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.286 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.287 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.292 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.293 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.293 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.294 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.298 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.321 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:25:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776649082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.767 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.793 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.798 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:25:38 compute-0 ceph-mon[74295]: pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:25:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3776649082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:38 compute-0 nova_compute[259550]: 2025-10-07 14:25:38.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:25:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:25:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216107542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.271 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.276 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <uuid>d3b23591-1e36-4309-b361-7763dfc43021</uuid>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <name>instance-00000065</name>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:name>tempest-ServerShowV257Test-server-1426187763</nova:name>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:25:38</nova:creationTime>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:user uuid="b13de0bc55134c1a9a69b90ed0ce42dd">tempest-ServerShowV257Test-1404222754-project-member</nova:user>
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <nova:project uuid="9a9c9f0aca2445a3996c55f880cebe99">tempest-ServerShowV257Test-1404222754</nova:project>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="serial">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="uuid">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk">
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk.config">
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:25:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log" append="off"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:25:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:25:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:25:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:25:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:25:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.437 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.438 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.438 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Using config drive
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.464 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.504 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:39 compute-0 nova_compute[259550]: 2025-10-07 14:25:39.588 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'keypairs' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1216107542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:25:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 110 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.300 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating config drive at /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.311 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55ps3jrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.459 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55ps3jrm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.496 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.502 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.670 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:40 compute-0 nova_compute[259550]: 2025-10-07 14:25:40.672 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting local config drive /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config because it was imported into RBD.
Oct 07 14:25:40 compute-0 systemd-machined[214580]: New machine qemu-126-instance-00000065.
Oct 07 14:25:40 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000065.
Oct 07 14:25:40 compute-0 ceph-mon[74295]: pgmap v1877: 305 pgs: 305 active+clean; 110 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Oct 07 14:25:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.635 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for d3b23591-1e36-4309-b361-7763dfc43021 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.637 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847141.6351116, d3b23591-1e36-4309-b361-7763dfc43021 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Resumed (Lifecycle Event)
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.640 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.640 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.645 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance spawned successfully.
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.646 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.668 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.674 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.674 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.675 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.675 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.676 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.677 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.705 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.706 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847141.6352496, d3b23591-1e36-4309-b361-7763dfc43021 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.707 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Started (Lifecycle Event)
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.733 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.739 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.744 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.775 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.809 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.811 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.811 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.870 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:41 compute-0 nova_compute[259550]: 2025-10-07 14:25:41.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct 07 14:25:43 compute-0 podman[360982]: 2025-10-07 14:25:43.090698851 +0000 UTC m=+0.074101911 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:25:43 compute-0 ceph-mon[74295]: pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct 07 14:25:43 compute-0 podman[360981]: 2025-10-07 14:25:43.129014244 +0000 UTC m=+0.109970048 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.618 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.620 2 INFO nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Terminating instance
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquired lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.783 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:43 compute-0 nova_compute[259550]: 2025-10-07 14:25:43.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 153 op/s
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.392 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.419 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Releasing lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.420 2 DEBUG nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:25:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:25:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111153291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:44 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 07 14:25:44 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Consumed 3.694s CPU time.
Oct 07 14:25:44 compute-0 systemd-machined[214580]: Machine qemu-126-instance-00000065 terminated.
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.496 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.650 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.651 2 DEBUG nova.objects.instance [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'resources' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.732 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.733 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.875 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3834MB free_disk=59.96736145019531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.975 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d3b23591-1e36-4309-b361-7763dfc43021 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.976 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:25:44 compute-0 nova_compute[259550]: 2025-10-07 14:25:44.977 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.019 2 INFO nova.virt.libvirt.driver [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting instance files /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.020 2 INFO nova.virt.libvirt.driver [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deletion of /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del complete
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.043 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:45 compute-0 ceph-mon[74295]: pgmap v1879: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 153 op/s
Oct 07 14:25:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3111153291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.136 2 INFO nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.138 2 DEBUG oslo.service.loopingcall [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.139 2 DEBUG nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.140 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.457 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:25:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:25:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338664337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.473 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.489 2 INFO nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 0.35 seconds to deallocate network for instance.
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.490 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.502 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.525 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.549 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.560 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.561 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.561 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:25:45 compute-0 nova_compute[259550]: 2025-10-07 14:25:45.605 2 DEBUG oslo_concurrency.processutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:25:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:25:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435711627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.042 2 DEBUG oslo_concurrency.processutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.050 2 DEBUG nova.compute.provider_tree [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.081 2 DEBUG nova.scheduler.client.report [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:25:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3338664337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2435711627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.171 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.196 2 INFO nova.scheduler.client.report [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Deleted allocations for instance d3b23591-1e36-4309-b361-7763dfc43021
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.262 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:25:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.561 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:46 compute-0 nova_compute[259550]: 2025-10-07 14:25:46.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:47 compute-0 ceph-mon[74295]: pgmap v1880: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Oct 07 14:25:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 07 14:25:48 compute-0 ceph-mon[74295]: pgmap v1881: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 07 14:25:48 compute-0 nova_compute[259550]: 2025-10-07 14:25:48.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:49 compute-0 nova_compute[259550]: 2025-10-07 14:25:49.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:49 compute-0 nova_compute[259550]: 2025-10-07 14:25:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:25:49 compute-0 nova_compute[259550]: 2025-10-07 14:25:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:25:49 compute-0 nova_compute[259550]: 2025-10-07 14:25:49.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:25:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 07 14:25:51 compute-0 ceph-mon[74295]: pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 07 14:25:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:51 compute-0 nova_compute[259550]: 2025-10-07 14:25:51.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:51 compute-0 nova_compute[259550]: 2025-10-07 14:25:51.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 132 op/s
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:25:53 compute-0 ceph-mon[74295]: pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 132 op/s
Oct 07 14:25:53 compute-0 nova_compute[259550]: 2025-10-07 14:25:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:25:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Oct 07 14:25:55 compute-0 podman[361109]: 2025-10-07 14:25:55.09091193 +0000 UTC m=+0.077158352 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:25:55 compute-0 podman[361110]: 2025-10-07 14:25:55.091368043 +0000 UTC m=+0.076081134 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:25:55 compute-0 ceph-mon[74295]: pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Oct 07 14:25:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 07 14:25:56 compute-0 ceph-mon[74295]: pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 07 14:25:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:25:56 compute-0 nova_compute[259550]: 2025-10-07 14:25:56.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:56 compute-0 nova_compute[259550]: 2025-10-07 14:25:56.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:25:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 07 14:25:59 compute-0 ceph-mon[74295]: pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 07 14:25:59 compute-0 nova_compute[259550]: 2025-10-07 14:25:59.648 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847144.6468685, d3b23591-1e36-4309-b361-7763dfc43021 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:25:59 compute-0 nova_compute[259550]: 2025-10-07 14:25:59.648 2 INFO nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Stopped (Lifecycle Event)
Oct 07 14:25:59 compute-0 nova_compute[259550]: 2025-10-07 14:25:59.667 2 DEBUG nova.compute.manager [None req-9705d272-efb8-45b5-bae0-7988facb2b7e - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 07 14:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:01 compute-0 ceph-mon[74295]: pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct 07 14:26:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:01 compute-0 nova_compute[259550]: 2025-10-07 14:26:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:01 compute-0 nova_compute[259550]: 2025-10-07 14:26:01.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:26:02 compute-0 ceph-mon[74295]: pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.546 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.547 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.559 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.624 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.625 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.632 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.633 2 INFO nova.compute.claims [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:26:02 compute-0 nova_compute[259550]: 2025-10-07 14:26:02.732 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:26:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291413713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4291413713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.174 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.180 2 DEBUG nova.compute.provider_tree [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.193 2 DEBUG nova.scheduler.client.report [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.212 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.213 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.268 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.268 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.291 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.309 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.438 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.439 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.440 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating image(s)
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.459 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.479 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.501 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.505 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.577 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.578 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.579 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.579 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.604 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.608 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.643 2 DEBUG nova.policy [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b6ae9b333804dcc8e1ed82ba0879e32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65245efcef84404ca2ccf82df5946696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.913 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:03 compute-0 nova_compute[259550]: 2025-10-07 14:26:03.974 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] resizing rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:26:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.069 2 DEBUG nova.objects.instance [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.087 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ensure instance console log exists: /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:04 compute-0 nova_compute[259550]: 2025-10-07 14:26:04.089 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:04 compute-0 ceph-mon[74295]: pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Oct 07 14:26:05 compute-0 nova_compute[259550]: 2025-10-07 14:26:05.607 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Successfully created port: 8d87c94f-f436-4619-9ca7-9e116cab44bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:26:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct 07 14:26:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:06 compute-0 nova_compute[259550]: 2025-10-07 14:26:06.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:06 compute-0 nova_compute[259550]: 2025-10-07 14:26:06.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:07 compute-0 ceph-mon[74295]: pgmap v1890: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.690 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Successfully updated port: 8d87c94f-f436-4619-9ca7-9e116cab44bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG nova.compute.manager [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG nova.compute.manager [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:26:07 compute-0 nova_compute[259550]: 2025-10-07 14:26:07.897 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:26:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.059 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.080 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.081 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance network_info: |[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.081 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.082 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.085 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start _get_guest_xml network_info=[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.091 2 WARNING nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.095 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.096 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.100 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.101 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.105 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.105 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.108 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:09 compute-0 ceph-mon[74295]: pgmap v1891: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct 07 14:26:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:26:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254047907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.537 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.568 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:09 compute-0 nova_compute[259550]: 2025-10-07 14:26:09.574 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:26:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798753409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.039 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.042 2 DEBUG nova.virt.libvirt.vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:03Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.042 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.043 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.045 2 DEBUG nova.objects.instance [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.065 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <uuid>91c66dff-47e6-4b52-9e3f-d8c58d256bcf</uuid>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <name>instance-00000066</name>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersNegativeTestJSON-server-1596186961</nova:name>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:26:09</nova:creationTime>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <nova:port uuid="8d87c94f-f436-4619-9ca7-9e116cab44bf">
Oct 07 14:26:10 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <system>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="serial">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="uuid">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </system>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <os>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </os>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <features>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </features>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk">
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config">
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:26:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ce:79:3a"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <target dev="tap8d87c94f-f4"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log" append="off"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <video>
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </video>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:26:10 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:26:10 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:26:10 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:26:10 compute-0 nova_compute[259550]: </domain>
Oct 07 14:26:10 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.066 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Preparing to wait for external event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.068 2 DEBUG nova.virt.libvirt.vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:03Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.069 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.070 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.070 2 DEBUG os_vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:10 compute-0 NetworkManager[44949]: <info>  [1759847170.0848] manager: (tap8d87c94f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.094 2 INFO os_vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')
Oct 07 14:26:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2254047907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3798753409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.140 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.141 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.141 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:ce:79:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.141 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Using config drive
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.162 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.623 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating config drive at /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.630 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjuyra1k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.777 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjuyra1k" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.809 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.812 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.850 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.850 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:10 compute-0 nova_compute[259550]: 2025-10-07 14:26:10.871 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.011 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.012 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting local config drive /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config because it was imported into RBD.
Oct 07 14:26:11 compute-0 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.0668] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Oct 07 14:26:11 compute-0 ovn_controller[151684]: 2025-10-07T14:26:11Z|01020|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct 07 14:26:11 compute-0 ovn_controller[151684]: 2025-10-07T14:26:11Z|01021|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.084 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.088 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:26:11 compute-0 systemd-udevd[361476]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e49ea7b-5169-4f84-85f3-30b54585f50b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.103 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.105 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2be3f2-fe4d-4dd2-b8f1-1d7f69a3350e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[066fed09-61bd-4f7b-94c1-71e6651aa042]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 systemd-machined[214580]: New machine qemu-127-instance-00000066.
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.1211] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.1219] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.119 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d30e25-cb4d-4259-97f6-3727c379656d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff320474-384e-44ae-8f48-b4446f43eb97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000066.
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 ovn_controller[151684]: 2025-10-07T14:26:11Z|01022|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct 07 14:26:11 compute-0 ovn_controller[151684]: 2025-10-07T14:26:11Z|01023|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 ceph-mon[74295]: pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.179 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6084680b-773d-4c11-811a-3c5d0bcf671a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de9d89de-c5cd-4d35-8f21-aa140f8686b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.1850] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Oct 07 14:26:11 compute-0 systemd-udevd[361480]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.213 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[374b87cf-c1d2-4c33-903f-577563ce3b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.216 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a52458-b9bc-480e-ba92-06aaeb5e1c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.2391] device (tapdf30eae3-80): carrier: link connected
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef08277-656f-4c95-b8ef-5be82d5ac172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17ad9bf4-55c2-4bda-afca-8284e65a0a06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361509, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.278 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb56d22-f3ef-4075-bbfe-874f957afba5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775480, 'tstamp': 775480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361510, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bdbb9f-3e5b-4811-b489-bd227c4e41c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361511, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9696227b-773d-4022-b52f-4f8d19d907c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc0af9b-f74f-40ea-9e99-46718c30c006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:11 compute-0 NetworkManager[44949]: <info>  [1759847171.4041] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Oct 07 14:26:11 compute-0 kernel: tapdf30eae3-80: entered promiscuous mode
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:11 compute-0 ovn_controller[151684]: 2025-10-07T14:26:11Z|01024|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct 07 14:26:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.425 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.426 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[750555fa-24e9-4b5f-b7db-672568b55643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.427 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:26:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.427 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG nova.compute.manager [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.564 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.564 2 DEBUG nova.compute.manager [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Processing event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:26:11 compute-0 nova_compute[259550]: 2025-10-07 14:26:11.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:11 compute-0 podman[361542]: 2025-10-07 14:26:11.78671946 +0000 UTC m=+0.050431718 container create d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:26:11 compute-0 systemd[1]: Started libpod-conmon-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope.
Oct 07 14:26:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441a2678501fe529d289bed0f151b8829262887c81920cdf574184ba61c8960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:11 compute-0 podman[361542]: 2025-10-07 14:26:11.755846315 +0000 UTC m=+0.019558603 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:26:11 compute-0 podman[361542]: 2025-10-07 14:26:11.866173663 +0000 UTC m=+0.129885981 container init d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:26:11 compute-0 podman[361542]: 2025-10-07 14:26:11.871069584 +0000 UTC m=+0.134781862 container start d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:26:11 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : New worker (361563) forked
Oct 07 14:26:11 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : Loading success.
Oct 07 14:26:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:26:12 compute-0 ceph-mon[74295]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.455 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4550266, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.456 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.458 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.463 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.466 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance spawned successfully.
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.466 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.481 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.486 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.491 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.491 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.493 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.514 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4582098, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.548 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.551 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4621363, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.551 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.556 2 INFO nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 10.12 seconds to spawn the instance on the hypervisor.
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.557 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.565 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.599 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.624 2 INFO nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 11.03 seconds to build instance.
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.643 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.702 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.702 2 DEBUG nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:26:13 compute-0 nova_compute[259550]: 2025-10-07 14:26:13.702 2 WARNING nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state None.
Oct 07 14:26:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:26:14 compute-0 podman[361615]: 2025-10-07 14:26:14.075800505 +0000 UTC m=+0.059395928 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:26:14 compute-0 podman[361614]: 2025-10-07 14:26:14.077405348 +0000 UTC m=+0.061530835 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:26:15 compute-0 nova_compute[259550]: 2025-10-07 14:26:15.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:15 compute-0 ceph-mon[74295]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:26:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 07 14:26:16 compute-0 ceph-mon[74295]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 07 14:26:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:16 compute-0 nova_compute[259550]: 2025-10-07 14:26:16.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.672 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.673 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.690 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.767 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.768 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.781 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.782 2 INFO nova.compute.claims [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:26:17 compute-0 nova_compute[259550]: 2025-10-07 14:26:17.911 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 731 KiB/s wr, 74 op/s
Oct 07 14:26:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:26:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737943312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.354 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.361 2 DEBUG nova.compute.provider_tree [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.380 2 DEBUG nova.scheduler.client.report [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.417 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.418 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.481 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.481 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.501 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.515 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.606 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.608 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.608 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating image(s)
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.633 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.662 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.690 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.694 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.746 2 DEBUG nova.policy [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b6ae9b333804dcc8e1ed82ba0879e32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65245efcef84404ca2ccf82df5946696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.787 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.788 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.789 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.789 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.817 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:18 compute-0 nova_compute[259550]: 2025-10-07 14:26:18.821 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:19 compute-0 ceph-mon[74295]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 731 KiB/s wr, 74 op/s
Oct 07 14:26:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1737943312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.120 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.184 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] resizing rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:26:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:19.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:26:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:19.202 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.330 2 DEBUG nova.objects.instance [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.345 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.346 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Ensure instance console log exists: /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.346 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.347 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.347 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:19 compute-0 nova_compute[259550]: 2025-10-07 14:26:19.674 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Successfully created port: 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:26:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 98 op/s
Oct 07 14:26:20 compute-0 nova_compute[259550]: 2025-10-07 14:26:20.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:20 compute-0 ceph-mon[74295]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 98 op/s
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.078 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Successfully updated port: 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.108 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.109 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.109 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.264 2 DEBUG nova.compute.manager [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-changed-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.265 2 DEBUG nova.compute.manager [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Refreshing instance network info cache due to event network-changed-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.265 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.314 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:26:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:21 compute-0 nova_compute[259550]: 2025-10-07 14:26:21.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 114 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 98 op/s
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.201 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.219 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.220 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance network_info: |[{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.221 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.221 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Refreshing network info cache for port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.225 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start _get_guest_xml network_info=[{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.230 2 WARNING nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.240 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.241 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.251 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.252 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.253 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.253 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.257 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.259 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:26:22
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:26:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:26:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336973826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.709 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.736 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:22 compute-0 nova_compute[259550]: 2025-10-07 14:26:22.741 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:26:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:26:23 compute-0 ceph-mon[74295]: pgmap v1898: 305 pgs: 305 active+clean; 114 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 98 op/s
Oct 07 14:26:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1336973826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:26:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676300460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.206 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.209 2 DEBUG nova.virt.libvirt.vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTes
tJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:18Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.209 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.210 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.211 2 DEBUG nova.objects.instance [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.225 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <uuid>e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</uuid>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <name>instance-00000067</name>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersNegativeTestJSON-server-1332650014</nova:name>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:26:22</nova:creationTime>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <nova:port uuid="55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac">
Oct 07 14:26:23 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <system>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="serial">e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="uuid">e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </system>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <os>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </os>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <features>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </features>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk">
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </source>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config">
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </source>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:26:23 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:26:d7:24"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <target dev="tap55a0a0d7-af"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/console.log" append="off"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <video>
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </video>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:26:23 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:26:23 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:26:23 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:26:23 compute-0 nova_compute[259550]: </domain>
Oct 07 14:26:23 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.230 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Preparing to wait for external event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.232 2 DEBUG nova.virt.libvirt.vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:18Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.232 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.233 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.234 2 DEBUG os_vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55a0a0d7-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55a0a0d7-af, col_values=(('external_ids', {'iface-id': '55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:d7:24', 'vm-uuid': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:23 compute-0 NetworkManager[44949]: <info>  [1759847183.2417] manager: (tap55a0a0d7-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.247 2 INFO os_vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af')
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.299 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.300 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.300 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:26:d7:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.301 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Using config drive
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.322 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.763 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating config drive at /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.768 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3a5c5he execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.912 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3a5c5he" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.939 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:26:23 compute-0 nova_compute[259550]: 2025-10-07 14:26:23.943 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.025 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updated VIF entry in instance network info cache for port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.026 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.049 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:26:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 134 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.118 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.119 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deleting local config drive /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config because it was imported into RBD.
Oct 07 14:26:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/676300460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:26:24 compute-0 ceph-mon[74295]: pgmap v1899: 305 pgs: 305 active+clean; 134 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:26:24 compute-0 kernel: tap55a0a0d7-af: entered promiscuous mode
Oct 07 14:26:24 compute-0 NetworkManager[44949]: <info>  [1759847184.1667] manager: (tap55a0a0d7-af): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Oct 07 14:26:24 compute-0 ovn_controller[151684]: 2025-10-07T14:26:24Z|01025|binding|INFO|Claiming lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for this chassis.
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:24 compute-0 ovn_controller[151684]: 2025-10-07T14:26:24Z|01026|binding|INFO|55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac: Claiming fa:16:3e:26:d7:24 10.100.0.10
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.175 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:d7:24 10.100.0.10'], port_security=['fa:16:3e:26:d7:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.177 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.178 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:24 compute-0 ovn_controller[151684]: 2025-10-07T14:26:24Z|01027|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac ovn-installed in OVS
Oct 07 14:26:24 compute-0 ovn_controller[151684]: 2025-10-07T14:26:24Z|01028|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac up in Southbound
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0b3712-fa87-47a3-a6c0-ab084386cc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 systemd-machined[214580]: New machine qemu-128-instance-00000067.
Oct 07 14:26:24 compute-0 systemd-udevd[361978]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:26:24 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000067.
Oct 07 14:26:24 compute-0 NetworkManager[44949]: <info>  [1759847184.2264] device (tap55a0a0d7-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:26:24 compute-0 NetworkManager[44949]: <info>  [1759847184.2275] device (tap55a0a0d7-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.228 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e69335b1-8f58-445f-854e-c1406e204aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.231 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eed1a83f-1299-4d36-8783-ed7eb31a3418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.263 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bfba3d-fe6f-474b-8fb2-863ac9f2af39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.280 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf8cc8f-0731-4141-8e8c-2a5a1afa25d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361988, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.297 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22bf9384-0d39-4b7c-9415-c4ff864e8a2f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775491, 'tstamp': 775491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361990, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775495, 'tstamp': 775495}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361990, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.298 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:24 compute-0 nova_compute[259550]: 2025-10-07 14:26:24.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.301 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.128 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847185.1281853, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.129 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Started (Lifecycle Event)
Oct 07 14:26:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 07 14:26:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 07 14:26:25 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.148 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.153 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847185.129251, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.153 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Paused (Lifecycle Event)
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.175 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.181 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:26:25 compute-0 nova_compute[259550]: 2025-10-07 14:26:25.209 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:26:25 compute-0 ovn_controller[151684]: 2025-10-07T14:26:25Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:26:25 compute-0 ovn_controller[151684]: 2025-10-07T14:26:25Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:26:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct 07 14:26:26 compute-0 podman[362034]: 2025-10-07 14:26:26.089402653 +0000 UTC m=+0.066291682 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:26:26 compute-0 podman[362035]: 2025-10-07 14:26:26.122423436 +0000 UTC m=+0.099689136 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:26:26 compute-0 ceph-mon[74295]: osdmap e259: 3 total, 3 up, 3 in
Oct 07 14:26:26 compute-0 ceph-mon[74295]: pgmap v1901: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct 07 14:26:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:26 compute-0 nova_compute[259550]: 2025-10-07 14:26:26.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 07 14:26:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 07 14:26:27 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 07 14:26:27 compute-0 nova_compute[259550]: 2025-10-07 14:26:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:27 compute-0 nova_compute[259550]: 2025-10-07 14:26:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:26:28 compute-0 nova_compute[259550]: 2025-10-07 14:26:28.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:26:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Oct 07 14:26:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:28.204 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:28 compute-0 ceph-mon[74295]: osdmap e260: 3 total, 3 up, 3 in
Oct 07 14:26:28 compute-0 ceph-mon[74295]: pgmap v1903: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Oct 07 14:26:28 compute-0 nova_compute[259550]: 2025-10-07 14:26:28.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 07 14:26:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 07 14:26:29 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.905 2 DEBUG nova.compute.manager [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.906 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.906 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.907 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.907 2 DEBUG nova.compute.manager [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Processing event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.908 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.912 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847189.9117203, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.912 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Resumed (Lifecycle Event)
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.914 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.919 2 INFO nova.virt.libvirt.driver [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance spawned successfully.
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.920 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.940 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.948 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.948 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.950 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:26:29 compute-0 nova_compute[259550]: 2025-10-07 14:26:29.976 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:26:30 compute-0 nova_compute[259550]: 2025-10-07 14:26:30.011 2 INFO nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 11.40 seconds to spawn the instance on the hypervisor.
Oct 07 14:26:30 compute-0 nova_compute[259550]: 2025-10-07 14:26:30.012 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 162 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.2 MiB/s wr, 212 op/s
Oct 07 14:26:30 compute-0 nova_compute[259550]: 2025-10-07 14:26:30.075 2 INFO nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 12.34 seconds to build instance.
Oct 07 14:26:30 compute-0 nova_compute[259550]: 2025-10-07 14:26:30.169 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:30 compute-0 ceph-mon[74295]: osdmap e261: 3 total, 3 up, 3 in
Oct 07 14:26:30 compute-0 ceph-mon[74295]: pgmap v1905: 305 pgs: 305 active+clean; 162 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.2 MiB/s wr, 212 op/s
Oct 07 14:26:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 07 14:26:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 07 14:26:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.381 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.383 2 INFO nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Terminating instance
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.385 2 DEBUG nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:26:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:31 compute-0 kernel: tap55a0a0d7-af (unregistering): left promiscuous mode
Oct 07 14:26:31 compute-0 NetworkManager[44949]: <info>  [1759847191.4322] device (tap55a0a0d7-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:26:31 compute-0 ovn_controller[151684]: 2025-10-07T14:26:31Z|01029|binding|INFO|Releasing lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac from this chassis (sb_readonly=0)
Oct 07 14:26:31 compute-0 ovn_controller[151684]: 2025-10-07T14:26:31Z|01030|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac down in Southbound
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 ovn_controller[151684]: 2025-10-07T14:26:31Z|01031|binding|INFO|Removing iface tap55a0a0d7-af ovn-installed in OVS
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.449 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:d7:24 10.100.0.10'], port_security=['fa:16:3e:26:d7:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.450 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.452 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4de366b-230c-4130-ad37-8398baf8a577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.505 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ffb6d5-1acb-4c10-b7ea-77981c6f5b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e25820-40a4-4dd4-8db5-ab8bda454a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 07 14:26:31 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000067.scope: Consumed 2.398s CPU time.
Oct 07 14:26:31 compute-0 systemd-machined[214580]: Machine qemu-128-instance-00000067 terminated.
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.546 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1cef83af-2bcc-48ae-850b-475d3cbbb5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.570 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7e446c-9873-43a0-9761-ce8669073547]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362086, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8991129a-7cc6-4436-8be7-62be64842082]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775491, 'tstamp': 775491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362087, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775495, 'tstamp': 775495}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362087, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.605 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.607 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.623 2 INFO nova.virt.libvirt.driver [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance destroyed successfully.
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.624 2 DEBUG nova.objects.instance [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.639 2 DEBUG nova.virt.libvirt.vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:26:30Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.639 2 DEBUG nova.network.os_vif_util [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.640 2 DEBUG nova.network.os_vif_util [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.640 2 DEBUG os_vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55a0a0d7-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.650 2 INFO os_vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af')
Oct 07 14:26:31 compute-0 nova_compute[259550]: 2025-10-07 14:26:31.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.093 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.094 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.094 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.095 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.095 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.096 2 WARNING nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received unexpected event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with vm_state active and task_state deleting.
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.096 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.096 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.097 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.098 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.098 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.099 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.120 2 INFO nova.virt.libvirt.driver [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deleting instance files /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_del
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.120 2 INFO nova.virt.libvirt.driver [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deletion of /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_del complete
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.213 2 INFO nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG oslo.service.loopingcall [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:26:32 compute-0 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG nova.network.neutron [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:26:32 compute-0 ceph-mon[74295]: osdmap e262: 3 total, 3 up, 3 in
Oct 07 14:26:32 compute-0 ceph-mon[74295]: pgmap v1907: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664306179592368 of space, bias 1.0, pg target 0.19992918538777105 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:26:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:26:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:26:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:26:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:26:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:26:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:26:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.693 2 DEBUG nova.network.neutron [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.754 2 INFO nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 1.54 seconds to deallocate network for instance.
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.820 2 DEBUG nova.compute.manager [req-6aadea6d-6437-4fc9-b4cb-0983f1ad0245 req-c5169e4e-e8eb-42d5-94b9-3c61873cf495 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-deleted-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.880 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.881 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:33 compute-0 nova_compute[259550]: 2025-10-07 14:26:33.960 2 DEBUG oslo_concurrency.processutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 147 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 296 op/s
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.259 2 DEBUG nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.259 2 WARNING nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received unexpected event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with vm_state deleted and task_state None.
Oct 07 14:26:34 compute-0 ceph-mon[74295]: pgmap v1908: 305 pgs: 305 active+clean; 147 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 296 op/s
Oct 07 14:26:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:26:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102534354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.472 2 DEBUG oslo_concurrency.processutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.479 2 DEBUG nova.compute.provider_tree [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.494 2 DEBUG nova.scheduler.client.report [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.527 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.556 2 INFO nova.scheduler.client.report [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d
Oct 07 14:26:34 compute-0 nova_compute[259550]: 2025-10-07 14:26:34.632 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3102534354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.454551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195454644, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1864, "num_deletes": 261, "total_data_size": 2800524, "memory_usage": 2854848, "flush_reason": "Manual Compaction"}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195476589, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2735554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38315, "largest_seqno": 40178, "table_properties": {"data_size": 2726903, "index_size": 5337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18250, "raw_average_key_size": 20, "raw_value_size": 2709520, "raw_average_value_size": 3075, "num_data_blocks": 235, "num_entries": 881, "num_filter_entries": 881, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847037, "oldest_key_time": 1759847037, "file_creation_time": 1759847195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 22080 microseconds, and 6109 cpu microseconds.
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.476644) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2735554 bytes OK
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.476665) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485650) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485676) EVENT_LOG_v1 {"time_micros": 1759847195485667, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2792411, prev total WAL file size 2792411, number of live WAL files 2.
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.486700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2671KB)], [86(8342KB)]
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195486802, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11278460, "oldest_snapshot_seqno": -1}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6463 keys, 9649569 bytes, temperature: kUnknown
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195562850, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9649569, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605087, "index_size": 27220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164328, "raw_average_key_size": 25, "raw_value_size": 9488079, "raw_average_value_size": 1468, "num_data_blocks": 1095, "num_entries": 6463, "num_filter_entries": 6463, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.563289) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9649569 bytes
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.564764) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.9 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 6992, records dropped: 529 output_compression: NoCompression
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.564792) EVENT_LOG_v1 {"time_micros": 1759847195564779, "job": 50, "event": "compaction_finished", "compaction_time_micros": 76262, "compaction_time_cpu_micros": 26577, "output_level": 6, "num_output_files": 1, "total_output_size": 9649569, "num_input_records": 6992, "num_output_records": 6463, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195566047, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195569135, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.486526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:35 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:26:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct 07 14:26:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 07 14:26:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 07 14:26:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 07 14:26:36 compute-0 ceph-mon[74295]: pgmap v1909: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct 07 14:26:36 compute-0 ceph-mon[74295]: osdmap e263: 3 total, 3 up, 3 in
Oct 07 14:26:36 compute-0 nova_compute[259550]: 2025-10-07 14:26:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:36 compute-0 nova_compute[259550]: 2025-10-07 14:26:36.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:38 compute-0 nova_compute[259550]: 2025-10-07 14:26:38.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:38 compute-0 nova_compute[259550]: 2025-10-07 14:26:38.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:38 compute-0 sudo[362140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:38 compute-0 sudo[362140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362140]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 114 KiB/s wr, 205 op/s
Oct 07 14:26:38 compute-0 sudo[362165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:26:38 compute-0 sudo[362165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362165]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 sudo[362190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:38 compute-0 sudo[362190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362190]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 sudo[362215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:26:38 compute-0 sudo[362215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362215]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1a96c624-057f-4e87-8ca1-7aa15a409cb0 does not exist
Oct 07 14:26:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e414cfa-6513-4b37-994b-8ec0c1cc6103 does not exist
Oct 07 14:26:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7536aeaf-db4c-46b6-81a7-6cbaeda6baf8 does not exist
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:26:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:26:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:26:38 compute-0 sudo[362270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:38 compute-0 sudo[362270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362270]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 sudo[362295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:26:38 compute-0 sudo[362295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362295]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 sudo[362320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:38 compute-0 sudo[362320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:38 compute-0 sudo[362320]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:38 compute-0 nova_compute[259550]: 2025-10-07 14:26:38.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:39 compute-0 sudo[362345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:26:39 compute-0 sudo[362345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:39 compute-0 ceph-mon[74295]: pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 114 KiB/s wr, 205 op/s
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:26:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.322677593 +0000 UTC m=+0.037103702 container create b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:26:39 compute-0 systemd[1]: Started libpod-conmon-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope.
Oct 07 14:26:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.304319623 +0000 UTC m=+0.018745772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.409204116 +0000 UTC m=+0.123630265 container init b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.419138741 +0000 UTC m=+0.133564860 container start b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.423423845 +0000 UTC m=+0.137849984 container attach b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:26:39 compute-0 inspiring_liskov[362426]: 167 167
Oct 07 14:26:39 compute-0 systemd[1]: libpod-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope: Deactivated successfully.
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.42582946 +0000 UTC m=+0.140255599 container died b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec523373747ed95c79223a713f48961293f3a656bdf996dabc2a30666c8e0844-merged.mount: Deactivated successfully.
Oct 07 14:26:39 compute-0 podman[362410]: 2025-10-07 14:26:39.468337966 +0000 UTC m=+0.182764085 container remove b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:26:39 compute-0 systemd[1]: libpod-conmon-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope: Deactivated successfully.
Oct 07 14:26:39 compute-0 podman[362449]: 2025-10-07 14:26:39.687444711 +0000 UTC m=+0.065885052 container create af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:26:39 compute-0 systemd[1]: Started libpod-conmon-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope.
Oct 07 14:26:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:39 compute-0 podman[362449]: 2025-10-07 14:26:39.663419379 +0000 UTC m=+0.041859790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:39 compute-0 podman[362449]: 2025-10-07 14:26:39.780288701 +0000 UTC m=+0.158729042 container init af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:26:39 compute-0 podman[362449]: 2025-10-07 14:26:39.787006301 +0000 UTC m=+0.165446612 container start af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:26:39 compute-0 podman[362449]: 2025-10-07 14:26:39.843171852 +0000 UTC m=+0.221612213 container attach af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:26:39 compute-0 nova_compute[259550]: 2025-10-07 14:26:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:39 compute-0 nova_compute[259550]: 2025-10-07 14:26:39.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:26:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 160 op/s
Oct 07 14:26:40 compute-0 ceph-mon[74295]: pgmap v1912: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 160 op/s
Oct 07 14:26:40 compute-0 angry_grothendieck[362465]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:26:40 compute-0 angry_grothendieck[362465]: --> relative data size: 1.0
Oct 07 14:26:40 compute-0 angry_grothendieck[362465]: --> All data devices are unavailable
Oct 07 14:26:40 compute-0 systemd[1]: libpod-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope: Deactivated successfully.
Oct 07 14:26:40 compute-0 podman[362449]: 2025-10-07 14:26:40.832545358 +0000 UTC m=+1.210985709 container died af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f-merged.mount: Deactivated successfully.
Oct 07 14:26:40 compute-0 podman[362449]: 2025-10-07 14:26:40.888109173 +0000 UTC m=+1.266549494 container remove af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:26:40 compute-0 systemd[1]: libpod-conmon-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope: Deactivated successfully.
Oct 07 14:26:40 compute-0 sudo[362345]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:40 compute-0 sudo[362506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:40 compute-0 sudo[362506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:40 compute-0 sudo[362506]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:40 compute-0 nova_compute[259550]: 2025-10-07 14:26:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:40 compute-0 nova_compute[259550]: 2025-10-07 14:26:40.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:26:41 compute-0 sudo[362531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:26:41 compute-0 sudo[362531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:41 compute-0 sudo[362531]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:41 compute-0 sudo[362556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:41 compute-0 sudo[362556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:41 compute-0 sudo[362556]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:41 compute-0 sudo[362581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:26:41 compute-0 sudo[362581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 07 14:26:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 07 14:26:41 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.470562376 +0000 UTC m=+0.050948462 container create b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:26:41 compute-0 systemd[1]: Started libpod-conmon-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope.
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.443181845 +0000 UTC m=+0.023567961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:41 compute-0 nova_compute[259550]: 2025-10-07 14:26:41.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.672922684 +0000 UTC m=+0.253308780 container init b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.679760006 +0000 UTC m=+0.260146092 container start b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:26:41 compute-0 systemd[1]: libpod-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope: Deactivated successfully.
Oct 07 14:26:41 compute-0 peaceful_meitner[362661]: 167 167
Oct 07 14:26:41 compute-0 conmon[362661]: conmon b440b574f73aa1e3139b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope/container/memory.events
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.700972633 +0000 UTC m=+0.281358719 container attach b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:26:41 compute-0 podman[362644]: 2025-10-07 14:26:41.701720812 +0000 UTC m=+0.282106898 container died b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:26:41 compute-0 nova_compute[259550]: 2025-10-07 14:26:41.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:41 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 07 14:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9b32d343572ecec74821d80560408261919626921ee42151b9ead6414bfabc-merged.mount: Deactivated successfully.
Oct 07 14:26:41 compute-0 nova_compute[259550]: 2025-10-07 14:26:41.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:42 compute-0 podman[362644]: 2025-10-07 14:26:42.002213332 +0000 UTC m=+0.582599418 container remove b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:26:42 compute-0 systemd[1]: libpod-conmon-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope: Deactivated successfully.
Oct 07 14:26:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 20 KiB/s wr, 60 op/s
Oct 07 14:26:42 compute-0 podman[362687]: 2025-10-07 14:26:42.183755773 +0000 UTC m=+0.040130453 container create 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:26:42 compute-0 systemd[1]: Started libpod-conmon-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope.
Oct 07 14:26:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:42 compute-0 podman[362687]: 2025-10-07 14:26:42.166710488 +0000 UTC m=+0.023085178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:42 compute-0 podman[362687]: 2025-10-07 14:26:42.284775933 +0000 UTC m=+0.141150603 container init 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:26:42 compute-0 podman[362687]: 2025-10-07 14:26:42.295640073 +0000 UTC m=+0.152014743 container start 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:26:42 compute-0 podman[362687]: 2025-10-07 14:26:42.302331371 +0000 UTC m=+0.158706051 container attach 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:26:42 compute-0 ceph-mon[74295]: osdmap e264: 3 total, 3 up, 3 in
Oct 07 14:26:42 compute-0 ceph-mon[74295]: pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 20 KiB/s wr, 60 op/s
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]: {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     "0": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "devices": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "/dev/loop3"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             ],
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_name": "ceph_lv0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_size": "21470642176",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "name": "ceph_lv0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "tags": {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_name": "ceph",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.crush_device_class": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.encrypted": "0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_id": "0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.vdo": "0"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             },
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "vg_name": "ceph_vg0"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         }
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     ],
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     "1": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "devices": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "/dev/loop4"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             ],
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_name": "ceph_lv1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_size": "21470642176",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "name": "ceph_lv1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "tags": {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_name": "ceph",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.crush_device_class": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.encrypted": "0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_id": "1",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.vdo": "0"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             },
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "vg_name": "ceph_vg1"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         }
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     ],
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     "2": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "devices": [
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "/dev/loop5"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             ],
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_name": "ceph_lv2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_size": "21470642176",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "name": "ceph_lv2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "tags": {
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.cluster_name": "ceph",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.crush_device_class": "",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.encrypted": "0",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osd_id": "2",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:                 "ceph.vdo": "0"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             },
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "type": "block",
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:             "vg_name": "ceph_vg2"
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:         }
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]:     ]
Oct 07 14:26:43 compute-0 wizardly_aryabhata[362704]: }
Oct 07 14:26:43 compute-0 systemd[1]: libpod-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope: Deactivated successfully.
Oct 07 14:26:43 compute-0 podman[362687]: 2025-10-07 14:26:43.081467941 +0000 UTC m=+0.937842641 container died 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe-merged.mount: Deactivated successfully.
Oct 07 14:26:43 compute-0 podman[362687]: 2025-10-07 14:26:43.15704511 +0000 UTC m=+1.013419780 container remove 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 07 14:26:43 compute-0 systemd[1]: libpod-conmon-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope: Deactivated successfully.
Oct 07 14:26:43 compute-0 sudo[362581]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:43 compute-0 sudo[362725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:43 compute-0 sudo[362725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:43 compute-0 sudo[362725]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:43 compute-0 sudo[362750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:26:43 compute-0 sudo[362750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:43 compute-0 sudo[362750]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:43 compute-0 sudo[362775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:43 compute-0 sudo[362775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:43 compute-0 sudo[362775]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:43 compute-0 sudo[362800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:26:43 compute-0 sudo[362800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.794912504 +0000 UTC m=+0.040835992 container create 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:26:43 compute-0 systemd[1]: Started libpod-conmon-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope.
Oct 07 14:26:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.775602578 +0000 UTC m=+0.021525976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.874323756 +0000 UTC m=+0.120247184 container init 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.881759534 +0000 UTC m=+0.127682932 container start 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.884973441 +0000 UTC m=+0.130896859 container attach 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:26:43 compute-0 reverent_bohr[362881]: 167 167
Oct 07 14:26:43 compute-0 systemd[1]: libpod-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope: Deactivated successfully.
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.887274602 +0000 UTC m=+0.133198000 container died 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd93a073c0bf4be28434900ff1202d3927e054842db60afdfa87e00a1e87ec5-merged.mount: Deactivated successfully.
Oct 07 14:26:43 compute-0 podman[362865]: 2025-10-07 14:26:43.934484934 +0000 UTC m=+0.180408332 container remove 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:26:43 compute-0 systemd[1]: libpod-conmon-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope: Deactivated successfully.
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.008 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:26:44 compute-0 podman[362906]: 2025-10-07 14:26:44.103385127 +0000 UTC m=+0.043506113 container create a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:26:44 compute-0 systemd[1]: Started libpod-conmon-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope.
Oct 07 14:26:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:26:44 compute-0 podman[362906]: 2025-10-07 14:26:44.085696524 +0000 UTC m=+0.025817530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:26:44 compute-0 podman[362906]: 2025-10-07 14:26:44.187169976 +0000 UTC m=+0.127290982 container init a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:26:44 compute-0 ceph-mon[74295]: pgmap v1915: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:26:44 compute-0 podman[362906]: 2025-10-07 14:26:44.194474301 +0000 UTC m=+0.134595287 container start a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:26:44 compute-0 podman[362906]: 2025-10-07 14:26:44.200133672 +0000 UTC m=+0.140254658 container attach a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 07 14:26:44 compute-0 podman[362924]: 2025-10-07 14:26:44.233866863 +0000 UTC m=+0.086359569 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:26:44 compute-0 podman[362919]: 2025-10-07 14:26:44.237640154 +0000 UTC m=+0.092967815 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:26:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:26:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101934396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.518 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.603 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.603 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.748 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.94279098510742GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:26:44 compute-0 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.042 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.043 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.043 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:26:45 compute-0 recursing_clarke[362925]: {
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_id": 2,
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "type": "bluestore"
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     },
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_id": 1,
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "type": "bluestore"
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     },
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_id": 0,
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:         "type": "bluestore"
Oct 07 14:26:45 compute-0 recursing_clarke[362925]:     }
Oct 07 14:26:45 compute-0 recursing_clarke[362925]: }
Oct 07 14:26:45 compute-0 systemd[1]: libpod-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope: Deactivated successfully.
Oct 07 14:26:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/101934396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:45 compute-0 podman[363016]: 2025-10-07 14:26:45.232324702 +0000 UTC m=+0.031309447 container died a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 07 14:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc-merged.mount: Deactivated successfully.
Oct 07 14:26:45 compute-0 podman[363016]: 2025-10-07 14:26:45.29024645 +0000 UTC m=+0.089231205 container remove a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:26:45 compute-0 systemd[1]: libpod-conmon-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope: Deactivated successfully.
Oct 07 14:26:45 compute-0 sudo[362800]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.362 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:26:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:26:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e4ffc561-563f-41ca-87f0-c225eaf46553 does not exist
Oct 07 14:26:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e3f30dd9-81ca-43d3-8100-fadebc10cf8e does not exist
Oct 07 14:26:45 compute-0 sudo[363032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:26:45 compute-0 sudo[363032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:45 compute-0 sudo[363032]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:45 compute-0 sudo[363058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:26:45 compute-0 sudo[363058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:26:45 compute-0 sudo[363058]: pam_unix(sudo:session): session closed for user root
Oct 07 14:26:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:26:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626018372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.870 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.878 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.896 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.917 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:26:45 compute-0 nova_compute[259550]: 2025-10-07 14:26:45.918 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:26:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:26:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3626018372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:26:46 compute-0 ceph-mon[74295]: pgmap v1916: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:46 compute-0 nova_compute[259550]: 2025-10-07 14:26:46.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847191.6208487, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:26:46 compute-0 nova_compute[259550]: 2025-10-07 14:26:46.626 2 INFO nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Stopped (Lifecycle Event)
Oct 07 14:26:46 compute-0 nova_compute[259550]: 2025-10-07 14:26:46.647 2 DEBUG nova.compute.manager [None req-5d5a820c-14f1-4477-9549-bb4895bf55b5 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:26:46 compute-0 nova_compute[259550]: 2025-10-07 14:26:46.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:46 compute-0 nova_compute[259550]: 2025-10-07 14:26:46.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:48 compute-0 ceph-mon[74295]: pgmap v1917: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:48 compute-0 nova_compute[259550]: 2025-10-07 14:26:48.893 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:50 compute-0 ceph-mon[74295]: pgmap v1918: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 07 14:26:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:51 compute-0 nova_compute[259550]: 2025-10-07 14:26:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:51 compute-0 nova_compute[259550]: 2025-10-07 14:26:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:51 compute-0 nova_compute[259550]: 2025-10-07 14:26:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:51 compute-0 nova_compute[259550]: 2025-10-07 14:26:51.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:26:51 compute-0 nova_compute[259550]: 2025-10-07 14:26:51.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s wr, 0 op/s
Oct 07 14:26:52 compute-0 ceph-mon[74295]: pgmap v1919: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s wr, 0 op/s
Oct 07 14:26:52 compute-0 nova_compute[259550]: 2025-10-07 14:26:52.435 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:26:52 compute-0 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:26:52 compute-0 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:26:52 compute-0 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:26:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:26:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:26:54 compute-0 nova_compute[259550]: 2025-10-07 14:26:54.146 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:26:54 compute-0 ceph-mon[74295]: pgmap v1920: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:26:54 compute-0 nova_compute[259550]: 2025-10-07 14:26:54.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:26:54 compute-0 nova_compute[259550]: 2025-10-07 14:26:54.210 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:26:54 compute-0 nova_compute[259550]: 2025-10-07 14:26:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:26:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:26:56 compute-0 ceph-mon[74295]: pgmap v1921: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:26:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:26:56 compute-0 nova_compute[259550]: 2025-10-07 14:26:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:56 compute-0 nova_compute[259550]: 2025-10-07 14:26:56.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:26:57 compute-0 podman[363103]: 2025-10-07 14:26:57.073810233 +0000 UTC m=+0.061826993 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:26:57 compute-0 podman[363104]: 2025-10-07 14:26:57.104914305 +0000 UTC m=+0.091513957 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 07 14:26:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:26:58 compute-0 ceph-mon[74295]: pgmap v1922: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 07 14:27:00 compute-0 ceph-mon[74295]: pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 07 14:27:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:01 compute-0 nova_compute[259550]: 2025-10-07 14:27:01.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:01 compute-0 nova_compute[259550]: 2025-10-07 14:27:01.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:02 compute-0 ceph-mon[74295]: pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.865 2 INFO nova.compute.manager [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Pausing
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.865 2 DEBUG nova.objects.instance [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.894 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847223.8941445, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.895 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.897 2 DEBUG nova.compute.manager [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.919 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.922 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:27:03 compute-0 nova_compute[259550]: 2025-10-07 14:27:03.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 07 14:27:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:04 compute-0 ceph-mon[74295]: pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:06 compute-0 ceph-mon[74295]: pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.601 2 INFO nova.compute.manager [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Unpausing
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.602 2 DEBUG nova.objects.instance [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.626 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847226.626048, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.626 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)
Oct 07 14:27:06 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.630 2 DEBUG nova.virt.libvirt.guest [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.631 2 DEBUG nova.compute.manager [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.654 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:06 compute-0 nova_compute[259550]: 2025-10-07 14:27:06.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:08 compute-0 ceph-mon[74295]: pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:10 compute-0 ceph-mon[74295]: pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 07 14:27:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:11 compute-0 nova_compute[259550]: 2025-10-07 14:27:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:11 compute-0 nova_compute[259550]: 2025-10-07 14:27:11.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 07 14:27:12 compute-0 ceph-mon[74295]: pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 07 14:27:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:14 compute-0 ceph-mon[74295]: pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:15 compute-0 podman[363149]: 2025-10-07 14:27:15.076972297 +0000 UTC m=+0.061294018 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:27:15 compute-0 podman[363150]: 2025-10-07 14:27:15.088953968 +0000 UTC m=+0.062363227 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:27:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:16 compute-0 ceph-mon[74295]: pgmap v1931: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:16 compute-0 nova_compute[259550]: 2025-10-07 14:27:16.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:16 compute-0 nova_compute[259550]: 2025-10-07 14:27:16.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:18 compute-0 ceph-mon[74295]: pgmap v1932: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:27:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:19.340 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:19 compute-0 nova_compute[259550]: 2025-10-07 14:27:19.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:19.341 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:27:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 07 14:27:20 compute-0 ceph-mon[74295]: pgmap v1933: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 07 14:27:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:21 compute-0 nova_compute[259550]: 2025-10-07 14:27:21.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:21 compute-0 nova_compute[259550]: 2025-10-07 14:27:21.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 07 14:27:22 compute-0 ceph-mon[74295]: pgmap v1934: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:27:22
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log']
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:27:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:27:23 compute-0 nova_compute[259550]: 2025-10-07 14:27:23.108 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:23 compute-0 nova_compute[259550]: 2025-10-07 14:27:23.108 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:23 compute-0 nova_compute[259550]: 2025-10-07 14:27:23.109 2 INFO nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shelving
Oct 07 14:27:23 compute-0 nova_compute[259550]: 2025-10-07 14:27:23.147 2 DEBUG nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:27:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 2 op/s
Oct 07 14:27:24 compute-0 ceph-mon[74295]: pgmap v1935: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 2 op/s
Oct 07 14:27:25 compute-0 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct 07 14:27:25 compute-0 NetworkManager[44949]: <info>  [1759847245.3728] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:27:25 compute-0 nova_compute[259550]: 2025-10-07 14:27:25.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:25 compute-0 ovn_controller[151684]: 2025-10-07T14:27:25Z|01032|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct 07 14:27:25 compute-0 ovn_controller[151684]: 2025-10-07T14:27:25Z|01033|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct 07 14:27:25 compute-0 ovn_controller[151684]: 2025-10-07T14:27:25Z|01034|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct 07 14:27:25 compute-0 nova_compute[259550]: 2025-10-07 14:27:25.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:25 compute-0 nova_compute[259550]: 2025-10-07 14:27:25.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:25 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 07 14:27:25 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Consumed 16.697s CPU time.
Oct 07 14:27:25 compute-0 systemd-machined[214580]: Machine qemu-127-instance-00000066 terminated.
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.475 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.476 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.477 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.478 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb694b5-0eab-4e4a-9dbc-52fc8964852d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.480 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : haproxy version is 2.8.14-c23fe91
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : path to executable is /usr/sbin/haproxy
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : Exiting Master process...
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : Exiting Master process...
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [ALERT]    (361561) : Current worker (361563) exited with code 143 (Terminated)
Oct 07 14:27:25 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : All workers exited. Exiting... (0)
Oct 07 14:27:25 compute-0 systemd[1]: libpod-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope: Deactivated successfully.
Oct 07 14:27:25 compute-0 podman[363214]: 2025-10-07 14:27:25.634545583 +0000 UTC m=+0.061241518 container died d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d441a2678501fe529d289bed0f151b8829262887c81920cdf574184ba61c8960-merged.mount: Deactivated successfully.
Oct 07 14:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a-userdata-shm.mount: Deactivated successfully.
Oct 07 14:27:25 compute-0 podman[363214]: 2025-10-07 14:27:25.70029712 +0000 UTC m=+0.126993055 container cleanup d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:27:25 compute-0 systemd[1]: libpod-conmon-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope: Deactivated successfully.
Oct 07 14:27:25 compute-0 podman[363252]: 2025-10-07 14:27:25.7657964 +0000 UTC m=+0.043383420 container remove d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf05dfa-67c3-4baf-b19f-d004ee958a45]: (4, ('Tue Oct  7 02:27:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a)\nd52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a\nTue Oct  7 02:27:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a)\nd52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9670f37-8321-4871-98e2-bd6887a3efb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.778 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:25 compute-0 nova_compute[259550]: 2025-10-07 14:27:25.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:25 compute-0 kernel: tapdf30eae3-80: left promiscuous mode
Oct 07 14:27:25 compute-0 nova_compute[259550]: 2025-10-07 14:27:25.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b004946-50be-42a2-9929-d1aaf9d99c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1945d17c-339a-4e1c-9369-350addd6dfff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[546afd28-963f-42c7-899c-fc9475836636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd21a908-8359-429c-801d-607743a2d9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775473, 'reachable_time': 26548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363269, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.861 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:27:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.861 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa5634-5603-4b23-bbab-477b805fe417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:25 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct 07 14:27:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:27:26 compute-0 nova_compute[259550]: 2025-10-07 14:27:26.168 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance shutdown successfully after 3 seconds.
Oct 07 14:27:26 compute-0 nova_compute[259550]: 2025-10-07 14:27:26.176 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.
Oct 07 14:27:26 compute-0 nova_compute[259550]: 2025-10-07 14:27:26.177 2 DEBUG nova.objects.instance [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:26 compute-0 ceph-mon[74295]: pgmap v1936: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:27:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:26 compute-0 nova_compute[259550]: 2025-10-07 14:27:26.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:26 compute-0 nova_compute[259550]: 2025-10-07 14:27:26.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:28 compute-0 podman[363270]: 2025-10-07 14:27:28.073968785 +0000 UTC m=+0.057894218 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:27:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:27:28 compute-0 podman[363271]: 2025-10-07 14:27:28.118759282 +0000 UTC m=+0.094975089 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:27:28 compute-0 nova_compute[259550]: 2025-10-07 14:27:28.197 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Beginning cold snapshot process
Oct 07 14:27:28 compute-0 ceph-mon[74295]: pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:27:28 compute-0 nova_compute[259550]: 2025-10-07 14:27:28.739 2 DEBUG nova.virt.libvirt.imagebackend [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:27:29 compute-0 nova_compute[259550]: 2025-10-07 14:27:29.135 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] creating snapshot(6f362715705247eab083d2a7fd304410) on rbd image(91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:27:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 07 14:27:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 07 14:27:29 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 07 14:27:29 compute-0 nova_compute[259550]: 2025-10-07 14:27:29.278 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] cloning vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk@6f362715705247eab083d2a7fd304410 to images/6cbf90eb-7382-4407-9e6c-a36d06a02db1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:27:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:29.343 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:29 compute-0 nova_compute[259550]: 2025-10-07 14:27:29.399 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] flattening images/6cbf90eb-7382-4407-9e6c-a36d06a02db1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:27:29 compute-0 nova_compute[259550]: 2025-10-07 14:27:29.909 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] removing snapshot(6f362715705247eab083d2a7fd304410) on rbd image(91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:27:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 10 KiB/s wr, 2 op/s
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.159 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.159 2 DEBUG nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.159 2 WARNING nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state shelving_image_uploading.
Oct 07 14:27:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 07 14:27:30 compute-0 ceph-mon[74295]: osdmap e265: 3 total, 3 up, 3 in
Oct 07 14:27:30 compute-0 ceph-mon[74295]: pgmap v1939: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 10 KiB/s wr, 2 op/s
Oct 07 14:27:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 07 14:27:30 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 07 14:27:30 compute-0 nova_compute[259550]: 2025-10-07 14:27:30.257 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] creating snapshot(snap) on rbd image(6cbf90eb-7382-4407-9e6c-a36d06a02db1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:27:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 07 14:27:31 compute-0 ceph-mon[74295]: osdmap e266: 3 total, 3 up, 3 in
Oct 07 14:27:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 07 14:27:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 07 14:27:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:31 compute-0 nova_compute[259550]: 2025-10-07 14:27:31.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:31 compute-0 nova_compute[259550]: 2025-10-07 14:27:31.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 128 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 741 KiB/s wr, 6 op/s
Oct 07 14:27:32 compute-0 ceph-mon[74295]: osdmap e267: 3 total, 3 up, 3 in
Oct 07 14:27:32 compute-0 ceph-mon[74295]: pgmap v1942: 305 pgs: 305 active+clean; 128 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 741 KiB/s wr, 6 op/s
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.451 2 DEBUG nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:27:32 compute-0 nova_compute[259550]: 2025-10-07 14:27:32.453 2 WARNING nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state shelving_image_uploading.
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594638235006837 of space, bias 1.0, pg target 0.2278391470502051 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007368255315165723 of space, bias 1.0, pg target 0.22104765945497168 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:27:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:27:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:27:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:27:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:27:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:27:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.101 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.103 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.104 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ae6f39-e3a6-4eaa-b88e-312bd7f89514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:27:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.488 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Snapshot image upload complete
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.489 2 DEBUG nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.682 2 INFO nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shelve offloading
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.689 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.690 2 DEBUG nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.692 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.693 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:27:33 compute-0 nova_compute[259550]: 2025-10-07 14:27:33.693 2 DEBUG nova.network.neutron [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:27:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 172 op/s
Oct 07 14:27:34 compute-0 ceph-mon[74295]: pgmap v1943: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 172 op/s
Oct 07 14:27:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 150 op/s
Oct 07 14:27:36 compute-0 ceph-mon[74295]: pgmap v1944: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 150 op/s
Oct 07 14:27:36 compute-0 nova_compute[259550]: 2025-10-07 14:27:36.208 2 DEBUG nova.network.neutron [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:27:36 compute-0 nova_compute[259550]: 2025-10-07 14:27:36.279 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:27:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 07 14:27:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 07 14:27:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 07 14:27:36 compute-0 nova_compute[259550]: 2025-10-07 14:27:36.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:36 compute-0 nova_compute[259550]: 2025-10-07 14:27:36.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:37 compute-0 ceph-mon[74295]: osdmap e268: 3 total, 3 up, 3 in
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.662 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.662 2 DEBUG nova.objects.instance [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.682 2 DEBUG nova.virt.libvirt.vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:27:28Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.682 2 DEBUG nova.network.os_vif_util [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.683 2 DEBUG nova.network.os_vif_util [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.683 2 DEBUG os_vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d87c94f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.741 2 INFO os_vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.776 2 DEBUG nova.compute.manager [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.777 2 DEBUG nova.compute.manager [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.777 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.778 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:27:37 compute-0 nova_compute[259550]: 2025-10-07 14:27:37.778 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:27:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 07 14:27:38 compute-0 nova_compute[259550]: 2025-10-07 14:27:38.225 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting instance files /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del
Oct 07 14:27:38 compute-0 nova_compute[259550]: 2025-10-07 14:27:38.226 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deletion of /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del complete
Oct 07 14:27:38 compute-0 ceph-mon[74295]: pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct 07 14:27:38 compute-0 nova_compute[259550]: 2025-10-07 14:27:38.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:38 compute-0 nova_compute[259550]: 2025-10-07 14:27:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.202 2 INFO nova.scheduler.client.report [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.346 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.347 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.380 2 DEBUG oslo_concurrency.processutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:27:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174143164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.838 2 DEBUG oslo_concurrency.processutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.844 2 DEBUG nova.compute.provider_tree [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.873 2 DEBUG nova.scheduler.client.report [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:27:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3174143164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.919 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.967 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:39 compute-0 nova_compute[259550]: 2025-10-07 14:27:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 165 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 135 op/s
Oct 07 14:27:40 compute-0 nova_compute[259550]: 2025-10-07 14:27:40.623 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847245.6214035, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:40 compute-0 nova_compute[259550]: 2025-10-07 14:27:40.624 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Stopped (Lifecycle Event)
Oct 07 14:27:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.653 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[015cd650-55a2-4489-8892-8eee578c585b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:40 compute-0 nova_compute[259550]: 2025-10-07 14:27:40.658 2 DEBUG nova.compute.manager [None req-f5b28fef-ffb2-4c06-8edd-0afbe914b1a2 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:40 compute-0 ceph-mon[74295]: pgmap v1947: 305 pgs: 305 active+clean; 165 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 135 op/s
Oct 07 14:27:40 compute-0 nova_compute[259550]: 2025-10-07 14:27:40.944 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:27:40 compute-0 nova_compute[259550]: 2025-10-07 14:27:40.944 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:27:41 compute-0 nova_compute[259550]: 2025-10-07 14:27:41.003 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:27:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:41 compute-0 nova_compute[259550]: 2025-10-07 14:27:41.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:41 compute-0 nova_compute[259550]: 2025-10-07 14:27:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:41 compute-0 nova_compute[259550]: 2025-10-07 14:27:41.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:27:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 120 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 132 op/s
Oct 07 14:27:42 compute-0 ceph-mon[74295]: pgmap v1948: 305 pgs: 305 active+clean; 120 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 132 op/s
Oct 07 14:27:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.626 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.627 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.628 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[508a3994-18f8-42f0-8b71-aaf44b9e9442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:42 compute-0 nova_compute[259550]: 2025-10-07 14:27:42.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 07 14:27:44 compute-0 ceph-mon[74295]: pgmap v1949: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 07 14:27:44 compute-0 nova_compute[259550]: 2025-10-07 14:27:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.558 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.559 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.559 2 INFO nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Unshelving
Oct 07 14:27:45 compute-0 sudo[363498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:45 compute-0 sudo[363498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:45 compute-0 sudo[363498]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:45 compute-0 sudo[363535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:27:45 compute-0 podman[363523]: 2025-10-07 14:27:45.752459341 +0000 UTC m=+0.058572667 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:45 compute-0 sudo[363535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:45 compute-0 sudo[363535]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.759 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.780 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:45 compute-0 podman[363522]: 2025-10-07 14:27:45.784584849 +0000 UTC m=+0.094740352 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.805 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.805 2 INFO nova.compute.claims [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:27:45 compute-0 sudo[363585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:45 compute-0 sudo[363585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:45 compute-0 sudo[363585]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:45 compute-0 sudo[363610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:27:45 compute-0 sudo[363610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:45 compute-0 nova_compute[259550]: 2025-10-07 14:27:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.030 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 07 14:27:46 compute-0 ceph-mon[74295]: pgmap v1950: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 07 14:27:46 compute-0 sudo[363610]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221435772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.485 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1198d2d6-47d9-4fe9-8c81-aec4d2528e17 does not exist
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.493 2 DEBUG nova.compute.provider_tree [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:27:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a4ee4754-59ee-4726-9732-d01bd7357802 does not exist
Oct 07 14:27:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f3b3bca1-cdfe-4399-bddd-215804c3c8d9 does not exist
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:27:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:27:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:27:46 compute-0 sudo[363689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:46 compute-0 sudo[363689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:46 compute-0 sudo[363689]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.572 2 DEBUG nova.scheduler.client.report [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.599 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:46 compute-0 sudo[363714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:27:46 compute-0 sudo[363714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:46 compute-0 sudo[363714]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:46 compute-0 sudo[363740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:46 compute-0 sudo[363740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:46 compute-0 sudo[363740]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:46 compute-0 sudo[363765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:27:46 compute-0 sudo[363765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:46 compute-0 nova_compute[259550]: 2025-10-07 14:27:46.866 2 INFO nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating port 8d87c94f-f436-4619-9ca7-9e116cab44bf with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 07 14:27:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:27:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178763856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.038 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.087450923 +0000 UTC m=+0.046454142 container create 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:27:47 compute-0 systemd[1]: Started libpod-conmon-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope.
Oct 07 14:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.064245682 +0000 UTC m=+0.023248951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.175694971 +0000 UTC m=+0.134698210 container init 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.182569114 +0000 UTC m=+0.141572333 container start 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.187622949 +0000 UTC m=+0.146626168 container attach 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:27:47 compute-0 ecstatic_mccarthy[363867]: 167 167
Oct 07 14:27:47 compute-0 systemd[1]: libpod-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope: Deactivated successfully.
Oct 07 14:27:47 compute-0 conmon[363867]: conmon 6f891371c0443674ecb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope/container/memory.events
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.189027156 +0000 UTC m=+0.148030375 container died 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/221435772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:27:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1178763856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce13e1c7b78c35ff27185603dc2c723ce7e95ec7b180a220dd410bf2774034a6-merged.mount: Deactivated successfully.
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.233 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3857MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:47 compute-0 podman[363850]: 2025-10-07 14:27:47.248380623 +0000 UTC m=+0.207383852 container remove 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:27:47 compute-0 systemd[1]: libpod-conmon-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope: Deactivated successfully.
Oct 07 14:27:47 compute-0 podman[363889]: 2025-10-07 14:27:47.396961733 +0000 UTC m=+0.043692998 container create d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:27:47 compute-0 systemd[1]: Started libpod-conmon-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope.
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:47 compute-0 podman[363889]: 2025-10-07 14:27:47.376538427 +0000 UTC m=+0.023269702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:47 compute-0 podman[363889]: 2025-10-07 14:27:47.497289473 +0000 UTC m=+0.144020748 container init d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:27:47 compute-0 podman[363889]: 2025-10-07 14:27:47.503189101 +0000 UTC m=+0.149920346 container start d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.511 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:47 compute-0 podman[363889]: 2025-10-07 14:27:47.51250465 +0000 UTC m=+0.159235895 container attach d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:27:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.716 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.718 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18d7d44a-a445-4dba-8146-91cd1b84084c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.737 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.738 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.738 2 DEBUG nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.847 2 DEBUG nova.compute.manager [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.848 2 DEBUG nova.compute.manager [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.848 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:27:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:27:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108190869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.954 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.959 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.974 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.997 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:27:47 compute-0 nova_compute[259550]: 2025-10-07 14:27:47.998 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 07 14:27:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4108190869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:27:48 compute-0 ceph-mon[74295]: pgmap v1951: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 07 14:27:48 compute-0 sharp_leavitt[363905]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:27:48 compute-0 sharp_leavitt[363905]: --> relative data size: 1.0
Oct 07 14:27:48 compute-0 sharp_leavitt[363905]: --> All data devices are unavailable
Oct 07 14:27:48 compute-0 systemd[1]: libpod-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Deactivated successfully.
Oct 07 14:27:48 compute-0 podman[363889]: 2025-10-07 14:27:48.606312007 +0000 UTC m=+1.253043252 container died d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:27:48 compute-0 systemd[1]: libpod-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Consumed 1.044s CPU time.
Oct 07 14:27:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853-merged.mount: Deactivated successfully.
Oct 07 14:27:48 compute-0 podman[363889]: 2025-10-07 14:27:48.802752576 +0000 UTC m=+1.449483861 container remove d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:27:48 compute-0 systemd[1]: libpod-conmon-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Deactivated successfully.
Oct 07 14:27:48 compute-0 sudo[363765]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:48 compute-0 sudo[363969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:48 compute-0 sudo[363969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:48 compute-0 sudo[363969]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:49 compute-0 sudo[363994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:27:49 compute-0 sudo[363994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:49 compute-0 sudo[363994]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:49 compute-0 sudo[364019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:49 compute-0 sudo[364019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:49 compute-0 sudo[364019]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:49 compute-0 sudo[364044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:27:49 compute-0 sudo[364044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.442 2 DEBUG nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.469 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.471 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.472 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating image(s)
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.494 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.497 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.498 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.499 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.536 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.456116444 +0000 UTC m=+0.022102331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.561 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.564 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.565 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.625645205 +0000 UTC m=+0.191631112 container create ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:27:49 compute-0 systemd[1]: Started libpod-conmon-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope.
Oct 07 14:27:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.774 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.776 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.778 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7de782-ace8-4be2-a023-cdc32abdf3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.796 2 DEBUG nova.virt.libvirt.imagebackend [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.857895199 +0000 UTC m=+0.423881166 container init ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.867012113 +0000 UTC m=+0.432998000 container start ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.871 2 DEBUG nova.virt.libvirt.imagebackend [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 14:27:49 compute-0 nova_compute[259550]: 2025-10-07 14:27:49.872 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] cloning images/6cbf90eb-7382-4407-9e6c-a36d06a02db1@snap to None/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:27:49 compute-0 beautiful_easley[364181]: 167 167
Oct 07 14:27:49 compute-0 systemd[1]: libpod-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope: Deactivated successfully.
Oct 07 14:27:49 compute-0 conmon[364181]: conmon ef1716c3c971fef1d9a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope/container/memory.events
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.948024758 +0000 UTC m=+0.514010645 container attach ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:27:49 compute-0 podman[364111]: 2025-10-07 14:27:49.94847742 +0000 UTC m=+0.514463317 container died ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:27:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:27:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f25d5884bc60bbf9983225a2c369d3113bda353594ebfe19721e7165153986f6-merged.mount: Deactivated successfully.
Oct 07 14:27:50 compute-0 ceph-mon[74295]: pgmap v1952: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:27:50 compute-0 podman[364111]: 2025-10-07 14:27:50.387709147 +0000 UTC m=+0.953695034 container remove ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:27:50 compute-0 nova_compute[259550]: 2025-10-07 14:27:50.426 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:50 compute-0 systemd[1]: libpod-conmon-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope: Deactivated successfully.
Oct 07 14:27:50 compute-0 nova_compute[259550]: 2025-10-07 14:27:50.569 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:50 compute-0 podman[364304]: 2025-10-07 14:27:50.572406422 +0000 UTC m=+0.065385789 container create f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:27:50 compute-0 nova_compute[259550]: 2025-10-07 14:27:50.625 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] flattening vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:27:50 compute-0 podman[364304]: 2025-10-07 14:27:50.532277839 +0000 UTC m=+0.025257226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:50 compute-0 systemd[1]: Started libpod-conmon-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope.
Oct 07 14:27:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:50 compute-0 podman[364304]: 2025-10-07 14:27:50.710686997 +0000 UTC m=+0.203666384 container init f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:27:50 compute-0 podman[364304]: 2025-10-07 14:27:50.720071927 +0000 UTC m=+0.213051284 container start f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:27:50 compute-0 podman[364304]: 2025-10-07 14:27:50.765878491 +0000 UTC m=+0.258857898 container attach f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:27:50 compute-0 nova_compute[259550]: 2025-10-07 14:27:50.997 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.020 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:51 compute-0 modest_yonath[364374]: {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     "0": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "devices": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "/dev/loop3"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             ],
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_name": "ceph_lv0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_size": "21470642176",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "name": "ceph_lv0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "tags": {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_name": "ceph",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.crush_device_class": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.encrypted": "0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_id": "0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.vdo": "0"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             },
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "vg_name": "ceph_vg0"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         }
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     ],
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     "1": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "devices": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "/dev/loop4"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             ],
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_name": "ceph_lv1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_size": "21470642176",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "name": "ceph_lv1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "tags": {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_name": "ceph",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.crush_device_class": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.encrypted": "0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_id": "1",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.vdo": "0"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             },
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "vg_name": "ceph_vg1"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         }
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     ],
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     "2": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "devices": [
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "/dev/loop5"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             ],
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_name": "ceph_lv2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_size": "21470642176",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "name": "ceph_lv2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "tags": {
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.cluster_name": "ceph",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.crush_device_class": "",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.encrypted": "0",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osd_id": "2",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:                 "ceph.vdo": "0"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             },
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "type": "block",
Oct 07 14:27:51 compute-0 modest_yonath[364374]:             "vg_name": "ceph_vg2"
Oct 07 14:27:51 compute-0 modest_yonath[364374]:         }
Oct 07 14:27:51 compute-0 modest_yonath[364374]:     ]
Oct 07 14:27:51 compute-0 modest_yonath[364374]: }
Oct 07 14:27:51 compute-0 systemd[1]: libpod-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope: Deactivated successfully.
Oct 07 14:27:51 compute-0 podman[364388]: 2025-10-07 14:27:51.704033539 +0000 UTC m=+0.026184591 container died f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.739 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.741 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.757 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:51 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:27:51 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:27:51 compute-0 nova_compute[259550]: 2025-10-07 14:27:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 85 B/s wr, 10 op/s
Oct 07 14:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545-merged.mount: Deactivated successfully.
Oct 07 14:27:52 compute-0 ceph-mon[74295]: pgmap v1953: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 85 B/s wr, 10 op/s
Oct 07 14:27:52 compute-0 podman[364388]: 2025-10-07 14:27:52.373743573 +0000 UTC m=+0.695894645 container remove f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:27:52 compute-0 systemd[1]: libpod-conmon-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope: Deactivated successfully.
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.421 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Image rbd:vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.422 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.422 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ensure instance console log exists: /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.426 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start _get_guest_xml network_info=[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:27:22Z,direct_url=<?>,disk_format='raw',id=6cbf90eb-7382-4407-9e6c-a36d06a02db1,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1596186961-shelved',owner='65245efcef84404ca2ccf82df5946696',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:27:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.430 2 WARNING nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.436 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:27:52 compute-0 sudo[364044]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.436 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.442 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.442 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.443 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.443 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:27:22Z,direct_url=<?>,disk_format='raw',id=6cbf90eb-7382-4407-9e6c-a36d06a02db1,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1596186961-shelved',owner='65245efcef84404ca2ccf82df5946696',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:27:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.446 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.461 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:52 compute-0 sudo[364404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:52 compute-0 sudo[364404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:52 compute-0 sudo[364404]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:52 compute-0 sudo[364430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:27:52 compute-0 sudo[364430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:52 compute-0 sudo[364430]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:52 compute-0 sudo[364455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:52 compute-0 sudo[364455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:52 compute-0 sudo[364455]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:27:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:27:52 compute-0 sudo[364499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:27:52 compute-0 sudo[364499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:27:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428140567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.924 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.949 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:52 compute-0 nova_compute[259550]: 2025-10-07 14:27:52.958 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.004922179 +0000 UTC m=+0.023813148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.340754322 +0000 UTC m=+0.359645321 container create 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:27:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:27:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512768072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.460 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.462 2 DEBUG nova.virt.libvirt.vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='6cbf90eb-7382-4407-9e6c-a36d06a02db1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:27:45Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.463 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.464 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.465 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.493 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <uuid>91c66dff-47e6-4b52-9e3f-d8c58d256bcf</uuid>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <name>instance-00000066</name>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:name>tempest-ServersNegativeTestJSON-server-1596186961</nova:name>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:27:52</nova:creationTime>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="6cbf90eb-7382-4407-9e6c-a36d06a02db1"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <nova:port uuid="8d87c94f-f436-4619-9ca7-9e116cab44bf">
Oct 07 14:27:53 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <system>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="serial">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="uuid">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </system>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <os>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </os>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <features>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </features>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk">
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config">
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:27:53 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ce:79:3a"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <target dev="tap8d87c94f-f4"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log" append="off"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <video>
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </video>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:27:53 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:27:53 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:27:53 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:27:53 compute-0 nova_compute[259550]: </domain>
Oct 07 14:27:53 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.494 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Preparing to wait for external event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.494 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.495 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.495 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.496 2 DEBUG nova.virt.libvirt.vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='6cbf90eb-7382-4407-9e6c-a36d06a02db1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model
='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:27:45Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.496 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.497 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.497 2 DEBUG os_vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:53 compute-0 NetworkManager[44949]: <info>  [1759847273.5059] manager: (tap8d87c94f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.517 2 INFO os_vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')
Oct 07 14:27:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/428140567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:27:53 compute-0 systemd[1]: Started libpod-conmon-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope.
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:ce:79:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.620 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Using config drive
Oct 07 14:27:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.646 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.659407727 +0000 UTC m=+0.678298706 container init 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.665412097 +0000 UTC m=+0.684303056 container start 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.667 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.669383474 +0000 UTC m=+0.688274453 container attach 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 07 14:27:53 compute-0 romantic_wright[364624]: 167 167
Oct 07 14:27:53 compute-0 systemd[1]: libpod-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope: Deactivated successfully.
Oct 07 14:27:53 compute-0 conmon[364624]: conmon 392e101ecc239a61c67d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope/container/memory.events
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.672869896 +0000 UTC m=+0.691760855 container died 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b446219e50ade6aab90b1a390481dfcf4dd12ba5ff4a232d8e0e356a0704e268-merged.mount: Deactivated successfully.
Oct 07 14:27:53 compute-0 podman[364584]: 2025-10-07 14:27:53.711059296 +0000 UTC m=+0.729950255 container remove 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:27:53 compute-0 nova_compute[259550]: 2025-10-07 14:27:53.716 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'keypairs' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:27:53 compute-0 systemd[1]: libpod-conmon-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope: Deactivated successfully.
Oct 07 14:27:53 compute-0 podman[364666]: 2025-10-07 14:27:53.890224794 +0000 UTC m=+0.049750180 container create b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:27:53 compute-0 systemd[1]: Started libpod-conmon-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope.
Oct 07 14:27:53 compute-0 podman[364666]: 2025-10-07 14:27:53.865387831 +0000 UTC m=+0.024913257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:27:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:53 compute-0 podman[364666]: 2025-10-07 14:27:53.985313545 +0000 UTC m=+0.144838941 container init b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:27:53 compute-0 podman[364666]: 2025-10-07 14:27:53.994567732 +0000 UTC m=+0.154093118 container start b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:27:53 compute-0 podman[364666]: 2025-10-07 14:27:53.997909981 +0000 UTC m=+0.157435387 container attach b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:27:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 159 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.138 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.153 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.154 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.192 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating config drive at /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.197 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf77v5bax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.350 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf77v5bax" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.372 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.378 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.560 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.561 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting local config drive /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config because it was imported into RBD.
Oct 07 14:27:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/512768072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:27:54 compute-0 ceph-mon[74295]: pgmap v1954: 305 pgs: 305 active+clean; 159 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 07 14:27:54 compute-0 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct 07 14:27:54 compute-0 ovn_controller[151684]: 2025-10-07T14:27:54Z|01035|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct 07 14:27:54 compute-0 ovn_controller[151684]: 2025-10-07T14:27:54Z|01036|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:54 compute-0 NetworkManager[44949]: <info>  [1759847274.6200] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct 07 14:27:54 compute-0 ovn_controller[151684]: 2025-10-07T14:27:54Z|01037|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:54 compute-0 nova_compute[259550]: 2025-10-07 14:27:54.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:54 compute-0 systemd-udevd[364740]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:27:54 compute-0 systemd-machined[214580]: New machine qemu-129-instance-00000066.
Oct 07 14:27:54 compute-0 NetworkManager[44949]: <info>  [1759847274.6679] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:27:54 compute-0 NetworkManager[44949]: <info>  [1759847274.6686] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:27:54 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000066.
Oct 07 14:27:54 compute-0 kind_curran[364682]: {
Oct 07 14:27:54 compute-0 kind_curran[364682]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_id": 2,
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "type": "bluestore"
Oct 07 14:27:54 compute-0 kind_curran[364682]:     },
Oct 07 14:27:54 compute-0 kind_curran[364682]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_id": 1,
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "type": "bluestore"
Oct 07 14:27:54 compute-0 kind_curran[364682]:     },
Oct 07 14:27:54 compute-0 kind_curran[364682]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_id": 0,
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:27:54 compute-0 kind_curran[364682]:         "type": "bluestore"
Oct 07 14:27:54 compute-0 kind_curran[364682]:     }
Oct 07 14:27:54 compute-0 kind_curran[364682]: }
Oct 07 14:27:55 compute-0 systemd[1]: libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Deactivated successfully.
Oct 07 14:27:55 compute-0 systemd[1]: libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Consumed 1.024s CPU time.
Oct 07 14:27:55 compute-0 conmon[364682]: conmon b8c14af54ae2a9cd3c03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope/container/memory.events
Oct 07 14:27:55 compute-0 podman[364666]: 2025-10-07 14:27:55.024351248 +0000 UTC m=+1.183876634 container died b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8-merged.mount: Deactivated successfully.
Oct 07 14:27:55 compute-0 podman[364666]: 2025-10-07 14:27:55.09174049 +0000 UTC m=+1.251265876 container remove b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:27:55 compute-0 systemd[1]: libpod-conmon-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Deactivated successfully.
Oct 07 14:27:55 compute-0 sudo[364499]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:27:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:27:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ba106108-6e6f-470d-bb53-fbad2dc6c63f does not exist
Oct 07 14:27:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev db9ec09e-4ec3-41ba-8553-1ccecffd66e5 does not exist
Oct 07 14:27:55 compute-0 sudo[364791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:27:55 compute-0 sudo[364791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:55 compute-0 sudo[364791]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:55 compute-0 sudo[364816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:27:55 compute-0 sudo[364816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:27:55 compute-0 sudo[364816]: pam_unix(sudo:session): session closed for user root
Oct 07 14:27:55 compute-0 ovn_controller[151684]: 2025-10-07T14:27:55Z|01038|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.305 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.306 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.307 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.321 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5cdc9d-9b4f-41e3-af62-7c3a3940ce1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.322 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.324 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdef5e2-f660-4e4f-95d9-cf07d26d6b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5155d05b-6f54-49c6-90a5-c923705d6fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.342 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1f463f77-18bb-405e-a13a-0b03bbd17106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.372 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4726ff7d-0f60-470d-8508-2f7f9ee1c69b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.409 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a40f8c-26c0-4247-8de2-1a08c4465a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ace15dc4-eec7-4fed-b9ca-94a0dba022b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 systemd-udevd[364742]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:27:55 compute-0 NetworkManager[44949]: <info>  [1759847275.4177] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.448 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8378d8c2-7186-4265-93a7-a0079581ac5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.453 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2576bc-8e2f-41ec-91f9-7bab9de94f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 NetworkManager[44949]: <info>  [1759847275.4744] device (tapdf30eae3-80): carrier: link connected
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.481 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d18edb9d-781e-410b-ba16-8a80f03bb7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.500 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e40d0e-5f2f-495c-97f9-d32d323834e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785903, 'reachable_time': 42236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364865, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.517 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6bd2a7-484f-493d-b718-329a84937af4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785903, 'tstamp': 785903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364866, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22b97f48-0cc4-44b6-99ae-3f3620503b27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785903, 'reachable_time': 42236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364882, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.576 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7245c3-34cc-4a73-914b-312ee0843ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80d59c0b-0e20-49d0-bb5d-b1befdb0ec22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.630 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:55 compute-0 kernel: tapdf30eae3-80: entered promiscuous mode
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:55 compute-0 NetworkManager[44949]: <info>  [1759847275.6351] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.637 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:55 compute-0 ovn_controller[151684]: 2025-10-07T14:27:55Z|01039|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.654 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf268914-398b-4f51-8213-6398222aabca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.655 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:27:55 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.657 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.865 2 DEBUG nova.compute.manager [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.865 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:55 compute-0 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG nova.compute.manager [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Processing event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:27:56 compute-0 podman[364942]: 2025-10-07 14:27:56.027017791 +0000 UTC m=+0.063393736 container create 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.046 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.0452726, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.047 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.049 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.055 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:27:56 compute-0 systemd[1]: Started libpod-conmon-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope.
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.058 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance spawned successfully.
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.067 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.072 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:27:56 compute-0 podman[364942]: 2025-10-07 14:27:55.987379191 +0000 UTC m=+0.023755156 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e094ead86eef4fe25a38bfdd989dd00f422d4b0c9f86053c4d21d1603c9022a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.093 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.094 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.045442, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)
Oct 07 14:27:56 compute-0 podman[364942]: 2025-10-07 14:27:56.098207252 +0000 UTC m=+0.134583227 container init 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:27:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 77 op/s
Oct 07 14:27:56 compute-0 podman[364942]: 2025-10-07 14:27:56.106656288 +0000 UTC m=+0.143032233 container start 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.111 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.114 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.0521812, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.114 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)
Oct 07 14:27:56 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : New worker (364963) forked
Oct 07 14:27:56 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : Loading success.
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.134 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.139 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.163 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:27:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.169 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.170 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.171 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6461eeed-1b77-4b46-b2f0-e9c7f184884c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.474996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276475053, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1011, "num_deletes": 252, "total_data_size": 1367403, "memory_usage": 1393392, "flush_reason": "Manual Compaction"}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276490277, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1343286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40179, "largest_seqno": 41189, "table_properties": {"data_size": 1338281, "index_size": 2529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10032, "raw_average_key_size": 18, "raw_value_size": 1328172, "raw_average_value_size": 2397, "num_data_blocks": 112, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847196, "oldest_key_time": 1759847196, "file_creation_time": 1759847276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15457 microseconds, and 8320 cpu microseconds.
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.490452) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1343286 bytes OK
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.490512) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492114) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492137) EVENT_LOG_v1 {"time_micros": 1759847276492129, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1362570, prev total WAL file size 1362570, number of live WAL files 2.
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.494955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1311KB)], [89(9423KB)]
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276495018, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10992855, "oldest_snapshot_seqno": -1}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6496 keys, 10287771 bytes, temperature: kUnknown
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276556048, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10287771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10241901, "index_size": 28544, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 166845, "raw_average_key_size": 25, "raw_value_size": 10123063, "raw_average_value_size": 1558, "num_data_blocks": 1133, "num_entries": 6496, "num_filter_entries": 6496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.556287) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10287771 bytes
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.557323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 168.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(15.8) write-amplify(7.7) OK, records in: 7017, records dropped: 521 output_compression: NoCompression
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.557338) EVENT_LOG_v1 {"time_micros": 1759847276557330, "job": 52, "event": "compaction_finished", "compaction_time_micros": 61126, "compaction_time_cpu_micros": 22764, "output_level": 6, "num_output_files": 1, "total_output_size": 10287771, "num_input_records": 7017, "num_output_records": 6496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276557659, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276559222, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.494808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:56 compute-0 nova_compute[259550]: 2025-10-07 14:27:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:27:57 compute-0 ceph-mon[74295]: pgmap v1955: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 77 op/s
Oct 07 14:27:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 07 14:27:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 07 14:27:57 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 07 14:27:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.589 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:27:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:27:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.591 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:27:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65833c99-4f39-4ace-b806-539cb2a579bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.649 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.948 2 DEBUG nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.949 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.949 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.950 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.950 2 DEBUG nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:27:57 compute-0 nova_compute[259550]: 2025-10-07 14:27:57.950 2 WARNING nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state None.
Oct 07 14:27:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct 07 14:27:58 compute-0 ceph-mon[74295]: osdmap e269: 3 total, 3 up, 3 in
Oct 07 14:27:58 compute-0 ceph-mon[74295]: pgmap v1957: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct 07 14:27:58 compute-0 nova_compute[259550]: 2025-10-07 14:27:58.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:27:59 compute-0 podman[364972]: 2025-10-07 14:27:59.056562781 +0000 UTC m=+0.052913855 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 14:27:59 compute-0 podman[364973]: 2025-10-07 14:27:59.083809779 +0000 UTC m=+0.078418196 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:28:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 128 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 158 op/s
Oct 07 14:28:00 compute-0 ceph-mon[74295]: pgmap v1958: 305 pgs: 305 active+clean; 128 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 158 op/s
Oct 07 14:28:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:01 compute-0 nova_compute[259550]: 2025-10-07 14:28:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 07 14:28:02 compute-0 ceph-mon[74295]: pgmap v1959: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 07 14:28:03 compute-0 nova_compute[259550]: 2025-10-07 14:28:03.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Oct 07 14:28:04 compute-0 ceph-mon[74295]: pgmap v1960: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.623 2 DEBUG nova.objects.instance [None req-2eb11ae5-a7e4-4a5e-a509-2f4b4409caa8 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.646 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847284.6462533, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.646 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.665 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.672 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:28:04 compute-0 nova_compute[259550]: 2025-10-07 14:28:04.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:28:05 compute-0 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct 07 14:28:05 compute-0 NetworkManager[44949]: <info>  [1759847285.2773] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:28:05 compute-0 ovn_controller[151684]: 2025-10-07T14:28:05Z|01040|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct 07 14:28:05 compute-0 ovn_controller[151684]: 2025-10-07T14:28:05Z|01041|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct 07 14:28:05 compute-0 ovn_controller[151684]: 2025-10-07T14:28:05Z|01042|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.296 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.298 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.299 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aed93dcf-f16a-45cd-a3ab-62e72bec28d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.300 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:05 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 07 14:28:05 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Consumed 10.390s CPU time.
Oct 07 14:28:05 compute-0 systemd-machined[214580]: Machine qemu-129-instance-00000066 terminated.
Oct 07 14:28:05 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : haproxy version is 2.8.14-c23fe91
Oct 07 14:28:05 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : path to executable is /usr/sbin/haproxy
Oct 07 14:28:05 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [WARNING]  (364961) : Exiting Master process...
Oct 07 14:28:05 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [ALERT]    (364961) : Current worker (364963) exited with code 143 (Terminated)
Oct 07 14:28:05 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [WARNING]  (364961) : All workers exited. Exiting... (0)
Oct 07 14:28:05 compute-0 systemd[1]: libpod-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope: Deactivated successfully.
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.461 2 DEBUG nova.compute.manager [None req-2eb11ae5-a7e4-4a5e-a509-2f4b4409caa8 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:05 compute-0 podman[365042]: 2025-10-07 14:28:05.466546517 +0000 UTC m=+0.050033467 container died 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce-userdata-shm.mount: Deactivated successfully.
Oct 07 14:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e094ead86eef4fe25a38bfdd989dd00f422d4b0c9f86053c4d21d1603c9022a-merged.mount: Deactivated successfully.
Oct 07 14:28:05 compute-0 podman[365042]: 2025-10-07 14:28:05.514721125 +0000 UTC m=+0.098208075 container cleanup 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:28:05 compute-0 systemd[1]: libpod-conmon-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope: Deactivated successfully.
Oct 07 14:28:05 compute-0 podman[365084]: 2025-10-07 14:28:05.586995116 +0000 UTC m=+0.047479270 container remove 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a895dab-bc65-405c-bc14-85d6457aeb76]: (4, ('Tue Oct  7 02:28:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce)\n0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce\nTue Oct  7 02:28:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce)\n0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3aab05a-793d-4a0a-b454-5f7bfe118973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.595 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:05 compute-0 kernel: tapdf30eae3-80: left promiscuous mode
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3314a6b-5044-4028-bad7-01d37917e2b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.638 2 DEBUG nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.639 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.639 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.640 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.640 2 DEBUG nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:05 compute-0 nova_compute[259550]: 2025-10-07 14:28:05.640 2 WARNING nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state None.
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06942c1e-4e09-4b7f-b32f-f305f30cff70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d85dbb0f-8ab4-45c8-8fa1-9f32a52563c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10da4fb2-a758-42d2-b7f4-d8dd643cbf7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785896, 'reachable_time': 28789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365102, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:05 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.684 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:28:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.684 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9324b5-0555-4ed0-afe7-b62b37f3af24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.016 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.018 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:28:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.020 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:28:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[643c449e-a468-4353-82e2-7ae1f274552f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct 07 14:28:06 compute-0 ceph-mon[74295]: pgmap v1961: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct 07 14:28:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 07 14:28:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 07 14:28:06 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.744 2 INFO nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Resuming
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.746 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.779 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.779 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.780 2 DEBUG nova.network.neutron [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:28:06 compute-0 nova_compute[259550]: 2025-10-07 14:28:06.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:07 compute-0 ceph-mon[74295]: osdmap e270: 3 total, 3 up, 3 in
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.758 2 DEBUG nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.759 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.760 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.760 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.761 2 DEBUG nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:07 compute-0 nova_compute[259550]: 2025-10-07 14:28:07.761 2 WARNING nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.
Oct 07 14:28:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.446 2 DEBUG nova.network.neutron [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.463 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.470 2 DEBUG nova.virt.libvirt.vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:28:05Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.470 2 DEBUG nova.network.os_vif_util [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.471 2 DEBUG nova.network.os_vif_util [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.471 2 DEBUG os_vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.477 2 INFO os_vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')
Oct 07 14:28:08 compute-0 ceph-mon[74295]: pgmap v1963: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.500 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.5827] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Oct 07 14:28:08 compute-0 ovn_controller[151684]: 2025-10-07T14:28:08Z|01043|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct 07 14:28:08 compute-0 ovn_controller[151684]: 2025-10-07T14:28:08Z|01044|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.593 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '10', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.594 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.595 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:28:08 compute-0 ovn_controller[151684]: 2025-10-07T14:28:08Z|01045|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct 07 14:28:08 compute-0 ovn_controller[151684]: 2025-10-07T14:28:08Z|01046|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.614 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf991a5-dad6-4614-bf9e-102144cd5ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.615 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:28:08 compute-0 systemd-udevd[365117]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.617 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.617 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03cb27db-349c-42a5-81e2-83b450872cf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92ae752c-e7b1-4cfe-b68d-5f93addaf648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.6311] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.6326] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:28:08 compute-0 systemd-machined[214580]: New machine qemu-130-instance-00000066.
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.634 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dac39046-aea7-45a7-9085-8fadb1c47e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000066.
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.664 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4168ebbc-7836-4da0-836f-b3bc19bdc1ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[88093356-93d1-4a52-beaa-8a8d1378d466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.7013] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.700 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3907554b-d42b-4cc3-adfd-21b255eb9b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 systemd-udevd[365122]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.735 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bebb12e9-e39a-4b4c-b75a-d2c6531192a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.737 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[668c3bdb-3e33-4dd1-b6da-23ce7be2e658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.7620] device (tapdf30eae3-80): carrier: link connected
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b9cb1d-af51-4513-bcf2-16865c7d3911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.794 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f07bf2-9cb0-4694-b1b4-4d827b0e7927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787232, 'reachable_time': 33075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365151, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e8fa44-2c97-48d6-b7ab-4c1b19a0deb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787232, 'tstamp': 787232}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365152, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.844 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76aa904a-1e46-4a85-8648-37406aa57ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787232, 'reachable_time': 33075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365153, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08911520-3c1f-42a2-a54a-f311be1225cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.981 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41e1b1e1-8606-4dda-be8c-da78b20c0be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.985 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 kernel: tapdf30eae3-80: entered promiscuous mode
Oct 07 14:28:08 compute-0 NetworkManager[44949]: <info>  [1759847288.9880] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.996 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:08 compute-0 ovn_controller[151684]: 2025-10-07T14:28:08Z|01047|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct 07 14:28:08 compute-0 nova_compute[259550]: 2025-10-07 14:28:08.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.016 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bc0daa-fe13-4179-900f-abe34d6d8ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.018 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:28:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.019 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:28:09 compute-0 podman[365183]: 2025-10-07 14:28:09.395534212 +0000 UTC m=+0.047241333 container create c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:28:09 compute-0 systemd[1]: Started libpod-conmon-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope.
Oct 07 14:28:09 compute-0 podman[365183]: 2025-10-07 14:28:09.371261384 +0000 UTC m=+0.022968515 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:28:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377d196f06f2dbd8119d4f06850230af8d64915c4afaebe12208a94e230e18c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:09 compute-0 podman[365183]: 2025-10-07 14:28:09.494486096 +0000 UTC m=+0.146193217 container init c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:28:09 compute-0 podman[365183]: 2025-10-07 14:28:09.502796849 +0000 UTC m=+0.154503970 container start c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:28:09 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : New worker (365205) forked
Oct 07 14:28:09 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : Loading success.
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.846 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.848 2 WARNING nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:09 compute-0 nova_compute[259550]: 2025-10-07 14:28:09.849 2 WARNING nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.
Oct 07 14:28:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 307 B/s wr, 49 op/s
Oct 07 14:28:10 compute-0 ceph-mon[74295]: pgmap v1964: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 307 B/s wr, 49 op/s
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.369 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 91c66dff-47e6-4b52-9e3f-d8c58d256bcf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.370 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847290.3693898, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.370 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.386 2 DEBUG nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.386 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.394 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.396 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.401 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance running successfully.
Oct 07 14:28:10 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.404 2 DEBUG nova.virt.libvirt.guest [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.405 2 DEBUG nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.418 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.418 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847290.3723118, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.419 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.472 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:10 compute-0 nova_compute[259550]: 2025-10-07 14:28:10.476 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:28:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:11 compute-0 nova_compute[259550]: 2025-10-07 14:28:11.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:12 compute-0 ceph-mon[74295]: pgmap v1965: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:13 compute-0 nova_compute[259550]: 2025-10-07 14:28:13.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 11 op/s
Oct 07 14:28:14 compute-0 ceph-mon[74295]: pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 11 op/s
Oct 07 14:28:15 compute-0 ovn_controller[151684]: 2025-10-07T14:28:15Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:79:3a 10.100.0.3
Oct 07 14:28:16 compute-0 podman[365256]: 2025-10-07 14:28:16.077750784 +0000 UTC m=+0.062441490 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible)
Oct 07 14:28:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 07 14:28:16 compute-0 podman[365257]: 2025-10-07 14:28:16.111823394 +0000 UTC m=+0.090244472 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:28:16 compute-0 ceph-mon[74295]: pgmap v1967: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 07 14:28:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:16 compute-0 nova_compute[259550]: 2025-10-07 14:28:16.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Oct 07 14:28:18 compute-0 ceph-mon[74295]: pgmap v1968: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Oct 07 14:28:18 compute-0 nova_compute[259550]: 2025-10-07 14:28:18.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 12 KiB/s wr, 48 op/s
Oct 07 14:28:20 compute-0 ceph-mon[74295]: pgmap v1969: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 12 KiB/s wr, 48 op/s
Oct 07 14:28:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.759 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.759 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.761 2 INFO nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Terminating instance
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.762 2 DEBUG nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.765 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.767 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:28:21 compute-0 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct 07 14:28:21 compute-0 NetworkManager[44949]: <info>  [1759847301.8169] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 ovn_controller[151684]: 2025-10-07T14:28:21Z|01048|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct 07 14:28:21 compute-0 ovn_controller[151684]: 2025-10-07T14:28:21Z|01049|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct 07 14:28:21 compute-0 ovn_controller[151684]: 2025-10-07T14:28:21Z|01050|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.838 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '11', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.839 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.840 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.841 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4c99b2-ed7e-48b3-b849-19dca1c4ca48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.841 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 07 14:28:21 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000066.scope: Consumed 5.990s CPU time.
Oct 07 14:28:21 compute-0 systemd-machined[214580]: Machine qemu-130-instance-00000066 terminated.
Oct 07 14:28:21 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : haproxy version is 2.8.14-c23fe91
Oct 07 14:28:21 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : path to executable is /usr/sbin/haproxy
Oct 07 14:28:21 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [WARNING]  (365203) : Exiting Master process...
Oct 07 14:28:21 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [ALERT]    (365203) : Current worker (365205) exited with code 143 (Terminated)
Oct 07 14:28:21 compute-0 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [WARNING]  (365203) : All workers exited. Exiting... (0)
Oct 07 14:28:21 compute-0 systemd[1]: libpod-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope: Deactivated successfully.
Oct 07 14:28:21 compute-0 conmon[365198]: conmon c8524fb3c898a9ddaab3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope/container/memory.events
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 podman[365320]: 2025-10-07 14:28:21.980266253 +0000 UTC m=+0.042364724 container died c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.989 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.
Oct 07 14:28:21 compute-0 nova_compute[259550]: 2025-10-07 14:28:21.990 2 DEBUG nova.objects.instance [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-377d196f06f2dbd8119d4f06850230af8d64915c4afaebe12208a94e230e18c6-merged.mount: Deactivated successfully.
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.013 2 DEBUG nova.virt.libvirt.vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:28:10Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.014 2 DEBUG nova.network.os_vif_util [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.015 2 DEBUG nova.network.os_vif_util [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.015 2 DEBUG os_vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:28:22 compute-0 podman[365320]: 2025-10-07 14:28:22.015989957 +0000 UTC m=+0.078088428 container cleanup c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d87c94f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.022 2 INFO os_vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')
Oct 07 14:28:22 compute-0 systemd[1]: libpod-conmon-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope: Deactivated successfully.
Oct 07 14:28:22 compute-0 podman[365357]: 2025-10-07 14:28:22.089783469 +0000 UTC m=+0.050755207 container remove c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b55cfe4-2e48-4e70-8d68-ab43d0945336]: (4, ('Tue Oct  7 02:28:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e)\nc8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e\nTue Oct  7 02:28:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e)\nc8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.100 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99375131-8c33-479f-ab7e-4f6966e4f44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:22 compute-0 kernel: tapdf30eae3-80: left promiscuous mode
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 49 op/s
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7aa2411-0889-4bd7-b905-4d097a0f3c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[405c9141-c734-4aa2-9a88-7fc67d92295a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9967ff4d-f990-4fcc-abea-6ddb6b2aae6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.170 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd8c1e5-bda4-4901-9878-ab2a27c54488]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787225, 'reachable_time': 32286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365390, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.173 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:28:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.173 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[79d4db00-ff94-42f9-aeff-669fb5a504af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:22 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct 07 14:28:22 compute-0 ceph-mon[74295]: pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 49 op/s
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.213 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.214 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.214 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.375 2 INFO nova.virt.libvirt.driver [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting instance files /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.376 2 INFO nova.virt.libvirt.driver [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deletion of /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del complete
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.452 2 INFO nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.453 2 DEBUG oslo.service.loopingcall [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.453 2 DEBUG nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:28:22 compute-0 nova_compute[259550]: 2025-10-07 14:28:22.454 2 DEBUG nova.network.neutron [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:28:22
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log']
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:28:22 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:28:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:28:23 compute-0 nova_compute[259550]: 2025-10-07 14:28:23.685 2 DEBUG nova.network.neutron [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:28:23 compute-0 nova_compute[259550]: 2025-10-07 14:28:23.709 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 1.25 seconds to deallocate network for instance.
Oct 07 14:28:23 compute-0 nova_compute[259550]: 2025-10-07 14:28:23.762 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:23 compute-0 nova_compute[259550]: 2025-10-07 14:28:23.763 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:23 compute-0 nova_compute[259550]: 2025-10-07 14:28:23.830 2 DEBUG oslo_concurrency.processutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:28:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 82 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 22 KiB/s wr, 75 op/s
Oct 07 14:28:24 compute-0 ceph-mon[74295]: pgmap v1971: 305 pgs: 305 active+clean; 82 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 22 KiB/s wr, 75 op/s
Oct 07 14:28:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:28:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991991570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.261 2 DEBUG oslo_concurrency.processutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.268 2 DEBUG nova.compute.provider_tree [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.287 2 DEBUG nova.scheduler.client.report [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.315 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.323 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.325 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.325 2 WARNING nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state deleted and task_state None.
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.325 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-deleted-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.345 2 INFO nova.scheduler.client.report [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf
Oct 07 14:28:24 compute-0 nova_compute[259550]: 2025-10-07 14:28:24.407 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2991991570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 22 KiB/s wr, 67 op/s
Oct 07 14:28:26 compute-0 ceph-mon[74295]: pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 22 KiB/s wr, 67 op/s
Oct 07 14:28:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:26 compute-0 nova_compute[259550]: 2025-10-07 14:28:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:27 compute-0 nova_compute[259550]: 2025-10-07 14:28:27.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:27.770 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:28:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct 07 14:28:28 compute-0 ceph-mon[74295]: pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct 07 14:28:28 compute-0 nova_compute[259550]: 2025-10-07 14:28:28.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:30 compute-0 podman[365414]: 2025-10-07 14:28:30.072613243 +0000 UTC m=+0.060009504 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 14:28:30 compute-0 podman[365415]: 2025-10-07 14:28:30.10653796 +0000 UTC m=+0.090451638 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 14:28:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct 07 14:28:30 compute-0 ceph-mon[74295]: pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct 07 14:28:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:31 compute-0 nova_compute[259550]: 2025-10-07 14:28:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:32 compute-0 nova_compute[259550]: 2025-10-07 14:28:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Oct 07 14:28:32 compute-0 ceph-mon[74295]: pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:28:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:28:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:28:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:28:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:28:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:28:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:28:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:28:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Oct 07 14:28:34 compute-0 ceph-mon[74295]: pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Oct 07 14:28:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 2 op/s
Oct 07 14:28:36 compute-0 ceph-mon[74295]: pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 2 op/s
Oct 07 14:28:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:36 compute-0 nova_compute[259550]: 2025-10-07 14:28:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:36 compute-0 nova_compute[259550]: 2025-10-07 14:28:36.989 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847301.988357, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:28:36 compute-0 nova_compute[259550]: 2025-10-07 14:28:36.990 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Stopped (Lifecycle Event)
Oct 07 14:28:37 compute-0 nova_compute[259550]: 2025-10-07 14:28:37.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:37 compute-0 nova_compute[259550]: 2025-10-07 14:28:37.134 2 DEBUG nova.compute.manager [None req-2843025a-6734-408c-a07e-37847966bef0 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:28:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:38 compute-0 ceph-mon[74295]: pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:39 compute-0 nova_compute[259550]: 2025-10-07 14:28:39.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:40 compute-0 ceph-mon[74295]: pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:40 compute-0 nova_compute[259550]: 2025-10-07 14:28:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:40 compute-0 nova_compute[259550]: 2025-10-07 14:28:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8:0:1:f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated
Oct 07 14:28:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.645 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:28:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.646 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[21350ce9-43ca-4bdc-af48-468fd7746937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:41 compute-0 nova_compute[259550]: 2025-10-07 14:28:41.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:41 compute-0 nova_compute[259550]: 2025-10-07 14:28:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:41 compute-0 nova_compute[259550]: 2025-10-07 14:28:41.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:28:42 compute-0 nova_compute[259550]: 2025-10-07 14:28:42.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:42 compute-0 ceph-mon[74295]: pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:44 compute-0 ceph-mon[74295]: pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:44 compute-0 nova_compute[259550]: 2025-10-07 14:28:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:46 compute-0 ceph-mon[74295]: pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:46 compute-0 nova_compute[259550]: 2025-10-07 14:28:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:47 compute-0 nova_compute[259550]: 2025-10-07 14:28:47.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:47 compute-0 podman[365461]: 2025-10-07 14:28:47.076987566 +0000 UTC m=+0.058598697 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:28:47 compute-0 podman[365460]: 2025-10-07 14:28:47.091781872 +0000 UTC m=+0.081551091 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:28:47 compute-0 nova_compute[259550]: 2025-10-07 14:28:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:28:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:48 compute-0 ceph-mon[74295]: pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:28:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687717762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.682 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3832MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.782 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.798 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.799 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.822 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.848 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:28:48 compute-0 nova_compute[259550]: 2025-10-07 14:28:48.867 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:28:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.882 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:42:ed 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=343ddd66-c1f1-4ffd-958d-c4bb2dd19d72, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=46bd0673-542e-432f-a5ba-9baa7ff32653) old=Port_Binding(mac=['fa:16:3e:3c:42:ed 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:28:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.884 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 46bd0673-542e-432f-a5ba-9baa7ff32653 in datapath ce30dd1c-3f23-4392-a65c-69af967e4a40 updated
Oct 07 14:28:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce30dd1c-3f23-4392-a65c-69af967e4a40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:28:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.886 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45d3ed8d-2707-456e-86aa-8a6f9cc28679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:28:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/687717762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:28:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479812549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:49 compute-0 nova_compute[259550]: 2025-10-07 14:28:49.336 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:28:49 compute-0 nova_compute[259550]: 2025-10-07 14:28:49.343 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:28:49 compute-0 nova_compute[259550]: 2025-10-07 14:28:49.374 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:28:49 compute-0 nova_compute[259550]: 2025-10-07 14:28:49.416 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:28:49 compute-0 nova_compute[259550]: 2025-10-07 14:28:49.417 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:28:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/479812549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:28:50 compute-0 ceph-mon[74295]: pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:50 compute-0 nova_compute[259550]: 2025-10-07 14:28:50.418 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:51 compute-0 nova_compute[259550]: 2025-10-07 14:28:51.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:52 compute-0 nova_compute[259550]: 2025-10-07 14:28:52.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:52 compute-0 ceph-mon[74295]: pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:28:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:28:52 compute-0 nova_compute[259550]: 2025-10-07 14:28:52.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:52 compute-0 nova_compute[259550]: 2025-10-07 14:28:52.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:28:52 compute-0 nova_compute[259550]: 2025-10-07 14:28:52.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:28:52 compute-0 nova_compute[259550]: 2025-10-07 14:28:52.998 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:28:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:54 compute-0 ceph-mon[74295]: pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:55 compute-0 sudo[365542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:55 compute-0 sudo[365542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:55 compute-0 sudo[365542]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:55 compute-0 sudo[365567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:28:55 compute-0 sudo[365567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:55 compute-0 sudo[365567]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:55 compute-0 sudo[365592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:55 compute-0 sudo[365592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:55 compute-0 sudo[365592]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:55 compute-0 sudo[365617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:28:55 compute-0 sudo[365617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:56 compute-0 sudo[365617]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:28:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 2744df9d-6abd-4b43-a390-b38137f12f0e does not exist
Oct 07 14:28:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c6c3301b-ebb1-4d8d-a962-c7a909abffe4 does not exist
Oct 07 14:28:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b8d84c17-02d5-4693-8391-dc36bbbe8436 does not exist
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:28:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:28:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:28:56 compute-0 sudo[365673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:56 compute-0 sudo[365673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:56 compute-0 sudo[365673]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:56 compute-0 sudo[365698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:28:56 compute-0 sudo[365698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:56 compute-0 sudo[365698]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:56 compute-0 sudo[365723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:56 compute-0 sudo[365723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:56 compute-0 sudo[365723]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:56 compute-0 sudo[365748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:28:56 compute-0 sudo[365748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.656512595 +0000 UTC m=+0.036322732 container create 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:28:56 compute-0 systemd[1]: Started libpod-conmon-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope.
Oct 07 14:28:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.64026665 +0000 UTC m=+0.020076807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.737085787 +0000 UTC m=+0.116895954 container init 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.744884446 +0000 UTC m=+0.124694583 container start 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.748811631 +0000 UTC m=+0.128621788 container attach 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:28:56 compute-0 clever_volhard[365833]: 167 167
Oct 07 14:28:56 compute-0 systemd[1]: libpod-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope: Deactivated successfully.
Oct 07 14:28:56 compute-0 conmon[365833]: conmon 24159a04891d6596201e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope/container/memory.events
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.756154997 +0000 UTC m=+0.135965134 container died 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c554a04a66ab6566aed93cc480c1f3a81ca0a516985e8ad3eef865c7183b8f-merged.mount: Deactivated successfully.
Oct 07 14:28:56 compute-0 podman[365816]: 2025-10-07 14:28:56.799908636 +0000 UTC m=+0.179718763 container remove 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:28:56 compute-0 systemd[1]: libpod-conmon-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope: Deactivated successfully.
Oct 07 14:28:56 compute-0 nova_compute[259550]: 2025-10-07 14:28:56.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:56 compute-0 podman[365856]: 2025-10-07 14:28:56.962431859 +0000 UTC m=+0.039860416 container create 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:28:57 compute-0 systemd[1]: Started libpod-conmon-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope.
Oct 07 14:28:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:57 compute-0 nova_compute[259550]: 2025-10-07 14:28:57.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:28:57 compute-0 podman[365856]: 2025-10-07 14:28:56.94447873 +0000 UTC m=+0.021907307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:28:57 compute-0 podman[365856]: 2025-10-07 14:28:57.043253649 +0000 UTC m=+0.120682196 container init 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:28:57 compute-0 podman[365856]: 2025-10-07 14:28:57.055156657 +0000 UTC m=+0.132585204 container start 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:28:57 compute-0 podman[365856]: 2025-10-07 14:28:57.060098389 +0000 UTC m=+0.137526946 container attach 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:28:57 compute-0 ceph-mon[74295]: pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:58 compute-0 blissful_keldysh[365873]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:28:58 compute-0 blissful_keldysh[365873]: --> relative data size: 1.0
Oct 07 14:28:58 compute-0 blissful_keldysh[365873]: --> All data devices are unavailable
Oct 07 14:28:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:58 compute-0 systemd[1]: libpod-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Deactivated successfully.
Oct 07 14:28:58 compute-0 podman[365856]: 2025-10-07 14:28:58.143702263 +0000 UTC m=+1.221130810 container died 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:28:58 compute-0 systemd[1]: libpod-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Consumed 1.034s CPU time.
Oct 07 14:28:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2-merged.mount: Deactivated successfully.
Oct 07 14:28:58 compute-0 podman[365856]: 2025-10-07 14:28:58.202792942 +0000 UTC m=+1.280221479 container remove 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:28:58 compute-0 systemd[1]: libpod-conmon-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Deactivated successfully.
Oct 07 14:28:58 compute-0 ceph-mon[74295]: pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:28:58 compute-0 sudo[365748]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:58 compute-0 sudo[365916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:58 compute-0 sudo[365916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:58 compute-0 sudo[365916]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:58 compute-0 sudo[365941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:28:58 compute-0 sudo[365941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:58 compute-0 sudo[365941]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:58 compute-0 sudo[365966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:28:58 compute-0 sudo[365966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:58 compute-0 sudo[365966]: pam_unix(sudo:session): session closed for user root
Oct 07 14:28:58 compute-0 sudo[365991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:28:58 compute-0 sudo[365991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.802414623 +0000 UTC m=+0.039665950 container create f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:28:58 compute-0 systemd[1]: Started libpod-conmon-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope.
Oct 07 14:28:58 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.785494551 +0000 UTC m=+0.022745928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:28:58 compute-0 ovn_controller[151684]: 2025-10-07T14:28:58Z|01051|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.886887671 +0000 UTC m=+0.124139038 container init f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.893412415 +0000 UTC m=+0.130663772 container start f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.896898259 +0000 UTC m=+0.134149626 container attach f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:28:58 compute-0 fervent_spence[366073]: 167 167
Oct 07 14:28:58 compute-0 systemd[1]: libpod-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope: Deactivated successfully.
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.89916703 +0000 UTC m=+0.136418387 container died f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:28:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2fb22d0dfc885b32ae040942d1c1e1f40c0f3f92a0df0adb1b7603f7ec82ec1-merged.mount: Deactivated successfully.
Oct 07 14:28:58 compute-0 podman[366057]: 2025-10-07 14:28:58.943343349 +0000 UTC m=+0.180594676 container remove f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:28:58 compute-0 systemd[1]: libpod-conmon-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope: Deactivated successfully.
Oct 07 14:28:58 compute-0 nova_compute[259550]: 2025-10-07 14:28:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:28:59 compute-0 podman[366097]: 2025-10-07 14:28:59.112961522 +0000 UTC m=+0.046481123 container create 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:28:59 compute-0 systemd[1]: Started libpod-conmon-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope.
Oct 07 14:28:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:28:59 compute-0 podman[366097]: 2025-10-07 14:28:59.095759582 +0000 UTC m=+0.029279193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:28:59 compute-0 podman[366097]: 2025-10-07 14:28:59.195367644 +0000 UTC m=+0.128887285 container init 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:28:59 compute-0 podman[366097]: 2025-10-07 14:28:59.202555836 +0000 UTC m=+0.136075437 container start 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:28:59 compute-0 podman[366097]: 2025-10-07 14:28:59.206276955 +0000 UTC m=+0.139796616 container attach 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:28:59 compute-0 competent_bohr[366113]: {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     "0": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "devices": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "/dev/loop3"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             ],
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_name": "ceph_lv0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_size": "21470642176",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "name": "ceph_lv0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "tags": {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_name": "ceph",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.crush_device_class": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.encrypted": "0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_id": "0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.vdo": "0"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             },
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "vg_name": "ceph_vg0"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         }
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     ],
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     "1": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "devices": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "/dev/loop4"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             ],
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_name": "ceph_lv1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_size": "21470642176",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "name": "ceph_lv1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "tags": {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_name": "ceph",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.crush_device_class": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.encrypted": "0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_id": "1",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.vdo": "0"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             },
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "vg_name": "ceph_vg1"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         }
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     ],
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     "2": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "devices": [
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "/dev/loop5"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             ],
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_name": "ceph_lv2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_size": "21470642176",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "name": "ceph_lv2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "tags": {
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.cluster_name": "ceph",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.crush_device_class": "",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.encrypted": "0",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osd_id": "2",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:                 "ceph.vdo": "0"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             },
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "type": "block",
Oct 07 14:28:59 compute-0 competent_bohr[366113]:             "vg_name": "ceph_vg2"
Oct 07 14:28:59 compute-0 competent_bohr[366113]:         }
Oct 07 14:28:59 compute-0 competent_bohr[366113]:     ]
Oct 07 14:28:59 compute-0 competent_bohr[366113]: }
Oct 07 14:28:59 compute-0 systemd[1]: libpod-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope: Deactivated successfully.
Oct 07 14:29:00 compute-0 podman[366097]: 2025-10-07 14:28:59.999958133 +0000 UTC m=+0.933477764 container died 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:29:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.061 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:29:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:29:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256-merged.mount: Deactivated successfully.
Oct 07 14:29:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:00 compute-0 podman[366097]: 2025-10-07 14:29:00.170083258 +0000 UTC m=+1.103602869 container remove 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:29:00 compute-0 systemd[1]: libpod-conmon-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope: Deactivated successfully.
Oct 07 14:29:00 compute-0 sudo[365991]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:00 compute-0 podman[366133]: 2025-10-07 14:29:00.221442391 +0000 UTC m=+0.102344626 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 14:29:00 compute-0 ceph-mon[74295]: pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:00 compute-0 sudo[366159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:29:00 compute-0 sudo[366159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:00 compute-0 sudo[366159]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:00 compute-0 podman[366146]: 2025-10-07 14:29:00.301155251 +0000 UTC m=+0.089656987 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:29:00 compute-0 sudo[366200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:29:00 compute-0 sudo[366200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:00 compute-0 sudo[366200]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:00 compute-0 sudo[366227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:29:00 compute-0 sudo[366227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:00 compute-0 sudo[366227]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:00 compute-0 sudo[366252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:29:00 compute-0 sudo[366252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.765012955 +0000 UTC m=+0.048950969 container create 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:29:00 compute-0 systemd[1]: Started libpod-conmon-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope.
Oct 07 14:29:00 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.737362956 +0000 UTC m=+0.021300990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.838680264 +0000 UTC m=+0.122618298 container init 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.847879869 +0000 UTC m=+0.131817883 container start 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:29:00 compute-0 affectionate_antonelli[366334]: 167 167
Oct 07 14:29:00 compute-0 systemd[1]: libpod-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope: Deactivated successfully.
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.861463602 +0000 UTC m=+0.145401716 container attach 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.862550702 +0000 UTC m=+0.146488776 container died 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff64fd6b0945032ac3b090a9ac7bcda24f26720523e14203dc95b02606f903e0-merged.mount: Deactivated successfully.
Oct 07 14:29:00 compute-0 podman[366317]: 2025-10-07 14:29:00.963736195 +0000 UTC m=+0.247674219 container remove 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:29:00 compute-0 systemd[1]: libpod-conmon-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope: Deactivated successfully.
Oct 07 14:29:01 compute-0 podman[366360]: 2025-10-07 14:29:01.16486534 +0000 UTC m=+0.057578070 container create b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:29:01 compute-0 systemd[1]: Started libpod-conmon-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope.
Oct 07 14:29:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:29:01 compute-0 podman[366360]: 2025-10-07 14:29:01.131599241 +0000 UTC m=+0.024311991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:29:01 compute-0 podman[366360]: 2025-10-07 14:29:01.233426862 +0000 UTC m=+0.126139602 container init b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:29:01 compute-0 podman[366360]: 2025-10-07 14:29:01.239305388 +0000 UTC m=+0.132018128 container start b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:29:01 compute-0 podman[366360]: 2025-10-07 14:29:01.241841796 +0000 UTC m=+0.134554536 container attach b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:29:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:01 compute-0 nova_compute[259550]: 2025-10-07 14:29:01.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:02 compute-0 nova_compute[259550]: 2025-10-07 14:29:02.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:02 compute-0 competent_khayyam[366377]: {
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_id": 2,
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "type": "bluestore"
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     },
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_id": 1,
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "type": "bluestore"
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     },
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_id": 0,
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:         "type": "bluestore"
Oct 07 14:29:02 compute-0 competent_khayyam[366377]:     }
Oct 07 14:29:02 compute-0 competent_khayyam[366377]: }
Oct 07 14:29:02 compute-0 systemd[1]: libpod-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope: Deactivated successfully.
Oct 07 14:29:02 compute-0 podman[366360]: 2025-10-07 14:29:02.177338353 +0000 UTC m=+1.070051093 container died b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6-merged.mount: Deactivated successfully.
Oct 07 14:29:02 compute-0 ceph-mon[74295]: pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:02 compute-0 podman[366360]: 2025-10-07 14:29:02.23448832 +0000 UTC m=+1.127201060 container remove b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:29:02 compute-0 systemd[1]: libpod-conmon-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope: Deactivated successfully.
Oct 07 14:29:02 compute-0 sudo[366252]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:29:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:29:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:29:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:29:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e314e5d-b26e-44fa-bd44-cf99d46d1772 does not exist
Oct 07 14:29:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e0c58db4-949e-4e33-944f-12a15bf3178d does not exist
Oct 07 14:29:02 compute-0 sudo[366424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:29:02 compute-0 sudo[366424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:02 compute-0 sudo[366424]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:02 compute-0 sudo[366449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:29:02 compute-0 sudo[366449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:29:02 compute-0 sudo[366449]: pam_unix(sudo:session): session closed for user root
Oct 07 14:29:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:29:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:29:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:04 compute-0 ceph-mon[74295]: pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:06 compute-0 ceph-mon[74295]: pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:06 compute-0 nova_compute[259550]: 2025-10-07 14:29:06.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:07 compute-0 nova_compute[259550]: 2025-10-07 14:29:07.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:08 compute-0 ceph-mon[74295]: pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:10 compute-0 ceph-mon[74295]: pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:11 compute-0 nova_compute[259550]: 2025-10-07 14:29:11.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:12 compute-0 nova_compute[259550]: 2025-10-07 14:29:12.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:16 compute-0 ceph-mds[100686]: mds.beacon.cephfs.compute-0.xpofvx missed beacon ack from the monitors
Oct 07 14:29:16 compute-0 nova_compute[259550]: 2025-10-07 14:29:16.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:17 compute-0 nova_compute[259550]: 2025-10-07 14:29:17.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:18 compute-0 podman[366475]: 2025-10-07 14:29:18.108538452 +0000 UTC m=+0.086134802 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:29:18 compute-0 podman[366474]: 2025-10-07 14:29:18.112849037 +0000 UTC m=+0.087592031 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:29:18 compute-0 ceph-mon[74295]: pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:19 compute-0 ceph-mon[74295]: pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:19 compute-0 ceph-mon[74295]: pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:19 compute-0 ceph-mon[74295]: pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:20 compute-0 ceph-mon[74295]: pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:21 compute-0 nova_compute[259550]: 2025-10-07 14:29:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:22 compute-0 nova_compute[259550]: 2025-10-07 14:29:22.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:22.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:29:22 compute-0 nova_compute[259550]: 2025-10-07 14:29:22.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:22.201 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:29:22 compute-0 ceph-mon[74295]: pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:29:22
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups']
Oct 07 14:29:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:29:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:29:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:29:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.962 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcaac634-4eae-48ce-9495-48bae919ff70, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0cb29088-c0bc-48df-80b9-858314de808b) old=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:29:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.963 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0cb29088-c0bc-48df-80b9-858314de808b in datapath e73bddf2-052a-4493-8a38-fe42db12e853 updated
Oct 07 14:29:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.964 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e73bddf2-052a-4493-8a38-fe42db12e853, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:29:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2fdac8-01b3-4ff0-988e-abe862240a75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:29:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:24 compute-0 ceph-mon[74295]: pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:25.203 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:29:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:26 compute-0 ceph-mon[74295]: pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:26 compute-0 nova_compute[259550]: 2025-10-07 14:29:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:27 compute-0 nova_compute[259550]: 2025-10-07 14:29:27.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:28 compute-0 ceph-mon[74295]: pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:30 compute-0 ceph-mon[74295]: pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:31 compute-0 podman[366515]: 2025-10-07 14:29:31.054191357 +0000 UTC m=+0.047149401 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:29:31 compute-0 podman[366516]: 2025-10-07 14:29:31.083042198 +0000 UTC m=+0.074589754 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:29:31 compute-0 nova_compute[259550]: 2025-10-07 14:29:31.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:32 compute-0 nova_compute[259550]: 2025-10-07 14:29:32.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:32 compute-0 ceph-mon[74295]: pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:29:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:29:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:29:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:29:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:29:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:29:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:29:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:29:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:34 compute-0 ceph-mon[74295]: pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:36 compute-0 ceph-mon[74295]: pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:36 compute-0 nova_compute[259550]: 2025-10-07 14:29:36.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:37 compute-0 nova_compute[259550]: 2025-10-07 14:29:37.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:38 compute-0 ceph-mon[74295]: pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:40 compute-0 ceph-mon[74295]: pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:40 compute-0 nova_compute[259550]: 2025-10-07 14:29:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:41 compute-0 nova_compute[259550]: 2025-10-07 14:29:41.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:41 compute-0 nova_compute[259550]: 2025-10-07 14:29:41.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:41 compute-0 nova_compute[259550]: 2025-10-07 14:29:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:42 compute-0 nova_compute[259550]: 2025-10-07 14:29:42.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:42 compute-0 ceph-mon[74295]: pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:42 compute-0 nova_compute[259550]: 2025-10-07 14:29:42.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:42 compute-0 nova_compute[259550]: 2025-10-07 14:29:42.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:29:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:44 compute-0 ceph-mon[74295]: pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.467 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcaac634-4eae-48ce-9495-48bae919ff70, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0cb29088-c0bc-48df-80b9-858314de808b) old=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:29:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.469 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0cb29088-c0bc-48df-80b9-858314de808b in datapath e73bddf2-052a-4493-8a38-fe42db12e853 updated
Oct 07 14:29:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.471 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e73bddf2-052a-4493-8a38-fe42db12e853, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:29:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43ac70d2-b1d3-499f-b3ad-bc29fa5f361b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:29:45 compute-0 nova_compute[259550]: 2025-10-07 14:29:45.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:46 compute-0 ceph-mon[74295]: pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:46 compute-0 nova_compute[259550]: 2025-10-07 14:29:46.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:47 compute-0 nova_compute[259550]: 2025-10-07 14:29:47.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:48 compute-0 ceph-mon[74295]: pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:48 compute-0 nova_compute[259550]: 2025-10-07 14:29:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:49 compute-0 podman[366556]: 2025-10-07 14:29:49.058081006 +0000 UTC m=+0.047800368 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 07 14:29:49 compute-0 podman[366555]: 2025-10-07 14:29:49.06421423 +0000 UTC m=+0.056725236 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:29:49 compute-0 nova_compute[259550]: 2025-10-07 14:29:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.029 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:29:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:50 compute-0 ceph-mon[74295]: pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:29:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214961567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.479 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.633 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3891MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.910 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.911 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:29:50 compute-0 nova_compute[259550]: 2025-10-07 14:29:50.951 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:29:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/214961567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:29:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:29:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241488099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.389 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.395 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.482 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.484 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.484 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:29:51 compute-0 nova_compute[259550]: 2025-10-07 14:29:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:52 compute-0 nova_compute[259550]: 2025-10-07 14:29:52.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1241488099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:29:52 compute-0 ceph-mon[74295]: pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:29:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:29:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:54 compute-0 ceph-mon[74295]: pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:54 compute-0 nova_compute[259550]: 2025-10-07 14:29:54.480 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:54 compute-0 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:29:54 compute-0 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:29:54 compute-0 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:29:55 compute-0 nova_compute[259550]: 2025-10-07 14:29:55.022 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:29:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:56 compute-0 ceph-mon[74295]: pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:56 compute-0 nova_compute[259550]: 2025-10-07 14:29:56.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:57 compute-0 nova_compute[259550]: 2025-10-07 14:29:57.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:29:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:29:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:58 compute-0 ceph-mon[74295]: pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:29:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 07 14:29:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 07 14:29:59 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 07 14:30:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.063 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 307 B/s wr, 1 op/s
Oct 07 14:30:00 compute-0 ceph-mon[74295]: osdmap e271: 3 total, 3 up, 3 in
Oct 07 14:30:00 compute-0 ceph-mon[74295]: pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 307 B/s wr, 1 op/s
Oct 07 14:30:00 compute-0 nova_compute[259550]: 2025-10-07 14:30:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:01 compute-0 nova_compute[259550]: 2025-10-07 14:30:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:02 compute-0 podman[366637]: 2025-10-07 14:30:02.085062285 +0000 UTC m=+0.073505386 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:30:02 compute-0 podman[366638]: 2025-10-07 14:30:02.114042208 +0000 UTC m=+0.101843341 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:30:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:02 compute-0 ceph-mon[74295]: pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 07 14:30:02 compute-0 sudo[366677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:02 compute-0 sudo[366677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:02 compute-0 sudo[366677]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:02 compute-0 sudo[366702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:30:02 compute-0 sudo[366702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:02 compute-0 sudo[366702]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:02 compute-0 sudo[366727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:02 compute-0 sudo[366727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:02 compute-0 sudo[366727]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:02 compute-0 sudo[366752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:30:02 compute-0 sudo[366752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.783 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.804 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:30:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.905 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.907 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.914 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:30:02 compute-0 nova_compute[259550]: 2025-10-07 14:30:02.915 2 INFO nova.compute.claims [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.020 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:03 compute-0 sudo[366752]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 03909f70-cb23-4d6c-a087-d46c22d41d37 does not exist
Oct 07 14:30:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ef0c811d-32fa-4e7a-9758-241e22c356ef does not exist
Oct 07 14:30:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 28e8a9aa-ce97-4f14-bae2-63ede14be609 does not exist
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:30:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:30:03 compute-0 sudo[366827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:03 compute-0 sudo[366827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:03 compute-0 sudo[366827]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:03 compute-0 sudo[366852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:30:03 compute-0 sudo[366852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:03 compute-0 sudo[366852]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:03 compute-0 sudo[366877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:30:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981700070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:03 compute-0 sudo[366877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:03 compute-0 sudo[366877]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.508 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.515 2 DEBUG nova.compute.provider_tree [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.531 2 DEBUG nova.scheduler.client.report [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:30:03 compute-0 sudo[366904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:30:03 compute-0 sudo[366904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.551 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.552 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.592 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.593 2 DEBUG nova.network.neutron [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.615 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.636 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.729 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.731 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.731 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating image(s)
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.753 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.782 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.808 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.813 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "ab71a249494f042ede61a514c9ad0d1e110b7453" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.814 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "ab71a249494f042ede61a514c9ad0d1e110b7453" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:03 compute-0 podman[367018]: 2025-10-07 14:30:03.882828972 +0000 UTC m=+0.043451882 container create cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:30:03 compute-0 systemd[1]: Started libpod-conmon-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope.
Oct 07 14:30:03 compute-0 podman[367018]: 2025-10-07 14:30:03.862621981 +0000 UTC m=+0.023244911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.972 2 DEBUG nova.network.neutron [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 14:30:03 compute-0 nova_compute[259550]: 2025-10-07 14:30:03.974 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:30:03 compute-0 podman[367018]: 2025-10-07 14:30:03.983768359 +0000 UTC m=+0.144391349 container init cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:30:03 compute-0 podman[367018]: 2025-10-07 14:30:03.994541516 +0000 UTC m=+0.155164426 container start cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:30:03 compute-0 podman[367018]: 2025-10-07 14:30:03.999341555 +0000 UTC m=+0.159964465 container attach cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:30:04 compute-0 serene_torvalds[367035]: 167 167
Oct 07 14:30:04 compute-0 systemd[1]: libpod-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope: Deactivated successfully.
Oct 07 14:30:04 compute-0 podman[367018]: 2025-10-07 14:30:04.001057471 +0000 UTC m=+0.161680401 container died cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb10c6f878c74439461db26139e427670b3bac64d1055652257173cd3c75a22c-merged.mount: Deactivated successfully.
Oct 07 14:30:04 compute-0 podman[367018]: 2025-10-07 14:30:04.041951363 +0000 UTC m=+0.202574273 container remove cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:30:04 compute-0 systemd[1]: libpod-conmon-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope: Deactivated successfully.
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.052 2 DEBUG nova.virt.libvirt.imagebackend [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.094 2 DEBUG nova.virt.libvirt.imagebackend [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.095 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] cloning images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21@snap to None/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:30:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.193 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "ab71a249494f042ede61a514c9ad0d1e110b7453" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:04 compute-0 podman[367125]: 2025-10-07 14:30:04.220587437 +0000 UTC m=+0.049750741 container create 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:30:04 compute-0 systemd[1]: Started libpod-conmon-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope.
Oct 07 14:30:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:04 compute-0 podman[367125]: 2025-10-07 14:30:04.201441305 +0000 UTC m=+0.030604669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:04 compute-0 podman[367125]: 2025-10-07 14:30:04.309712918 +0000 UTC m=+0.138876222 container init 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:30:04 compute-0 podman[367125]: 2025-10-07 14:30:04.318449352 +0000 UTC m=+0.147612656 container start 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.320 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] resizing rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:30:04 compute-0 podman[367125]: 2025-10-07 14:30:04.323516586 +0000 UTC m=+0.152679900 container attach 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:30:04 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:30:04 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:30:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1981700070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:04 compute-0 ceph-mon[74295]: pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.420 2 DEBUG nova.objects.instance [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'migration_context' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.436 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.437 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Ensure instance console log exists: /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.440 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c1aca32feac3ec36f1e6a9b17261b81f',container_format='bare',created_at=2025-10-07T14:29:58Z,direct_url=<?>,disk_format='raw',id=3b9fdcc4-750e-4e5b-a404-723fbbf47b21,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2093359418',owner='99916734e57d4c4a8076c48736dca89b',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-07T14:29:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '3b9fdcc4-750e-4e5b-a404-723fbbf47b21'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.446 2 WARNING nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.455 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.457 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.462 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.463 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.464 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.464 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c1aca32feac3ec36f1e6a9b17261b81f',container_format='bare',created_at=2025-10-07T14:29:58Z,direct_url=<?>,disk_format='raw',id=3b9fdcc4-750e-4e5b-a404-723fbbf47b21,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2093359418',owner='99916734e57d4c4a8076c48736dca89b',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-07T14:29:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.465 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.465 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.468 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.470 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:30:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71379425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:30:04 compute-0 nova_compute[259550]: 2025-10-07 14:30:04.983 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.004 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.008 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/71379425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:30:05 compute-0 great_hawking[367180]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:30:05 compute-0 great_hawking[367180]: --> relative data size: 1.0
Oct 07 14:30:05 compute-0 great_hawking[367180]: --> All data devices are unavailable
Oct 07 14:30:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:30:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079564725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:30:05 compute-0 systemd[1]: libpod-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Deactivated successfully.
Oct 07 14:30:05 compute-0 systemd[1]: libpod-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Consumed 1.058s CPU time.
Oct 07 14:30:05 compute-0 podman[367125]: 2025-10-07 14:30:05.451568699 +0000 UTC m=+1.280732013 container died 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.455 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.458 2 DEBUG nova.objects.instance [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97-merged.mount: Deactivated successfully.
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.489 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <uuid>5dc59e63-b96a-4de8-a978-dc88a8d1dadd</uuid>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <name>instance-00000068</name>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:name>instance-depend-image</nova:name>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:30:04</nova:creationTime>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:user uuid="abb7b1705a844f358f445b980c484003">tempest-ImageDependencyTests-283254304-project-member</nova:user>
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <nova:project uuid="99916734e57d4c4a8076c48736dca89b">tempest-ImageDependencyTests-283254304</nova:project>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="3b9fdcc4-750e-4e5b-a404-723fbbf47b21"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <system>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="serial">5dc59e63-b96a-4de8-a978-dc88a8d1dadd</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="uuid">5dc59e63-b96a-4de8-a978-dc88a8d1dadd</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </system>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <os>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </os>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <features>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </features>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk">
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config">
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:30:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/console.log" append="off"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <video>
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </video>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:30:05 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:30:05 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:30:05 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:30:05 compute-0 nova_compute[259550]: </domain>
Oct 07 14:30:05 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:30:05 compute-0 podman[367125]: 2025-10-07 14:30:05.511417708 +0000 UTC m=+1.340581012 container remove 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:30:05 compute-0 systemd[1]: libpod-conmon-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Deactivated successfully.
Oct 07 14:30:05 compute-0 sudo[366904]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.556 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.556 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.557 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Using config drive
Oct 07 14:30:05 compute-0 nova_compute[259550]: 2025-10-07 14:30:05.584 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:05 compute-0 sudo[367321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:05 compute-0 sudo[367321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:05 compute-0 sudo[367321]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:05 compute-0 sudo[367364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:30:05 compute-0 sudo[367364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:05 compute-0 sudo[367364]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:05 compute-0 sudo[367389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:05 compute-0 sudo[367389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:05 compute-0 sudo[367389]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:05 compute-0 sudo[367414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:30:05 compute-0 sudo[367414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.099010739 +0000 UTC m=+0.050970043 container create 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.132 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating config drive at /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.138 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53u0ocmz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:06 compute-0 systemd[1]: Started libpod-conmon-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope.
Oct 07 14:30:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.07022726 +0000 UTC m=+0.022186574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.188348406 +0000 UTC m=+0.140307700 container init 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.19597057 +0000 UTC m=+0.147929864 container start 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.19896308 +0000 UTC m=+0.150922394 container attach 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:30:06 compute-0 optimistic_perlman[367498]: 167 167
Oct 07 14:30:06 compute-0 systemd[1]: libpod-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope: Deactivated successfully.
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.201694063 +0000 UTC m=+0.153653367 container died 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 14:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-07330ddf16c0bd219cdefec352b28c7c8260c9325c3048129294865aebf3b0c4-merged.mount: Deactivated successfully.
Oct 07 14:30:06 compute-0 podman[367482]: 2025-10-07 14:30:06.240582392 +0000 UTC m=+0.192541686 container remove 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:30:06 compute-0 systemd[1]: libpod-conmon-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope: Deactivated successfully.
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.282 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53u0ocmz" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.306 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.310 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2079564725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:30:06 compute-0 ceph-mon[74295]: pgmap v2023: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct 07 14:30:06 compute-0 podman[367544]: 2025-10-07 14:30:06.394665159 +0000 UTC m=+0.039874397 container create cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:30:06 compute-0 systemd[1]: Started libpod-conmon-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope.
Oct 07 14:30:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:06 compute-0 podman[367544]: 2025-10-07 14:30:06.377209542 +0000 UTC m=+0.022418800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:06 compute-0 podman[367544]: 2025-10-07 14:30:06.480312927 +0000 UTC m=+0.125522185 container init cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:30:06 compute-0 podman[367544]: 2025-10-07 14:30:06.48715707 +0000 UTC m=+0.132366308 container start cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:30:06 compute-0 podman[367544]: 2025-10-07 14:30:06.490393657 +0000 UTC m=+0.135602915 container attach cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.498 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.501 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deleting local config drive /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config because it was imported into RBD.
Oct 07 14:30:06 compute-0 systemd-machined[214580]: New machine qemu-131-instance-00000068.
Oct 07 14:30:06 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000068.
Oct 07 14:30:06 compute-0 nova_compute[259550]: 2025-10-07 14:30:06.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]: {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     "0": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "devices": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "/dev/loop3"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             ],
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_name": "ceph_lv0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_size": "21470642176",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "name": "ceph_lv0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "tags": {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_name": "ceph",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.crush_device_class": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.encrypted": "0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_id": "0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.vdo": "0"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             },
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "vg_name": "ceph_vg0"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         }
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     ],
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     "1": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "devices": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "/dev/loop4"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             ],
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_name": "ceph_lv1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_size": "21470642176",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "name": "ceph_lv1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "tags": {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_name": "ceph",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.crush_device_class": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.encrypted": "0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_id": "1",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.vdo": "0"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             },
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "vg_name": "ceph_vg1"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         }
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     ],
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     "2": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "devices": [
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "/dev/loop5"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             ],
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_name": "ceph_lv2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_size": "21470642176",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "name": "ceph_lv2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "tags": {
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.cluster_name": "ceph",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.crush_device_class": "",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.encrypted": "0",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osd_id": "2",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:                 "ceph.vdo": "0"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             },
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "type": "block",
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:             "vg_name": "ceph_vg2"
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:         }
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]:     ]
Oct 07 14:30:07 compute-0 affectionate_burnell[367580]: }
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:07 compute-0 systemd[1]: libpod-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope: Deactivated successfully.
Oct 07 14:30:07 compute-0 podman[367544]: 2025-10-07 14:30:07.217917097 +0000 UTC m=+0.863126335 container died cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:30:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0-merged.mount: Deactivated successfully.
Oct 07 14:30:07 compute-0 podman[367544]: 2025-10-07 14:30:07.279360559 +0000 UTC m=+0.924569797 container remove cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 14:30:07 compute-0 systemd[1]: libpod-conmon-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope: Deactivated successfully.
Oct 07 14:30:07 compute-0 sudo[367414]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:07 compute-0 sudo[367618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:07 compute-0 sudo[367618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:07 compute-0 sudo[367618]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:07 compute-0 sudo[367668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:30:07 compute-0 sudo[367668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:07 compute-0 sudo[367668]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:07 compute-0 sudo[367708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:07 compute-0 sudo[367708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:07 compute-0 sudo[367708]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:07 compute-0 sudo[367735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:30:07 compute-0 sudo[367735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:07 compute-0 podman[367801]: 2025-10-07 14:30:07.92463121 +0000 UTC m=+0.039010693 container create e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:30:07 compute-0 systemd[1]: Started libpod-conmon-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope.
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.972 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847407.9720669, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.974 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Resumed (Lifecycle Event)
Oct 07 14:30:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.978 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.979 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.988 2 INFO nova.virt.libvirt.driver [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance spawned successfully.
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.989 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:30:07 compute-0 nova_compute[259550]: 2025-10-07 14:30:07.997 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.001 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:30:08 compute-0 podman[367801]: 2025-10-07 14:30:07.908453268 +0000 UTC m=+0.022832771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:08 compute-0 podman[367801]: 2025-10-07 14:30:08.00919145 +0000 UTC m=+0.123570953 container init e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:30:08 compute-0 podman[367801]: 2025-10-07 14:30:08.016542826 +0000 UTC m=+0.130922309 container start e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:30:08 compute-0 podman[367801]: 2025-10-07 14:30:08.019665869 +0000 UTC m=+0.134045362 container attach e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:30:08 compute-0 determined_mclaren[367817]: 167 167
Oct 07 14:30:08 compute-0 systemd[1]: libpod-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope: Deactivated successfully.
Oct 07 14:30:08 compute-0 conmon[367817]: conmon e4c49e1a650564bec231 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope/container/memory.events
Oct 07 14:30:08 compute-0 podman[367822]: 2025-10-07 14:30:08.063401409 +0000 UTC m=+0.026548501 container died e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f3b0b4e813a3456f2e3105fa696680e5fc1238a055594ab37b49c227a4cee3-merged.mount: Deactivated successfully.
Oct 07 14:30:08 compute-0 podman[367822]: 2025-10-07 14:30:08.102265206 +0000 UTC m=+0.065412288 container remove e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:30:08 compute-0 systemd[1]: libpod-conmon-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope: Deactivated successfully.
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.149 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.150 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.150 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.151 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.152 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.153 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.156 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847407.9784286, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.157 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Started (Lifecycle Event)
Oct 07 14:30:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.196 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.201 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.246 2 INFO nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 4.52 seconds to spawn the instance on the hypervisor.
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.246 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:30:08 compute-0 ceph-mon[74295]: pgmap v2024: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct 07 14:30:08 compute-0 podman[367843]: 2025-10-07 14:30:08.28317721 +0000 UTC m=+0.058448782 container create bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.323 2 INFO nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 5.45 seconds to build instance.
Oct 07 14:30:08 compute-0 systemd[1]: Started libpod-conmon-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope.
Oct 07 14:30:08 compute-0 podman[367843]: 2025-10-07 14:30:08.267335867 +0000 UTC m=+0.042607449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:30:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:30:08 compute-0 nova_compute[259550]: 2025-10-07 14:30:08.360 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:30:08 compute-0 podman[367843]: 2025-10-07 14:30:08.381885948 +0000 UTC m=+0.157157540 container init bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 07 14:30:08 compute-0 podman[367843]: 2025-10-07 14:30:08.38869838 +0000 UTC m=+0.163969972 container start bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:30:08 compute-0 podman[367843]: 2025-10-07 14:30:08.396959001 +0000 UTC m=+0.172230593 container attach bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 07 14:30:09 compute-0 tender_golick[367860]: {
Oct 07 14:30:09 compute-0 tender_golick[367860]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_id": 2,
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "type": "bluestore"
Oct 07 14:30:09 compute-0 tender_golick[367860]:     },
Oct 07 14:30:09 compute-0 tender_golick[367860]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_id": 1,
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "type": "bluestore"
Oct 07 14:30:09 compute-0 tender_golick[367860]:     },
Oct 07 14:30:09 compute-0 tender_golick[367860]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_id": 0,
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:30:09 compute-0 tender_golick[367860]:         "type": "bluestore"
Oct 07 14:30:09 compute-0 tender_golick[367860]:     }
Oct 07 14:30:09 compute-0 tender_golick[367860]: }
Oct 07 14:30:09 compute-0 systemd[1]: libpod-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Deactivated successfully.
Oct 07 14:30:09 compute-0 systemd[1]: libpod-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Consumed 1.052s CPU time.
Oct 07 14:30:09 compute-0 podman[367843]: 2025-10-07 14:30:09.43844568 +0000 UTC m=+1.213717252 container died bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 07 14:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227-merged.mount: Deactivated successfully.
Oct 07 14:30:09 compute-0 podman[367843]: 2025-10-07 14:30:09.510759372 +0000 UTC m=+1.286030944 container remove bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:30:09 compute-0 systemd[1]: libpod-conmon-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Deactivated successfully.
Oct 07 14:30:09 compute-0 sudo[367735]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:30:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:30:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 80341229-ec6e-42bc-b90a-b364c485259c does not exist
Oct 07 14:30:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7947588a-38af-4916-a88c-5fdd62c08e77 does not exist
Oct 07 14:30:09 compute-0 sudo[367907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:30:09 compute-0 sudo[367907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:09 compute-0 sudo[367907]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:09 compute-0 sudo[367932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:30:09 compute-0 sudo[367932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:30:09 compute-0 sudo[367932]: pam_unix(sudo:session): session closed for user root
Oct 07 14:30:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 16 KiB/s wr, 54 op/s
Oct 07 14:30:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:30:10 compute-0 ceph-mon[74295]: pgmap v2025: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 16 KiB/s wr, 54 op/s
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.588114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410588180, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1337, "num_deletes": 252, "total_data_size": 1967648, "memory_usage": 2003152, "flush_reason": "Manual Compaction"}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410618342, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1936590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41190, "largest_seqno": 42526, "table_properties": {"data_size": 1930317, "index_size": 3476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13465, "raw_average_key_size": 20, "raw_value_size": 1917570, "raw_average_value_size": 2853, "num_data_blocks": 156, "num_entries": 672, "num_filter_entries": 672, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847277, "oldest_key_time": 1759847277, "file_creation_time": 1759847410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 30250 microseconds, and 5670 cpu microseconds.
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.618373) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1936590 bytes OK
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.618390) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625602) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625620) EVENT_LOG_v1 {"time_micros": 1759847410625615, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1961673, prev total WAL file size 1988161, number of live WAL files 2.
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.626640) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1891KB)], [92(10046KB)]
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410626710, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12224361, "oldest_snapshot_seqno": -1}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6648 keys, 10593797 bytes, temperature: kUnknown
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410781499, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10593797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10546599, "index_size": 29486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 170681, "raw_average_key_size": 25, "raw_value_size": 10424761, "raw_average_value_size": 1568, "num_data_blocks": 1168, "num_entries": 6648, "num_filter_entries": 6648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.781742) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10593797 bytes
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.788833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.9 rd, 68.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.5) OK, records in: 7168, records dropped: 520 output_compression: NoCompression
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.788882) EVENT_LOG_v1 {"time_micros": 1759847410788863, "job": 54, "event": "compaction_finished", "compaction_time_micros": 154866, "compaction_time_cpu_micros": 26766, "output_level": 6, "num_output_files": 1, "total_output_size": 10593797, "num_input_records": 7168, "num_output_records": 6648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410789497, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410791409, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.626481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:10 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:11 compute-0 nova_compute[259550]: 2025-10-07 14:30:11.264 2 DEBUG nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:30:11 compute-0 nova_compute[259550]: 2025-10-07 14:30:11.321 2 INFO nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] instance snapshotting
Oct 07 14:30:11 compute-0 nova_compute[259550]: 2025-10-07 14:30:11.809 2 INFO nova.virt.libvirt.driver [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Beginning live snapshot process
Oct 07 14:30:11 compute-0 nova_compute[259550]: 2025-10-07 14:30:11.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:12 compute-0 nova_compute[259550]: 2025-10-07 14:30:12.026 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] creating snapshot(08102c1045c946bf9671c4953ab06e52) on rbd image(5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:30:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 07 14:30:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 07 14:30:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 07 14:30:12 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 07 14:30:12 compute-0 nova_compute[259550]: 2025-10-07 14:30:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:12 compute-0 nova_compute[259550]: 2025-10-07 14:30:12.228 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] cloning vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk@08102c1045c946bf9671c4953ab06e52 to images/ca202b7f-8033-4699-bf89-025968f471a0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:30:12 compute-0 nova_compute[259550]: 2025-10-07 14:30:12.335 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] flattening images/ca202b7f-8033-4699-bf89-025968f471a0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:30:12 compute-0 nova_compute[259550]: 2025-10-07 14:30:12.635 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] removing snapshot(08102c1045c946bf9671c4953ab06e52) on rbd image(5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:30:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:13 compute-0 ceph-mon[74295]: pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 07 14:30:13 compute-0 ceph-mon[74295]: osdmap e272: 3 total, 3 up, 3 in
Oct 07 14:30:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 07 14:30:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 07 14:30:13 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 07 14:30:13 compute-0 nova_compute[259550]: 2025-10-07 14:30:13.268 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] creating snapshot(snap) on rbd image(ca202b7f-8033-4699-bf89-025968f471a0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:30:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 23 KiB/s wr, 102 op/s
Oct 07 14:30:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 07 14:30:14 compute-0 ceph-mon[74295]: osdmap e273: 3 total, 3 up, 3 in
Oct 07 14:30:14 compute-0 ceph-mon[74295]: pgmap v2029: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 23 KiB/s wr, 102 op/s
Oct 07 14:30:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 07 14:30:14 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 07 14:30:15 compute-0 ceph-mon[74295]: osdmap e274: 3 total, 3 up, 3 in
Oct 07 14:30:15 compute-0 nova_compute[259550]: 2025-10-07 14:30:15.894 2 INFO nova.virt.libvirt.driver [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Snapshot image upload complete
Oct 07 14:30:15 compute-0 nova_compute[259550]: 2025-10-07 14:30:15.894 2 INFO nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 4.57 seconds to snapshot the instance on the hypervisor.
Oct 07 14:30:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.2 KiB/s wr, 123 op/s
Oct 07 14:30:16 compute-0 ceph-mon[74295]: pgmap v2031: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.2 KiB/s wr, 123 op/s
Oct 07 14:30:16 compute-0 nova_compute[259550]: 2025-10-07 14:30:16.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:17 compute-0 nova_compute[259550]: 2025-10-07 14:30:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 07 14:30:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 07 14:30:17 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 07 14:30:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.120 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.121 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.122 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.122 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.123 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.125 2 INFO nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Terminating instance
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.127 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.128 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquired lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.128 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:30:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 KiB/s wr, 105 op/s
Oct 07 14:30:18 compute-0 ceph-mon[74295]: osdmap e275: 3 total, 3 up, 3 in
Oct 07 14:30:18 compute-0 ceph-mon[74295]: pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 KiB/s wr, 105 op/s
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.653 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.962 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.977 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Releasing lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:30:18 compute-0 nova_compute[259550]: 2025-10-07 14:30:18.978 2 DEBUG nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:30:19 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 07 14:30:19 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Consumed 1.712s CPU time.
Oct 07 14:30:19 compute-0 systemd-machined[214580]: Machine qemu-131-instance-00000068 terminated.
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.201 2 INFO nova.virt.libvirt.driver [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance destroyed successfully.
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.202 2 DEBUG nova.objects.instance [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'resources' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:30:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 07 14:30:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 07 14:30:19 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.732 2 INFO nova.virt.libvirt.driver [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deleting instance files /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_del
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.733 2 INFO nova.virt.libvirt.driver [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deletion of /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_del complete
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.935 2 INFO nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.935 2 DEBUG oslo.service.loopingcall [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.936 2 DEBUG nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:30:19 compute-0 nova_compute[259550]: 2025-10-07 14:30:19.936 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:30:20 compute-0 podman[368120]: 2025-10-07 14:30:20.08309955 +0000 UTC m=+0.074637335 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:30:20 compute-0 podman[368121]: 2025-10-07 14:30:20.083875891 +0000 UTC m=+0.065447400 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:30:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 KiB/s wr, 97 op/s
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.195 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.248 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.307 2 INFO nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 0.37 seconds to deallocate network for instance.
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.440 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.440 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:20 compute-0 ceph-mon[74295]: osdmap e276: 3 total, 3 up, 3 in
Oct 07 14:30:20 compute-0 ceph-mon[74295]: pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 KiB/s wr, 97 op/s
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.496 2 DEBUG oslo_concurrency.processutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:30:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680111326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.951 2 DEBUG oslo_concurrency.processutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.958 2 DEBUG nova.compute.provider_tree [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:30:20 compute-0 nova_compute[259550]: 2025-10-07 14:30:20.980 2 DEBUG nova.scheduler.client.report [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:30:21 compute-0 nova_compute[259550]: 2025-10-07 14:30:21.051 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:21 compute-0 nova_compute[259550]: 2025-10-07 14:30:21.091 2 INFO nova.scheduler.client.report [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Deleted allocations for instance 5dc59e63-b96a-4de8-a978-dc88a8d1dadd
Oct 07 14:30:21 compute-0 nova_compute[259550]: 2025-10-07 14:30:21.164 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1680111326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:21 compute-0 nova_compute[259550]: 2025-10-07 14:30:21.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 4.2 KiB/s wr, 128 op/s
Oct 07 14:30:22 compute-0 nova_compute[259550]: 2025-10-07 14:30:22.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.283 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:30:22 compute-0 nova_compute[259550]: 2025-10-07 14:30:22.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.286 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:30:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:30:22 compute-0 ceph-mon[74295]: pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 4.2 KiB/s wr, 128 op/s
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:30:22
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'images']
Oct 07 14:30:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:30:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 07 14:30:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 07 14:30:22 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:30:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:30:23 compute-0 ceph-mon[74295]: osdmap e277: 3 total, 3 up, 3 in
Oct 07 14:30:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.5 KiB/s wr, 143 op/s
Oct 07 14:30:24 compute-0 ceph-mon[74295]: pgmap v2038: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.5 KiB/s wr, 143 op/s
Oct 07 14:30:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 4.7 KiB/s wr, 123 op/s
Oct 07 14:30:26 compute-0 ceph-mon[74295]: pgmap v2039: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 4.7 KiB/s wr, 123 op/s
Oct 07 14:30:26 compute-0 nova_compute[259550]: 2025-10-07 14:30:26.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:27 compute-0 nova_compute[259550]: 2025-10-07 14:30:27.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 07 14:30:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 07 14:30:27 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.898177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427898280, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 479, "num_deletes": 254, "total_data_size": 378359, "memory_usage": 387728, "flush_reason": "Manual Compaction"}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427903417, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 333876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42527, "largest_seqno": 43005, "table_properties": {"data_size": 331159, "index_size": 753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7183, "raw_average_key_size": 20, "raw_value_size": 325666, "raw_average_value_size": 946, "num_data_blocks": 33, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847410, "oldest_key_time": 1759847410, "file_creation_time": 1759847427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5297 microseconds, and 2573 cpu microseconds.
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.903486) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 333876 bytes OK
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.903519) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906292) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906348) EVENT_LOG_v1 {"time_micros": 1759847427906336, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 375484, prev total WAL file size 375484, number of live WAL files 2.
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.907202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(326KB)], [95(10MB)]
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427907254, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10927673, "oldest_snapshot_seqno": -1}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6476 keys, 7664385 bytes, temperature: kUnknown
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427960808, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7664385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7622798, "index_size": 24325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 167332, "raw_average_key_size": 25, "raw_value_size": 7508349, "raw_average_value_size": 1159, "num_data_blocks": 952, "num_entries": 6476, "num_filter_entries": 6476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.961242) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7664385 bytes
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.963149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.5 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(55.7) write-amplify(23.0) OK, records in: 6992, records dropped: 516 output_compression: NoCompression
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.963180) EVENT_LOG_v1 {"time_micros": 1759847427963166, "job": 56, "event": "compaction_finished", "compaction_time_micros": 53705, "compaction_time_cpu_micros": 20180, "output_level": 6, "num_output_files": 1, "total_output_size": 7664385, "num_input_records": 6992, "num_output_records": 6476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427963438, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427966558, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.907046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct 07 14:30:28 compute-0 ceph-mon[74295]: osdmap e278: 3 total, 3 up, 3 in
Oct 07 14:30:28 compute-0 ceph-mon[74295]: pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct 07 14:30:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 767 B/s wr, 18 op/s
Oct 07 14:30:30 compute-0 ceph-mon[74295]: pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 767 B/s wr, 18 op/s
Oct 07 14:30:31 compute-0 nova_compute[259550]: 2025-10-07 14:30:31.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 661 B/s wr, 15 op/s
Oct 07 14:30:32 compute-0 nova_compute[259550]: 2025-10-07 14:30:32.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:32 compute-0 ceph-mon[74295]: pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 661 B/s wr, 15 op/s
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:30:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:30:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:30:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:30:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:30:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:30:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:32 compute-0 podman[368179]: 2025-10-07 14:30:32.976731842 +0000 UTC m=+0.057019704 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:30:33 compute-0 podman[368180]: 2025-10-07 14:30:33.01483616 +0000 UTC m=+0.090983392 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 07 14:30:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:30:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:30:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:34 compute-0 nova_compute[259550]: 2025-10-07 14:30:34.202 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847419.1995983, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:30:34 compute-0 nova_compute[259550]: 2025-10-07 14:30:34.203 2 INFO nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Stopped (Lifecycle Event)
Oct 07 14:30:34 compute-0 nova_compute[259550]: 2025-10-07 14:30:34.233 2 DEBUG nova.compute.manager [None req-6d30e21f-b3c4-4fcc-a718-a5d5dec39bf8 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:30:34 compute-0 ceph-mon[74295]: pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:30:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9371 writes, 42K keys, 9371 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9371 writes, 9371 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1611 writes, 7729 keys, 1611 commit groups, 1.0 writes per commit group, ingest: 9.71 MB, 0.02 MB/s
                                           Interval WAL: 1611 writes, 1611 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.6      0.68              0.16        28    0.024       0      0       0.0       0.0
                                             L6      1/0    7.31 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    139.6    115.4      1.84              0.61        27    0.068    149K    15K       0.0       0.0
                                            Sum      1/0    7.31 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    101.7    104.6      2.52              0.77        55    0.046    149K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    111.7    108.4      0.64              0.19        14    0.046     48K   3615       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    139.6    115.4      1.84              0.61        27    0.068    149K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.9      0.68              0.16        27    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 2.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 28.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000285 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1896,27.84 MB,9.15937%) FilterBlock(56,419.86 KB,0.134875%) IndexBlock(56,734.67 KB,0.236004%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 14:30:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:36 compute-0 ceph-mon[74295]: pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:36 compute-0 nova_compute[259550]: 2025-10-07 14:30:36.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:37 compute-0 nova_compute[259550]: 2025-10-07 14:30:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:38 compute-0 ceph-mon[74295]: pgmap v2046: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:40 compute-0 ceph-mon[74295]: pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:41 compute-0 nova_compute[259550]: 2025-10-07 14:30:41.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:41 compute-0 nova_compute[259550]: 2025-10-07 14:30:41.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:42 compute-0 nova_compute[259550]: 2025-10-07 14:30:42.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:42 compute-0 ceph-mon[74295]: pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:42 compute-0 nova_compute[259550]: 2025-10-07 14:30:42.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:43 compute-0 nova_compute[259550]: 2025-10-07 14:30:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:44 compute-0 ceph-mon[74295]: pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:44 compute-0 nova_compute[259550]: 2025-10-07 14:30:44.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:44 compute-0 nova_compute[259550]: 2025-10-07 14:30:44.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:30:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:46 compute-0 ceph-mon[74295]: pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:46 compute-0 nova_compute[259550]: 2025-10-07 14:30:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:46 compute-0 nova_compute[259550]: 2025-10-07 14:30:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:47 compute-0 nova_compute[259550]: 2025-10-07 14:30:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:48 compute-0 ceph-mon[74295]: pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:49 compute-0 nova_compute[259550]: 2025-10-07 14:30:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:49 compute-0 nova_compute[259550]: 2025-10-07 14:30:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:50 compute-0 ceph-mon[74295]: pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:30:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224071388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.487 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:50 compute-0 podman[368246]: 2025-10-07 14:30:50.582386363 +0000 UTC m=+0.056991704 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:30:50 compute-0 podman[368247]: 2025-10-07 14:30:50.590317625 +0000 UTC m=+0.062297286 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.658 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3882MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.742 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.742 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:30:50 compute-0 nova_compute[259550]: 2025-10-07 14:30:50.756 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:30:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:30:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016555259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.226 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.233 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:30:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4224071388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2016555259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.408 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.450 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.451 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:30:51 compute-0 nova_compute[259550]: 2025-10-07 14:30:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:52 compute-0 nova_compute[259550]: 2025-10-07 14:30:52.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:52 compute-0 ceph-mon[74295]: pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:30:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:30:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.920681) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847452920739, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 461, "num_deletes": 257, "total_data_size": 379656, "memory_usage": 388512, "flush_reason": "Manual Compaction"}
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847452994088, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 376294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43006, "largest_seqno": 43466, "table_properties": {"data_size": 373651, "index_size": 679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6127, "raw_average_key_size": 18, "raw_value_size": 368430, "raw_average_value_size": 1086, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847428, "oldest_key_time": 1759847428, "file_creation_time": 1759847452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 73476 microseconds, and 2244 cpu microseconds.
Oct 07 14:30:52 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.994157) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 376294 bytes OK
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.994185) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.031998) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032047) EVENT_LOG_v1 {"time_micros": 1759847453032038, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 376868, prev total WAL file size 376868, number of live WAL files 2.
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373632' seq:0, type:0; will stop at (end)
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(367KB)], [98(7484KB)]
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453033022, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 8040679, "oldest_snapshot_seqno": -1}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6294 keys, 7918275 bytes, temperature: kUnknown
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453172278, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7918275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7877073, "index_size": 24388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164489, "raw_average_key_size": 26, "raw_value_size": 7764899, "raw_average_value_size": 1233, "num_data_blocks": 951, "num_entries": 6294, "num_filter_entries": 6294, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.172690) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7918275 bytes
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.181290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.7 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(42.4) write-amplify(21.0) OK, records in: 6815, records dropped: 521 output_compression: NoCompression
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.181312) EVENT_LOG_v1 {"time_micros": 1759847453181300, "job": 58, "event": "compaction_finished", "compaction_time_micros": 139420, "compaction_time_cpu_micros": 23731, "output_level": 6, "num_output_files": 1, "total_output_size": 7918275, "num_input_records": 6815, "num_output_records": 6294, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453181581, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453183352, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:30:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:54 compute-0 ceph-mon[74295]: pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:56 compute-0 ceph-mon[74295]: pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:56 compute-0 nova_compute[259550]: 2025-10-07 14:30:56.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:57 compute-0 nova_compute[259550]: 2025-10-07 14:30:57.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:30:57 compute-0 nova_compute[259550]: 2025-10-07 14:30:57.451 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:30:57 compute-0 nova_compute[259550]: 2025-10-07 14:30:57.452 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:30:57 compute-0 nova_compute[259550]: 2025-10-07 14:30:57.452 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:30:57 compute-0 nova_compute[259550]: 2025-10-07 14:30:57.477 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:30:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:30:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:30:58 compute-0 ceph-mon[74295]: pgmap v2056: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.063 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:31:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:31:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:31:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:00 compute-0 ceph-mon[74295]: pgmap v2057: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:01 compute-0 nova_compute[259550]: 2025-10-07 14:31:01.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:02 compute-0 nova_compute[259550]: 2025-10-07 14:31:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:02 compute-0 ceph-mon[74295]: pgmap v2058: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:02 compute-0 nova_compute[259550]: 2025-10-07 14:31:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:04 compute-0 podman[368307]: 2025-10-07 14:31:04.077574619 +0000 UTC m=+0.070857175 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:31:04 compute-0 podman[368308]: 2025-10-07 14:31:04.085746476 +0000 UTC m=+0.073793712 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:31:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:04 compute-0 ceph-mon[74295]: pgmap v2059: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:06 compute-0 ceph-mon[74295]: pgmap v2060: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:06 compute-0 nova_compute[259550]: 2025-10-07 14:31:06.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:07 compute-0 nova_compute[259550]: 2025-10-07 14:31:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:08 compute-0 ceph-mon[74295]: pgmap v2061: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:09 compute-0 sudo[368354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:09 compute-0 sudo[368354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:09 compute-0 sudo[368354]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:09 compute-0 sudo[368379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:31:09 compute-0 sudo[368379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:09 compute-0 sudo[368379]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:09 compute-0 sudo[368404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:09 compute-0 sudo[368404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:09 compute-0 sudo[368404]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:09 compute-0 sudo[368429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:31:09 compute-0 sudo[368429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.982 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:31:09 compute-0 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.187 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 07 14:31:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.223 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.223 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.224 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Removable base files: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.224 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.225 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.225 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 07 14:31:10 compute-0 nova_compute[259550]: 2025-10-07 14:31:10.225 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 07 14:31:10 compute-0 ceph-mon[74295]: pgmap v2062: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:10 compute-0 sudo[368429]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bc6e9d08-efb3-4e13-9bd1-79d3bab8eacd does not exist
Oct 07 14:31:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 268e7816-9da1-4970-bca0-d7eca3244db0 does not exist
Oct 07 14:31:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev af9c69df-db05-4bb6-b0c2-79a2614c67bb does not exist
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:31:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:31:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:31:10 compute-0 sudo[368487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:10 compute-0 sudo[368487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:10 compute-0 sudo[368487]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:10 compute-0 sudo[368512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:31:10 compute-0 sudo[368512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:10 compute-0 sudo[368512]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:10 compute-0 sudo[368537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:10 compute-0 sudo[368537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:10 compute-0 sudo[368537]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:10 compute-0 sudo[368562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:31:10 compute-0 sudo[368562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.050526728 +0000 UTC m=+0.044727316 container create 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:31:11 compute-0 systemd[1]: Started libpod-conmon-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope.
Oct 07 14:31:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.130619408 +0000 UTC m=+0.124820086 container init 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.034457278 +0000 UTC m=+0.028657886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.136904225 +0000 UTC m=+0.131104813 container start 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.140566723 +0000 UTC m=+0.134767331 container attach 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 07 14:31:11 compute-0 cranky_hertz[368644]: 167 167
Oct 07 14:31:11 compute-0 systemd[1]: libpod-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope: Deactivated successfully.
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.143309347 +0000 UTC m=+0.137509935 container died 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b4461583832ab0d43e90e2edf2c11e8c9971471b5e74d201063172aa037345-merged.mount: Deactivated successfully.
Oct 07 14:31:11 compute-0 podman[368628]: 2025-10-07 14:31:11.191717171 +0000 UTC m=+0.185917769 container remove 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:11 compute-0 systemd[1]: libpod-conmon-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope: Deactivated successfully.
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:31:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:31:11 compute-0 podman[368668]: 2025-10-07 14:31:11.375994994 +0000 UTC m=+0.049152464 container create 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:31:11 compute-0 systemd[1]: Started libpod-conmon-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope.
Oct 07 14:31:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:11 compute-0 podman[368668]: 2025-10-07 14:31:11.356820331 +0000 UTC m=+0.029977841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:11 compute-0 podman[368668]: 2025-10-07 14:31:11.453770482 +0000 UTC m=+0.126927962 container init 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:31:11 compute-0 podman[368668]: 2025-10-07 14:31:11.461946611 +0000 UTC m=+0.135104071 container start 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:11 compute-0 podman[368668]: 2025-10-07 14:31:11.465566317 +0000 UTC m=+0.138723807 container attach 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:31:11 compute-0 nova_compute[259550]: 2025-10-07 14:31:11.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:12 compute-0 nova_compute[259550]: 2025-10-07 14:31:12.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:12 compute-0 ceph-mon[74295]: pgmap v2063: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:12 compute-0 unruffled_vaughan[368685]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:31:12 compute-0 unruffled_vaughan[368685]: --> relative data size: 1.0
Oct 07 14:31:12 compute-0 unruffled_vaughan[368685]: --> All data devices are unavailable
Oct 07 14:31:12 compute-0 systemd[1]: libpod-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Deactivated successfully.
Oct 07 14:31:12 compute-0 systemd[1]: libpod-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Consumed 1.008s CPU time.
Oct 07 14:31:12 compute-0 podman[368714]: 2025-10-07 14:31:12.588215305 +0000 UTC m=+0.032563991 container died 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd-merged.mount: Deactivated successfully.
Oct 07 14:31:12 compute-0 podman[368714]: 2025-10-07 14:31:12.65240674 +0000 UTC m=+0.096755396 container remove 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:31:12 compute-0 systemd[1]: libpod-conmon-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Deactivated successfully.
Oct 07 14:31:12 compute-0 sudo[368562]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:12 compute-0 sudo[368729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:12 compute-0 sudo[368729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:12 compute-0 sudo[368729]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:12 compute-0 sudo[368754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:31:12 compute-0 sudo[368754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:12 compute-0 sudo[368754]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:12 compute-0 sudo[368779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:12 compute-0 sudo[368779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:12 compute-0 sudo[368779]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:12 compute-0 sudo[368804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:31:12 compute-0 sudo[368804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.300779055 +0000 UTC m=+0.035252793 container create 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:31:13 compute-0 systemd[1]: Started libpod-conmon-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope.
Oct 07 14:31:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.286447452 +0000 UTC m=+0.020921210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.382878358 +0000 UTC m=+0.117352116 container init 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.391501829 +0000 UTC m=+0.125975567 container start 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.394472719 +0000 UTC m=+0.128946637 container attach 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:31:13 compute-0 vibrant_herschel[368887]: 167 167
Oct 07 14:31:13 compute-0 systemd[1]: libpod-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope: Deactivated successfully.
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.397444048 +0000 UTC m=+0.131917816 container died 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-083c2f38c93c245c34f2b349604e237db69705cc3aad75c709ea55984f39e141-merged.mount: Deactivated successfully.
Oct 07 14:31:13 compute-0 podman[368870]: 2025-10-07 14:31:13.447130055 +0000 UTC m=+0.181603793 container remove 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:31:13 compute-0 systemd[1]: libpod-conmon-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope: Deactivated successfully.
Oct 07 14:31:13 compute-0 podman[368910]: 2025-10-07 14:31:13.675169479 +0000 UTC m=+0.054178808 container create e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:31:13 compute-0 systemd[1]: Started libpod-conmon-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope.
Oct 07 14:31:13 compute-0 podman[368910]: 2025-10-07 14:31:13.654683512 +0000 UTC m=+0.033692831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:13 compute-0 podman[368910]: 2025-10-07 14:31:13.78632996 +0000 UTC m=+0.165339319 container init e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:31:13 compute-0 podman[368910]: 2025-10-07 14:31:13.799340687 +0000 UTC m=+0.178350046 container start e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:31:13 compute-0 podman[368910]: 2025-10-07 14:31:13.80358097 +0000 UTC m=+0.182590309 container attach e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:31:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:14 compute-0 ceph-mon[74295]: pgmap v2064: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:14 compute-0 sharp_ride[368926]: {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     "0": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "devices": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "/dev/loop3"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             ],
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_name": "ceph_lv0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_size": "21470642176",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "name": "ceph_lv0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "tags": {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_name": "ceph",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.crush_device_class": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.encrypted": "0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_id": "0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.vdo": "0"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             },
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "vg_name": "ceph_vg0"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         }
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     ],
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     "1": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "devices": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "/dev/loop4"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             ],
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_name": "ceph_lv1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_size": "21470642176",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "name": "ceph_lv1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "tags": {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_name": "ceph",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.crush_device_class": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.encrypted": "0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_id": "1",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.vdo": "0"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             },
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "vg_name": "ceph_vg1"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         }
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     ],
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     "2": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "devices": [
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "/dev/loop5"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             ],
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_name": "ceph_lv2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_size": "21470642176",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "name": "ceph_lv2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "tags": {
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.cluster_name": "ceph",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.crush_device_class": "",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.encrypted": "0",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osd_id": "2",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:                 "ceph.vdo": "0"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             },
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "type": "block",
Oct 07 14:31:14 compute-0 sharp_ride[368926]:             "vg_name": "ceph_vg2"
Oct 07 14:31:14 compute-0 sharp_ride[368926]:         }
Oct 07 14:31:14 compute-0 sharp_ride[368926]:     ]
Oct 07 14:31:14 compute-0 sharp_ride[368926]: }
Oct 07 14:31:14 compute-0 systemd[1]: libpod-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope: Deactivated successfully.
Oct 07 14:31:14 compute-0 podman[368910]: 2025-10-07 14:31:14.631213785 +0000 UTC m=+1.010223114 container died e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d-merged.mount: Deactivated successfully.
Oct 07 14:31:14 compute-0 podman[368910]: 2025-10-07 14:31:14.709758813 +0000 UTC m=+1.088768152 container remove e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:31:14 compute-0 systemd[1]: libpod-conmon-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope: Deactivated successfully.
Oct 07 14:31:14 compute-0 sudo[368804]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:14 compute-0 sudo[368948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:14 compute-0 sudo[368948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:14 compute-0 sudo[368948]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:14 compute-0 sudo[368973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:31:14 compute-0 sudo[368973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:14 compute-0 sudo[368973]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:14 compute-0 sudo[368998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:14 compute-0 sudo[368998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:14 compute-0 sudo[368998]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:14 compute-0 sudo[369023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:31:14 compute-0 sudo[369023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.319248939 +0000 UTC m=+0.051806165 container create 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:31:15 compute-0 systemd[1]: Started libpod-conmon-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope.
Oct 07 14:31:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.29384613 +0000 UTC m=+0.026403426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.397776728 +0000 UTC m=+0.130334004 container init 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.404673681 +0000 UTC m=+0.137230917 container start 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.407709463 +0000 UTC m=+0.140266719 container attach 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:31:15 compute-0 gracious_einstein[369103]: 167 167
Oct 07 14:31:15 compute-0 systemd[1]: libpod-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope: Deactivated successfully.
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.410779405 +0000 UTC m=+0.143336651 container died 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccab8c5e09f03464210ae4f4fb0145ad4b952ec401bb2d8e43830e5bdd59054-merged.mount: Deactivated successfully.
Oct 07 14:31:15 compute-0 podman[369087]: 2025-10-07 14:31:15.45248384 +0000 UTC m=+0.185041086 container remove 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:31:15 compute-0 systemd[1]: libpod-conmon-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope: Deactivated successfully.
Oct 07 14:31:15 compute-0 podman[369126]: 2025-10-07 14:31:15.640476923 +0000 UTC m=+0.051761434 container create e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:31:15 compute-0 systemd[1]: Started libpod-conmon-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope.
Oct 07 14:31:15 compute-0 podman[369126]: 2025-10-07 14:31:15.615509215 +0000 UTC m=+0.026793756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:31:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:31:15 compute-0 podman[369126]: 2025-10-07 14:31:15.743116355 +0000 UTC m=+0.154400896 container init e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:31:15 compute-0 podman[369126]: 2025-10-07 14:31:15.75040761 +0000 UTC m=+0.161692111 container start e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:15 compute-0 podman[369126]: 2025-10-07 14:31:15.75452577 +0000 UTC m=+0.165810311 container attach e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:31:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:16 compute-0 ceph-mon[74295]: pgmap v2065: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]: {
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_id": 2,
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "type": "bluestore"
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     },
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_id": 1,
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "type": "bluestore"
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     },
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_id": 0,
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:         "type": "bluestore"
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]:     }
Oct 07 14:31:16 compute-0 unruffled_engelbart[369143]: }
Oct 07 14:31:16 compute-0 systemd[1]: libpod-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Deactivated successfully.
Oct 07 14:31:16 compute-0 podman[369126]: 2025-10-07 14:31:16.747385829 +0000 UTC m=+1.158670330 container died e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:31:16 compute-0 systemd[1]: libpod-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Consumed 1.004s CPU time.
Oct 07 14:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2-merged.mount: Deactivated successfully.
Oct 07 14:31:16 compute-0 podman[369126]: 2025-10-07 14:31:16.808152413 +0000 UTC m=+1.219436914 container remove e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:31:16 compute-0 systemd[1]: libpod-conmon-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Deactivated successfully.
Oct 07 14:31:16 compute-0 sudo[369023]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:31:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:31:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9d87e0a3-a92d-4ac6-b8bd-81fb4535bfbc does not exist
Oct 07 14:31:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 83fce1de-332f-4197-9389-6fd9f55bfffe does not exist
Oct 07 14:31:16 compute-0 sudo[369190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:31:16 compute-0 sudo[369190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:16 compute-0 sudo[369190]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:16 compute-0 nova_compute[259550]: 2025-10-07 14:31:16.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:17 compute-0 sudo[369215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:31:17 compute-0 sudo[369215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:31:17 compute-0 sudo[369215]: pam_unix(sudo:session): session closed for user root
Oct 07 14:31:17 compute-0 nova_compute[259550]: 2025-10-07 14:31:17.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:31:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:18 compute-0 ceph-mon[74295]: pgmap v2066: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:20 compute-0 ceph-mon[74295]: pgmap v2067: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:21 compute-0 podman[369240]: 2025-10-07 14:31:21.075753296 +0000 UTC m=+0.065746377 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:31:21 compute-0 podman[369241]: 2025-10-07 14:31:21.080155754 +0000 UTC m=+0.070540875 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:31:21 compute-0 nova_compute[259550]: 2025-10-07 14:31:21.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:22 compute-0 nova_compute[259550]: 2025-10-07 14:31:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:22 compute-0 ceph-mon[74295]: pgmap v2068: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:31:22
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Oct 07 14:31:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:31:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:22.989 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:31:22 compute-0 nova_compute[259550]: 2025-10-07 14:31:22.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:22.991 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:31:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:31:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:24 compute-0 ceph-mon[74295]: pgmap v2069: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:31:24.993 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:31:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:26 compute-0 ceph-mon[74295]: pgmap v2070: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:26 compute-0 nova_compute[259550]: 2025-10-07 14:31:26.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:27 compute-0 nova_compute[259550]: 2025-10-07 14:31:27.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:28 compute-0 ceph-mon[74295]: pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:30 compute-0 ceph-mon[74295]: pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:30 compute-0 nova_compute[259550]: 2025-10-07 14:31:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:30 compute-0 nova_compute[259550]: 2025-10-07 14:31:30.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:31:31 compute-0 nova_compute[259550]: 2025-10-07 14:31:31.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:31:31 compute-0 nova_compute[259550]: 2025-10-07 14:31:31.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:32 compute-0 nova_compute[259550]: 2025-10-07 14:31:32.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:32 compute-0 ceph-mon[74295]: pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:31:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:31:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:31:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:31:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:31:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:31:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:31:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:31:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:34 compute-0 ceph-mon[74295]: pgmap v2074: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:35 compute-0 podman[369282]: 2025-10-07 14:31:35.057117094 +0000 UTC m=+0.046925705 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:31:35 compute-0 podman[369283]: 2025-10-07 14:31:35.097825172 +0000 UTC m=+0.084844238 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:31:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:36 compute-0 ceph-mon[74295]: pgmap v2075: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:37 compute-0 nova_compute[259550]: 2025-10-07 14:31:37.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:37 compute-0 nova_compute[259550]: 2025-10-07 14:31:37.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:37 compute-0 nova_compute[259550]: 2025-10-07 14:31:37.923 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:38 compute-0 ceph-mon[74295]: pgmap v2076: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:40 compute-0 ceph-mon[74295]: pgmap v2077: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:42 compute-0 nova_compute[259550]: 2025-10-07 14:31:42.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:42 compute-0 nova_compute[259550]: 2025-10-07 14:31:42.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:42 compute-0 ceph-mon[74295]: pgmap v2078: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:43 compute-0 nova_compute[259550]: 2025-10-07 14:31:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:43 compute-0 nova_compute[259550]: 2025-10-07 14:31:43.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:43 compute-0 nova_compute[259550]: 2025-10-07 14:31:43.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:31:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:44 compute-0 ceph-mon[74295]: pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:44 compute-0 nova_compute[259550]: 2025-10-07 14:31:44.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:45 compute-0 nova_compute[259550]: 2025-10-07 14:31:44.999 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:45 compute-0 nova_compute[259550]: 2025-10-07 14:31:45.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:45 compute-0 nova_compute[259550]: 2025-10-07 14:31:45.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:31:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:46 compute-0 ceph-mon[74295]: pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:47 compute-0 nova_compute[259550]: 2025-10-07 14:31:47.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:47 compute-0 nova_compute[259550]: 2025-10-07 14:31:47.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:47 compute-0 nova_compute[259550]: 2025-10-07 14:31:47.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:48 compute-0 ceph-mon[74295]: pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:49 compute-0 nova_compute[259550]: 2025-10-07 14:31:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:50 compute-0 ceph-mon[74295]: pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:51 compute-0 nova_compute[259550]: 2025-10-07 14:31:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.032 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.034 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:31:52 compute-0 podman[369326]: 2025-10-07 14:31:52.070990309 +0000 UTC m=+0.059459069 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 07 14:31:52 compute-0 podman[369327]: 2025-10-07 14:31:52.09684065 +0000 UTC m=+0.083210344 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:52 compute-0 ceph-mon[74295]: pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:31:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656170575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.512 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:31:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.719 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.721 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3889MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.722 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.722 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:31:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.946 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:31:52 compute-0 nova_compute[259550]: 2025-10-07 14:31:52.947 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.004 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:31:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2656170575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:31:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:31:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099138458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.475 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.482 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.503 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.505 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:31:53 compute-0 nova_compute[259550]: 2025-10-07 14:31:53.505 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:31:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2099138458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:31:54 compute-0 ceph-mon[74295]: pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:54 compute-0 nova_compute[259550]: 2025-10-07 14:31:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:55 compute-0 nova_compute[259550]: 2025-10-07 14:31:55.029 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:55 compute-0 nova_compute[259550]: 2025-10-07 14:31:55.995 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:31:55 compute-0 nova_compute[259550]: 2025-10-07 14:31:55.995 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:31:55 compute-0 nova_compute[259550]: 2025-10-07 14:31:55.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:31:56 compute-0 nova_compute[259550]: 2025-10-07 14:31:56.067 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:31:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:56 compute-0 ceph-mon[74295]: pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:57 compute-0 nova_compute[259550]: 2025-10-07 14:31:57.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:57 compute-0 nova_compute[259550]: 2025-10-07 14:31:57.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:31:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:31:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:31:58 compute-0 ceph-mon[74295]: pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.065 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.066 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:00 compute-0 ceph-mon[74295]: pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:02 compute-0 nova_compute[259550]: 2025-10-07 14:32:02.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:02 compute-0 nova_compute[259550]: 2025-10-07 14:32:02.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:02 compute-0 ceph-mon[74295]: pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:02 compute-0 nova_compute[259550]: 2025-10-07 14:32:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:04 compute-0 ceph-mon[74295]: pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.686 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:f4:0e 10.100.0.2 2001:db8::f816:3eff:fe0d:f40e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe0d:f40e/64', 'neutron:device_id': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ac55d93f-5af1-4917-a7be-679169a02318) old=Port_Binding(mac=['fa:16:3e:0d:f4:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:32:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.687 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ac55d93f-5af1-4917-a7be-679169a02318 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc updated
Oct 07 14:32:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.688 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:32:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.690 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9b14ea-127b-4b33-8483-aeeca7635fa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:06 compute-0 podman[369408]: 2025-10-07 14:32:06.067857752 +0000 UTC m=+0.053958423 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:32:06 compute-0 podman[369409]: 2025-10-07 14:32:06.129168311 +0000 UTC m=+0.109174048 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:32:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:06 compute-0 ceph-mon[74295]: pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 07 14:32:07 compute-0 nova_compute[259550]: 2025-10-07 14:32:07.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:07 compute-0 nova_compute[259550]: 2025-10-07 14:32:07.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:08 compute-0 ceph-mon[74295]: pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:10 compute-0 ceph-mon[74295]: pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:12 compute-0 nova_compute[259550]: 2025-10-07 14:32:12.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:12 compute-0 nova_compute[259550]: 2025-10-07 14:32:12.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:12 compute-0 ceph-mon[74295]: pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.343 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.344 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.428 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.748 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.748 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.755 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.755 2 INFO nova.compute.claims [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:32:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:32:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 31K writes, 126K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.86 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5046 writes, 18K keys, 5046 commit groups, 1.0 writes per commit group, ingest: 17.27 MB, 0.03 MB/s
                                           Interval WAL: 5046 writes, 2084 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:32:13 compute-0 nova_compute[259550]: 2025-10-07 14:32:13.991 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:14 compute-0 ceph-mon[74295]: pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:32:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:32:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193526635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.443 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.448 2 DEBUG nova.compute.provider_tree [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.594 2 DEBUG nova.scheduler.client.report [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.687 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.688 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:32:14 compute-0 ovn_controller[151684]: 2025-10-07T14:32:14Z|01052|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.890 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.892 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:32:14 compute-0 nova_compute[259550]: 2025-10-07 14:32:14.946 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.034 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.177 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.178 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.178 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating image(s)
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.203 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.226 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.248 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.251 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1193526635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.306 2 DEBUG nova.policy [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.356 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.357 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.359 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.359 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.382 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.386 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0af0082a-1adc-40e7-b254-88e03182e802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.732 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0af0082a-1adc-40e7-b254-88e03182e802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.811 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:32:15 compute-0 nova_compute[259550]: 2025-10-07 14:32:15.917 2 DEBUG nova.objects.instance [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.005 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.006 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Ensure instance console log exists: /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct 07 14:32:16 compute-0 ceph-mon[74295]: pgmap v2095: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct 07 14:32:16 compute-0 nova_compute[259550]: 2025-10-07 14:32:16.677 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Successfully created port: 52f43128-c899-4d76-9e65-c99941c834d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:17 compute-0 sudo[369645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:17 compute-0 sudo[369645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369645]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 sudo[369670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:32:17 compute-0 sudo[369670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369670]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 sudo[369695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:17 compute-0 sudo[369695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369695]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:17 compute-0 sudo[369720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:32:17 compute-0 sudo[369720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369720]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 27b074ba-49e9-41ba-960e-83360daa37bd does not exist
Oct 07 14:32:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f5f1fc18-5be6-442a-a971-0a039c8ff29b does not exist
Oct 07 14:32:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c4425fd0-b0d4-4087-b46a-5e52d95cfd10 does not exist
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:32:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.791 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Successfully updated port: 52f43128-c899-4d76-9e65-c99941c834d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:32:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:32:17 compute-0 sudo[369777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:17 compute-0 sudo[369777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369777]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 sudo[369802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:32:17 compute-0 sudo[369802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369802]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 sudo[369827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:17 compute-0 sudo[369827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:17 compute-0 sudo[369827]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.914 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.914 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:17 compute-0 nova_compute[259550]: 2025-10-07 14:32:17.915 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:32:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:17 compute-0 sudo[369852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:32:17 compute-0 sudo[369852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:18 compute-0 nova_compute[259550]: 2025-10-07 14:32:18.072 2 DEBUG nova.compute.manager [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:18 compute-0 nova_compute[259550]: 2025-10-07 14:32:18.073 2 DEBUG nova.compute.manager [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:32:18 compute-0 nova_compute[259550]: 2025-10-07 14:32:18.073 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:32:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 138K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.79 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5197 writes, 17K keys, 5197 commit groups, 1.0 writes per commit group, ingest: 15.78 MB, 0.03 MB/s
                                           Interval WAL: 5197 writes, 2171 syncs, 2.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:32:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.311737813 +0000 UTC m=+0.046480153 container create acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:32:18 compute-0 systemd[1]: Started libpod-conmon-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope.
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.28727429 +0000 UTC m=+0.022016720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.416892364 +0000 UTC m=+0.151634724 container init acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.425922544 +0000 UTC m=+0.160664884 container start acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.430033044 +0000 UTC m=+0.164775404 container attach acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:32:18 compute-0 systemd[1]: libpod-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope: Deactivated successfully.
Oct 07 14:32:18 compute-0 ecstatic_lewin[369935]: 167 167
Oct 07 14:32:18 compute-0 conmon[369935]: conmon acc99383613f4cf1139c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope/container/memory.events
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.436224779 +0000 UTC m=+0.170967119 container died acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a4eb203dbabd22801070dd86a80d322351ad944af0e4409dd3620fb99fe0042-merged.mount: Deactivated successfully.
Oct 07 14:32:18 compute-0 podman[369919]: 2025-10-07 14:32:18.488414815 +0000 UTC m=+0.223157195 container remove acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:32:18 compute-0 nova_compute[259550]: 2025-10-07 14:32:18.503 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:32:18 compute-0 systemd[1]: libpod-conmon-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope: Deactivated successfully.
Oct 07 14:32:18 compute-0 podman[369959]: 2025-10-07 14:32:18.664389886 +0000 UTC m=+0.050101819 container create a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:32:18 compute-0 systemd[1]: Started libpod-conmon-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope.
Oct 07 14:32:18 compute-0 podman[369959]: 2025-10-07 14:32:18.64656228 +0000 UTC m=+0.032274243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:18 compute-0 podman[369959]: 2025-10-07 14:32:18.76819868 +0000 UTC m=+0.153910633 container init a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:32:18 compute-0 podman[369959]: 2025-10-07 14:32:18.7794315 +0000 UTC m=+0.165143443 container start a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:32:18 compute-0 podman[369959]: 2025-10-07 14:32:18.783986522 +0000 UTC m=+0.169698465 container attach a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:32:18 compute-0 ceph-mon[74295]: pgmap v2096: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct 07 14:32:19 compute-0 stoic_carver[369977]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:32:19 compute-0 stoic_carver[369977]: --> relative data size: 1.0
Oct 07 14:32:19 compute-0 stoic_carver[369977]: --> All data devices are unavailable
Oct 07 14:32:19 compute-0 systemd[1]: libpod-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Deactivated successfully.
Oct 07 14:32:19 compute-0 podman[369959]: 2025-10-07 14:32:19.850082628 +0000 UTC m=+1.235794561 container died a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:32:19 compute-0 systemd[1]: libpod-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Consumed 1.025s CPU time.
Oct 07 14:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f-merged.mount: Deactivated successfully.
Oct 07 14:32:19 compute-0 podman[369959]: 2025-10-07 14:32:19.906179847 +0000 UTC m=+1.291891780 container remove a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:32:19 compute-0 systemd[1]: libpod-conmon-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Deactivated successfully.
Oct 07 14:32:19 compute-0 sudo[369852]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:19 compute-0 sudo[370019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:19 compute-0 sudo[370019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:19 compute-0 sudo[370019]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:20 compute-0 sudo[370044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:32:20 compute-0 sudo[370044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:20 compute-0 sudo[370044]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:20 compute-0 sudo[370069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:20 compute-0 sudo[370069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:20 compute-0 sudo[370069]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:20 compute-0 sudo[370094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:32:20 compute-0 sudo[370094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 84 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Oct 07 14:32:20 compute-0 ceph-mon[74295]: pgmap v2097: 305 pgs: 305 active+clean; 84 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.320 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance network_info: |[{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.460 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.463 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start _get_guest_xml network_info=[{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.469 2 WARNING nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.475 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.476677012 +0000 UTC m=+0.043622427 container create 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.477 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.481 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.481 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.482 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.482 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.488 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:20 compute-0 systemd[1]: Started libpod-conmon-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope.
Oct 07 14:32:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.456847792 +0000 UTC m=+0.023793237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.566318997 +0000 UTC m=+0.133264442 container init 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.573113829 +0000 UTC m=+0.140059244 container start 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:32:20 compute-0 gallant_kare[370176]: 167 167
Oct 07 14:32:20 compute-0 systemd[1]: libpod-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope: Deactivated successfully.
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.593910623 +0000 UTC m=+0.160856038 container attach 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.594410087 +0000 UTC m=+0.161355522 container died 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff3bc5a2b6358647cbbb4fa3418bdcf456903a189b4717319d6fc7ca4c30e576-merged.mount: Deactivated successfully.
Oct 07 14:32:20 compute-0 podman[370159]: 2025-10-07 14:32:20.81006078 +0000 UTC m=+0.377006205 container remove 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:32:20 compute-0 systemd[1]: libpod-conmon-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope: Deactivated successfully.
Oct 07 14:32:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219155710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.956 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.985 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:20 compute-0 nova_compute[259550]: 2025-10-07 14:32:20.988 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:20 compute-0 podman[370221]: 2025-10-07 14:32:20.998784523 +0000 UTC m=+0.070394453 container create c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:20.952805284 +0000 UTC m=+0.024415234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:21 compute-0 systemd[1]: Started libpod-conmon-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope.
Oct 07 14:32:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:21.097559121 +0000 UTC m=+0.169169081 container init c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:21.104662241 +0000 UTC m=+0.176272171 container start c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:21.114090563 +0000 UTC m=+0.185700503 container attach c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:32:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1219155710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796852326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.426 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.430 2 DEBUG nova.virt.libvirt.vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:15Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.430 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.431 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.433 2 DEBUG nova.objects.instance [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.453 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <uuid>0af0082a-1adc-40e7-b254-88e03182e802</uuid>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <name>instance-00000069</name>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1273910432</nova:name>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:32:20</nova:creationTime>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <nova:port uuid="52f43128-c899-4d76-9e65-c99941c834d4">
Oct 07 14:32:21 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe52:178c" ipVersion="6"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <system>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="serial">0af0082a-1adc-40e7-b254-88e03182e802</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="uuid">0af0082a-1adc-40e7-b254-88e03182e802</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </system>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <os>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </os>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <features>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </features>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0af0082a-1adc-40e7-b254-88e03182e802_disk">
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0af0082a-1adc-40e7-b254-88e03182e802_disk.config">
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:21 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:52:17:8c"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <target dev="tap52f43128-c8"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/console.log" append="off"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <video>
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </video>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:32:21 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:32:21 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:32:21 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:32:21 compute-0 nova_compute[259550]: </domain>
Oct 07 14:32:21 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.454 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Preparing to wait for external event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.455 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.456 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.456 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.457 2 DEBUG nova.virt.libvirt.vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:15Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.457 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.458 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.458 2 DEBUG os_vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f43128-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f43128-c8, col_values=(('external_ids', {'iface-id': '52f43128-c899-4d76-9e65-c99941c834d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:17:8c', 'vm-uuid': '0af0082a-1adc-40e7-b254-88e03182e802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:21 compute-0 NetworkManager[44949]: <info>  [1759847541.4669] manager: (tap52f43128-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.478 2 INFO os_vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8')
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.542 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.543 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.544 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:52:17:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.545 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Using config drive
Oct 07 14:32:21 compute-0 nova_compute[259550]: 2025-10-07 14:32:21.577 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:21 compute-0 cool_benz[370258]: {
Oct 07 14:32:21 compute-0 cool_benz[370258]:     "0": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:         {
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "devices": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "/dev/loop3"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             ],
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_name": "ceph_lv0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_size": "21470642176",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "name": "ceph_lv0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "tags": {
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_name": "ceph",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.crush_device_class": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.encrypted": "0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_id": "0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.vdo": "0"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             },
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "vg_name": "ceph_vg0"
Oct 07 14:32:21 compute-0 cool_benz[370258]:         }
Oct 07 14:32:21 compute-0 cool_benz[370258]:     ],
Oct 07 14:32:21 compute-0 cool_benz[370258]:     "1": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:         {
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "devices": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "/dev/loop4"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             ],
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_name": "ceph_lv1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_size": "21470642176",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "name": "ceph_lv1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "tags": {
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_name": "ceph",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.crush_device_class": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.encrypted": "0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_id": "1",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.vdo": "0"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             },
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "vg_name": "ceph_vg1"
Oct 07 14:32:21 compute-0 cool_benz[370258]:         }
Oct 07 14:32:21 compute-0 cool_benz[370258]:     ],
Oct 07 14:32:21 compute-0 cool_benz[370258]:     "2": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:         {
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "devices": [
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "/dev/loop5"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             ],
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_name": "ceph_lv2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_size": "21470642176",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "name": "ceph_lv2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "tags": {
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.cluster_name": "ceph",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.crush_device_class": "",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.encrypted": "0",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osd_id": "2",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:                 "ceph.vdo": "0"
Oct 07 14:32:21 compute-0 cool_benz[370258]:             },
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "type": "block",
Oct 07 14:32:21 compute-0 cool_benz[370258]:             "vg_name": "ceph_vg2"
Oct 07 14:32:21 compute-0 cool_benz[370258]:         }
Oct 07 14:32:21 compute-0 cool_benz[370258]:     ]
Oct 07 14:32:21 compute-0 cool_benz[370258]: }
Oct 07 14:32:21 compute-0 systemd[1]: libpod-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope: Deactivated successfully.
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:21.89763106 +0000 UTC m=+0.969240990 container died c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16-merged.mount: Deactivated successfully.
Oct 07 14:32:21 compute-0 podman[370221]: 2025-10-07 14:32:21.988100147 +0000 UTC m=+1.059710067 container remove c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:32:21 compute-0 systemd[1]: libpod-conmon-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope: Deactivated successfully.
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.011 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating config drive at /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.016 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8af9lohh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:22 compute-0 sudo[370094]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.130 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.130 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:22 compute-0 sudo[370323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:22 compute-0 sudo[370323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:22 compute-0 sudo[370323]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.163 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.190 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8af9lohh" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:22 compute-0 sudo[370362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:32:22 compute-0 sudo[370362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:22 compute-0 sudo[370362]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:22 compute-0 podman[370349]: 2025-10-07 14:32:22.213530281 +0000 UTC m=+0.063571770 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:32:22 compute-0 podman[370350]: 2025-10-07 14:32:22.213656384 +0000 UTC m=+0.061073892 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.233 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.237 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config 0af0082a-1adc-40e7-b254-88e03182e802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:22 compute-0 sudo[370424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:22 compute-0 sudo[370424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:22 compute-0 sudo[370424]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:22 compute-0 sudo[370455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:32:22 compute-0 sudo[370455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3796852326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:22 compute-0 ceph-mon[74295]: pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:32:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:32:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 25K writes, 101K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8900 syncs, 2.89 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3566 writes, 12K keys, 3566 commit groups, 1.0 writes per commit group, ingest: 9.89 MB, 0.02 MB/s
                                           Interval WAL: 3566 writes, 1483 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.515 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config 0af0082a-1adc-40e7-b254-88e03182e802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.516 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deleting local config drive /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config because it was imported into RBD.
Oct 07 14:32:22 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 07 14:32:22 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 07 14:32:22 compute-0 kernel: tap52f43128-c8: entered promiscuous mode
Oct 07 14:32:22 compute-0 NetworkManager[44949]: <info>  [1759847542.6307] manager: (tap52f43128-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct 07 14:32:22 compute-0 ovn_controller[151684]: 2025-10-07T14:32:22Z|01053|binding|INFO|Claiming lport 52f43128-c899-4d76-9e65-c99941c834d4 for this chassis.
Oct 07 14:32:22 compute-0 ovn_controller[151684]: 2025-10-07T14:32:22Z|01054|binding|INFO|52f43128-c899-4d76-9e65-c99941c834d4: Claiming fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 systemd-machined[214580]: New machine qemu-132-instance-00000069.
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.669 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc bound to our chassis
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:22 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1688fbf1-b3e0-4f5e-a71d-a43e42659ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.689 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1da6903e-11 in ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:32:22
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta']
Oct 07 14:32:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.694 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1da6903e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f8abd-2b3a-4644-9fa6-ccdb8388341d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[462bd3e3-33a8-419e-9f07-7db3fb561cdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 systemd-udevd[370580]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.707 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc1af53-1687-46f9-ba90-917598195487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 NetworkManager[44949]: <info>  [1759847542.7103] device (tap52f43128-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:32:22 compute-0 ovn_controller[151684]: 2025-10-07T14:32:22Z|01055|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 ovn-installed in OVS
Oct 07 14:32:22 compute-0 ovn_controller[151684]: 2025-10-07T14:32:22Z|01056|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 up in Southbound
Oct 07 14:32:22 compute-0 NetworkManager[44949]: <info>  [1759847542.7131] device (tap52f43128-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:32:22 compute-0 nova_compute[259550]: 2025-10-07 14:32:22.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefb430-81dc-429d-ac32-aa1bf02ec672]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.657397001 +0000 UTC m=+0.025619556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc99483d-cd24-4161-9c3f-8c69974dc24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[942d2479-d543-42a5-92d9-38af5b82d68b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 systemd-udevd[370583]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:32:22 compute-0 NetworkManager[44949]: <info>  [1759847542.7783] manager: (tap1da6903e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.797316239 +0000 UTC m=+0.165538794 container create 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.814 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0feee2-3e4d-48b5-8dd7-1a4480f70f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93e7447c-8f8e-46c2-8597-9c46fd06ad93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 NetworkManager[44949]: <info>  [1759847542.8362] device (tap1da6903e-10): carrier: link connected
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.842 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb05c1dd-a3bc-4adf-b65e-2d9e46fc9689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 systemd[1]: Started libpod-conmon-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope.
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.863 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d49f1e0-fc5a-44d9-a21b-60cad97a85b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370614, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.878 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07ac826d-0173-48f7-be27-14898741835c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:f40e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812640, 'tstamp': 812640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370617, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.895 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe24f0f8-5f8e-4905-9966-60b0a9a23d4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370619, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.938 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75302ff5-6807-4074-b064-c794f4154f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.950169723 +0000 UTC m=+0.318392308 container init 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.957361466 +0000 UTC m=+0.325584021 container start 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:32:22 compute-0 zen_perlman[370615]: 167 167
Oct 07 14:32:22 compute-0 systemd[1]: libpod-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope: Deactivated successfully.
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.964853296 +0000 UTC m=+0.333075841 container attach 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:32:22 compute-0 podman[370559]: 2025-10-07 14:32:22.965268507 +0000 UTC m=+0.333491062 container died 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9713c7d2-596f-412b-b4b6-32986cc1baa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.011 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:23 compute-0 NetworkManager[44949]: <info>  [1759847543.0136] manager: (tap1da6903e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 07 14:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6985ba1244852fc3e900502d24bff83ad71a98f6712c748b5c8fd76b72ac6803-merged.mount: Deactivated successfully.
Oct 07 14:32:23 compute-0 kernel: tap1da6903e-10: entered promiscuous mode
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.019 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:23 compute-0 ovn_controller[151684]: 2025-10-07T14:32:23Z|01057|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.022 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:32:23 compute-0 podman[370559]: 2025-10-07 14:32:23.023897424 +0000 UTC m=+0.392119979 container remove 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b74c2a1d-9b9c-4066-a9d9-b2b3f310ff78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.026 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:32:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.027 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'env', 'PROCESS_TAG=haproxy-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:23 compute-0 systemd[1]: libpod-conmon-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope: Deactivated successfully.
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:32:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:32:23 compute-0 podman[370649]: 2025-10-07 14:32:23.235007435 +0000 UTC m=+0.052408012 container create a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:32:23 compute-0 systemd[1]: Started libpod-conmon-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope.
Oct 07 14:32:23 compute-0 podman[370649]: 2025-10-07 14:32:23.209236856 +0000 UTC m=+0.026637453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:32:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:23 compute-0 podman[370649]: 2025-10-07 14:32:23.340199766 +0000 UTC m=+0.157600373 container init a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:32:23 compute-0 podman[370649]: 2025-10-07 14:32:23.351525458 +0000 UTC m=+0.168926035 container start a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:32:23 compute-0 podman[370649]: 2025-10-07 14:32:23.355853824 +0000 UTC m=+0.173254401 container attach a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:32:23 compute-0 podman[370723]: 2025-10-07 14:32:23.41935581 +0000 UTC m=+0.056937871 container create 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:32:23 compute-0 systemd[1]: Started libpod-conmon-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope.
Oct 07 14:32:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:23 compute-0 podman[370723]: 2025-10-07 14:32:23.391711012 +0000 UTC m=+0.029293103 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23988a13804a7caa057b6ca3196289d2a8449fff2e969f7ade775e8fde0162/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:23 compute-0 podman[370723]: 2025-10-07 14:32:23.501340301 +0000 UTC m=+0.138922362 container init 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:32:23 compute-0 podman[370723]: 2025-10-07 14:32:23.508476272 +0000 UTC m=+0.146058333 container start 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:32:23 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : New worker (370755) forked
Oct 07 14:32:23 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : Loading success.
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.782 2 DEBUG nova.compute.manager [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.784 2 DEBUG nova.compute.manager [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Processing event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.863 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.8633685, 0af0082a-1adc-40e7-b254-88e03182e802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.864 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Started (Lifecycle Event)
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.866 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.869 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.873 2 INFO nova.virt.libvirt.driver [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance spawned successfully.
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.874 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.881 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.885 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.892 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.892 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.894 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.902 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.902 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.8636093, 0af0082a-1adc-40e7-b254-88e03182e802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.903 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Paused (Lifecycle Event)
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.927 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.868457, 0af0082a-1adc-40e7-b254-88e03182e802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.927 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Resumed (Lifecycle Event)
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.953 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.957 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.961 2 INFO nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 8.78 seconds to spawn the instance on the hypervisor.
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.961 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:23 compute-0 nova_compute[259550]: 2025-10-07 14:32:23.976 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:32:24 compute-0 nova_compute[259550]: 2025-10-07 14:32:24.020 2 INFO nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 10.47 seconds to build instance.
Oct 07 14:32:24 compute-0 nova_compute[259550]: 2025-10-07 14:32:24.035 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 14:32:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 07 14:32:24 compute-0 ceph-mon[74295]: pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 07 14:32:24 compute-0 goofy_payne[370671]: {
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_id": 2,
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "type": "bluestore"
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     },
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_id": 1,
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "type": "bluestore"
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     },
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_id": 0,
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:32:24 compute-0 goofy_payne[370671]:         "type": "bluestore"
Oct 07 14:32:24 compute-0 goofy_payne[370671]:     }
Oct 07 14:32:24 compute-0 goofy_payne[370671]: }
Oct 07 14:32:24 compute-0 systemd[1]: libpod-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Deactivated successfully.
Oct 07 14:32:24 compute-0 systemd[1]: libpod-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Consumed 1.038s CPU time.
Oct 07 14:32:24 compute-0 podman[370792]: 2025-10-07 14:32:24.450210436 +0000 UTC m=+0.022964785 container died a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6-merged.mount: Deactivated successfully.
Oct 07 14:32:24 compute-0 podman[370792]: 2025-10-07 14:32:24.504801065 +0000 UTC m=+0.077555404 container remove a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:32:24 compute-0 systemd[1]: libpod-conmon-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Deactivated successfully.
Oct 07 14:32:24 compute-0 sudo[370455]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:32:24 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:32:24 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:24 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a1b17141-a7fe-41c2-b9ff-ed98ab9cf51e does not exist
Oct 07 14:32:24 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 72b4d39f-516c-445c-8e0b-7d775f77b442 does not exist
Oct 07 14:32:24 compute-0 sudo[370807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:32:24 compute-0 sudo[370807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:24 compute-0 sudo[370807]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:24 compute-0 sudo[370832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:32:24 compute-0 sudo[370832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:32:24 compute-0 sudo[370832]: pam_unix(sudo:session): session closed for user root
Oct 07 14:32:25 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:25 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.202 2 DEBUG nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.203 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.203 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.204 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.204 2 DEBUG nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] No waiting events found dispatching network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.204 2 WARNING nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received unexpected event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 for instance with vm_state active and task_state None.
Oct 07 14:32:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 07 14:32:26 compute-0 nova_compute[259550]: 2025-10-07 14:32:26.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:26 compute-0 ceph-mon[74295]: pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 07 14:32:27 compute-0 nova_compute[259550]: 2025-10-07 14:32:27.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Oct 07 14:32:28 compute-0 ceph-mon[74295]: pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:29 compute-0 NetworkManager[44949]: <info>  [1759847549.1786] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct 07 14:32:29 compute-0 NetworkManager[44949]: <info>  [1759847549.1797] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:29 compute-0 ovn_controller[151684]: 2025-10-07T14:32:29Z|01058|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:29.391 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.955 2 DEBUG nova.compute.manager [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG nova.compute.manager [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:29 compute-0 nova_compute[259550]: 2025-10-07 14:32:29.957 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:32:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 89 op/s
Oct 07 14:32:30 compute-0 ceph-mon[74295]: pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 89 op/s
Oct 07 14:32:31 compute-0 nova_compute[259550]: 2025-10-07 14:32:31.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:31 compute-0 nova_compute[259550]: 2025-10-07 14:32:31.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:32 compute-0 nova_compute[259550]: 2025-10-07 14:32:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 354 KiB/s wr, 75 op/s
Oct 07 14:32:32 compute-0 ceph-mon[74295]: pgmap v2103: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 354 KiB/s wr, 75 op/s
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:32:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:32:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:32:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:32:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:32:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:32:32 compute-0 nova_compute[259550]: 2025-10-07 14:32:32.709 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:32:32 compute-0 nova_compute[259550]: 2025-10-07 14:32:32.709 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:32:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:32:33 compute-0 nova_compute[259550]: 2025-10-07 14:32:33.610 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:32:34 compute-0 ceph-mon[74295]: pgmap v2104: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:32:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 75 op/s
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.249 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.249 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:36 compute-0 ceph-mon[74295]: pgmap v2105: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 75 op/s
Oct 07 14:32:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:36.393 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.597 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.900 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.901 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.911 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:32:36 compute-0 nova_compute[259550]: 2025-10-07 14:32:36.912 2 INFO nova.compute.claims [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.067 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:37 compute-0 podman[370860]: 2025-10-07 14:32:37.124379984 +0000 UTC m=+0.091152767 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:37 compute-0 podman[370861]: 2025-10-07 14:32:37.15043767 +0000 UTC m=+0.124264091 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:32:37 compute-0 ovn_controller[151684]: 2025-10-07T14:32:37Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:17:8c 10.100.0.13
Oct 07 14:32:37 compute-0 ovn_controller[151684]: 2025-10-07T14:32:37Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:17:8c 10.100.0.13
Oct 07 14:32:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:32:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687542214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.529 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.534 2 DEBUG nova.compute.provider_tree [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.550 2 DEBUG nova.scheduler.client.report [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:32:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/687542214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.570 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.571 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.644 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.645 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.667 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.686 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.774 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.776 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.777 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating image(s)
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.797 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.816 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.841 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.845 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.935 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.936 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.937 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.937 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.961 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:37 compute-0 nova_compute[259550]: 2025-10-07 14:32:37.966 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 513 KiB/s wr, 62 op/s
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.433 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.485 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:32:38 compute-0 ceph-mon[74295]: pgmap v2106: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 513 KiB/s wr, 62 op/s
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.578 2 DEBUG nova.objects.instance [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.582 2 DEBUG nova.policy [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.598 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.598 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Ensure instance console log exists: /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:38 compute-0 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 144 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct 07 14:32:40 compute-0 ceph-mon[74295]: pgmap v2107: 305 pgs: 305 active+clean; 144 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct 07 14:32:41 compute-0 nova_compute[259550]: 2025-10-07 14:32:41.242 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Successfully created port: 70d147cb-a82e-4f80-88cf-016ec19f5a2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:32:41 compute-0 nova_compute[259550]: 2025-10-07 14:32:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:42 compute-0 nova_compute[259550]: 2025-10-07 14:32:42.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:42 compute-0 nova_compute[259550]: 2025-10-07 14:32:42.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct 07 14:32:42 compute-0 ceph-mon[74295]: pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct 07 14:32:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.323 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Successfully updated port: 70d147cb-a82e-4f80-88cf-016ec19f5a2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.344 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.345 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.345 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG nova.compute.manager [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG nova.compute.manager [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:43 compute-0 nova_compute[259550]: 2025-10-07 14:32:43.581 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:32:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 07 14:32:44 compute-0 ceph-mon[74295]: pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 07 14:32:44 compute-0 nova_compute[259550]: 2025-10-07 14:32:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:44 compute-0 nova_compute[259550]: 2025-10-07 14:32:44.996 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.246 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.246 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance network_info: |[{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.247 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.247 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.250 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start _get_guest_xml network_info=[{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.256 2 WARNING nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.261 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.262 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.265 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.271 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813904550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.709 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.731 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.734 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1813904550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:45 compute-0 nova_compute[259550]: 2025-10-07 14:32:45.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:32:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281085148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.175 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.178 2 DEBUG nova.virt.libvirt.vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:37Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.178 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.179 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.181 2 DEBUG nova.objects.instance [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.253 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <uuid>08184d43-2bd1-4f46-9bdf-63437c87a8ad</uuid>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <name>instance-0000006a</name>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1785477500</nova:name>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:32:45</nova:creationTime>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <nova:port uuid="70d147cb-a82e-4f80-88cf-016ec19f5a2c">
Oct 07 14:32:46 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <system>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="serial">08184d43-2bd1-4f46-9bdf-63437c87a8ad</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="uuid">08184d43-2bd1-4f46-9bdf-63437c87a8ad</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </system>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <os>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </os>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <features>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </features>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk">
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config">
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:46 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:96:b2:f9"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <target dev="tap70d147cb-a8"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/console.log" append="off"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <video>
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </video>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:32:46 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:32:46 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:32:46 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:32:46 compute-0 nova_compute[259550]: </domain>
Oct 07 14:32:46 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.254 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Preparing to wait for external event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.254 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.255 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.256 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.257 2 DEBUG nova.virt.libvirt.vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:37Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.257 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.258 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.258 2 DEBUG os_vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70d147cb-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70d147cb-a8, col_values=(('external_ids', {'iface-id': '70d147cb-a82e-4f80-88cf-016ec19f5a2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:b2:f9', 'vm-uuid': '08184d43-2bd1-4f46-9bdf-63437c87a8ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:46 compute-0 NetworkManager[44949]: <info>  [1759847566.2677] manager: (tap70d147cb-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.279 2 INFO os_vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8')
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.404 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.404 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.405 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:96:b2:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.405 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Using config drive
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.429 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.707 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating config drive at /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.712 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9p9vm03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4281085148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:46 compute-0 ceph-mon[74295]: pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.861 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9p9vm03" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.890 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:46 compute-0 nova_compute[259550]: 2025-10-07 14:32:46.894 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.063 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.064 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deleting local config drive /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config because it was imported into RBD.
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.1054] manager: (tap70d147cb-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Oct 07 14:32:47 compute-0 kernel: tap70d147cb-a8: entered promiscuous mode
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 ovn_controller[151684]: 2025-10-07T14:32:47Z|01059|binding|INFO|Claiming lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c for this chassis.
Oct 07 14:32:47 compute-0 ovn_controller[151684]: 2025-10-07T14:32:47Z|01060|binding|INFO|70d147cb-a82e-4f80-88cf-016ec19f5a2c: Claiming fa:16:3e:96:b2:f9 10.100.0.3
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.116 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b2:f9 10.100.0.3'], port_security=['fa:16:3e:96:b2:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08184d43-2bd1-4f46-9bdf-63437c87a8ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48fde043-e83e-45b7-ab13-37b209fd562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8bbba0f-6faa-43dc-9c06-619017376051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea6d7f11-49ef-4d9e-9bd1-b8e683eae537, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=70d147cb-a82e-4f80-88cf-016ec19f5a2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 70d147cb-a82e-4f80-88cf-016ec19f5a2c in datapath 48fde043-e83e-45b7-ab13-37b209fd562c bound to our chassis
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.120 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48fde043-e83e-45b7-ab13-37b209fd562c
Oct 07 14:32:47 compute-0 ovn_controller[151684]: 2025-10-07T14:32:47Z|01061|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c ovn-installed in OVS
Oct 07 14:32:47 compute-0 ovn_controller[151684]: 2025-10-07T14:32:47Z|01062|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c up in Southbound
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d12dff6-116d-455c-9c4e-fb3d88d075ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.134 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48fde043-e1 in ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.136 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48fde043-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.136 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77bba6db-7741-4450-bd50-ac3910f6c4fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.137 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfbcb86-7b79-4afb-a867-fbf253bf6649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 systemd-udevd[371229]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.149 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6dddabd4-926f-43f0-9f33-7688b96cc1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 systemd-machined[214580]: New machine qemu-133-instance-0000006a.
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.1580] device (tap70d147cb-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.1595] device (tap70d147cb-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:32:47 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.173 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3af77505-2ac1-499f-b03a-acc7103f230a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.205 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94967e97-0dca-4da6-b004-9395049a2877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 systemd-udevd[371232]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be912a1f-4153-46bc-b260-8eb50757d58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.2131] manager: (tap48fde043-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[332538bb-f9c5-4da7-a4a2-d00110c9f748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.247 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe670b-e948-4fe6-bebc-db7191013f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.2721] device (tap48fde043-e0): carrier: link connected
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.278 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a645a7-5637-4a8e-a072-758903fae7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e09228e7-b00e-40cf-b673-1ca4fa1f5eb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48fde043-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815083, 'reachable_time': 32833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371260, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.319 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[602d855f-d783-4f60-980e-2ec23767839c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:adca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815083, 'tstamp': 815083}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371261, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.321 2 DEBUG nova.compute.manager [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.323 2 DEBUG nova.compute.manager [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Processing event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2d88bf-61d7-4328-9e54-9606435cfe42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48fde043-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815083, 'reachable_time': 32833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371262, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.375 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88b13b32-ad09-4781-868f-962ee6243d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.438 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f444a7e-1423-411a-a769-90e42003aaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.439 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48fde043-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.439 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.440 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48fde043-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:47 compute-0 NetworkManager[44949]: <info>  [1759847567.4426] manager: (tap48fde043-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct 07 14:32:47 compute-0 kernel: tap48fde043-e0: entered promiscuous mode
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48fde043-e0, col_values=(('external_ids', {'iface-id': 'd62984c1-9132-4390-b975-fecc00942743'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:47 compute-0 ovn_controller[151684]: 2025-10-07T14:32:47Z|01063|binding|INFO|Releasing lport d62984c1-9132-4390-b975-fecc00942743 from this chassis (sb_readonly=0)
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 nova_compute[259550]: 2025-10-07 14:32:47.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.462 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a819e4c4-665f-4004-8785-01adb05646b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.464 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-48fde043-e83e-45b7-ab13-37b209fd562c
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 48fde043-e83e-45b7-ab13-37b209fd562c
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:32:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.464 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'env', 'PROCESS_TAG=haproxy-48fde043-e83e-45b7-ab13-37b209fd562c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48fde043-e83e-45b7-ab13-37b209fd562c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:32:47 compute-0 podman[371336]: 2025-10-07 14:32:47.819862312 +0000 UTC m=+0.052415182 container create 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:32:47 compute-0 systemd[1]: Started libpod-conmon-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope.
Oct 07 14:32:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:32:47 compute-0 podman[371336]: 2025-10-07 14:32:47.791297938 +0000 UTC m=+0.023850838 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc61ebfbc859062541ed2a2564be14d195084d66aefaf18d107a00c5495bc9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:32:47 compute-0 podman[371336]: 2025-10-07 14:32:47.901877383 +0000 UTC m=+0.134430273 container init 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:32:47 compute-0 podman[371336]: 2025-10-07 14:32:47.914183762 +0000 UTC m=+0.146736632 container start 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:32:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:47 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : New worker (371357) forked
Oct 07 14:32:47 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : Loading success.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.013 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0130007, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.013 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Started (Lifecycle Event)
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.016 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.020 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.026 2 INFO nova.virt.libvirt.driver [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance spawned successfully.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.026 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.036 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.042 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.058 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.058 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.059 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.059 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.060 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.061 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.066 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.067 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0131211, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.067 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Paused (Lifecycle Event)
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.091 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.096 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0202394, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.097 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Resumed (Lifecycle Event)
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.115 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.119 2 INFO nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 10.34 seconds to spawn the instance on the hypervisor.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.120 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.121 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.147 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.180 2 INFO nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 11.51 seconds to build instance.
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.194 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.4 MiB/s wr, 79 op/s
Oct 07 14:32:48 compute-0 ceph-mon[74295]: pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.4 MiB/s wr, 79 op/s
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.524 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.525 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.541 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:48 compute-0 nova_compute[259550]: 2025-10-07 14:32:48.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.418 2 DEBUG nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.420 2 DEBUG nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.420 2 WARNING nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received unexpected event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with vm_state active and task_state None.
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.635 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.635 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.667 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.743 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.743 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.751 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.751 2 INFO nova.compute.claims [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:32:49 compute-0 nova_compute[259550]: 2025-10-07 14:32:49.902 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 3.4 MiB/s wr, 90 op/s
Oct 07 14:32:50 compute-0 ceph-mon[74295]: pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 3.4 MiB/s wr, 90 op/s
Oct 07 14:32:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:32:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104398649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.415 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.421 2 DEBUG nova.compute.provider_tree [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.441 2 DEBUG nova.scheduler.client.report [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.478 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.479 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.555 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.556 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.582 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.605 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.732 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.735 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.736 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating image(s)
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.761 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.785 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.809 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.814 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.898 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.899 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.900 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.900 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.924 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.928 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3fa3175-2379-4c66-9d83-0a37f5559db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.973 2 DEBUG nova.policy [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:32:50 compute-0 nova_compute[259550]: 2025-10-07 14:32:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.244 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3fa3175-2379-4c66-9d83-0a37f5559db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4104398649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.324 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.427 2 DEBUG nova.objects.instance [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.513 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Ensure instance console log exists: /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.514 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.515 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:51 compute-0 nova_compute[259550]: 2025-10-07 14:32:51.515 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:52 compute-0 nova_compute[259550]: 2025-10-07 14:32:52.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 769 KiB/s wr, 61 op/s
Oct 07 14:32:52 compute-0 ceph-mon[74295]: pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 769 KiB/s wr, 61 op/s
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:32:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:32:52 compute-0 nova_compute[259550]: 2025-10-07 14:32:52.811 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Successfully created port: c6894b29-6b20-445c-991a-9aefefb3823c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:32:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:53 compute-0 podman[371554]: 2025-10-07 14:32:53.078967217 +0000 UTC m=+0.067989118 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 07 14:32:53 compute-0 podman[371555]: 2025-10-07 14:32:53.084193796 +0000 UTC m=+0.068342306 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:32:53 compute-0 nova_compute[259550]: 2025-10-07 14:32:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.143 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.144 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.145 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.146 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 208 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Oct 07 14:32:54 compute-0 ceph-mon[74295]: pgmap v2114: 305 pgs: 305 active+clean; 208 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Oct 07 14:32:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:32:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279964445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.631 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.719 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.720 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.724 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.724 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.896 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.897 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.90550231933594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.898 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.898 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.987 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 0af0082a-1adc-40e7-b254-88e03182e802 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 08184d43-2bd1-4f46-9bdf-63437c87a8ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d3fa3175-2379-4c66-9d83-0a37f5559db8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:32:54 compute-0 nova_compute[259550]: 2025-10-07 14:32:54.989 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.061 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1279964445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:32:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293899815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.504 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.521 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.547 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.547 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.633 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Successfully updated port: c6894b29-6b20-445c-991a-9aefefb3823c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.653 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.654 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.654 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.705 2 DEBUG nova.compute.manager [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.705 2 DEBUG nova.compute.manager [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:32:55 compute-0 nova_compute[259550]: 2025-10-07 14:32:55.990 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:32:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:32:56 compute-0 nova_compute[259550]: 2025-10-07 14:32:56.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1293899815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:32:56 compute-0 ceph-mon[74295]: pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:32:57 compute-0 nova_compute[259550]: 2025-10-07 14:32:57.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:57 compute-0 nova_compute[259550]: 2025-10-07 14:32:57.858 2 DEBUG nova.compute.manager [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:32:57 compute-0 nova_compute[259550]: 2025-10-07 14:32:57.859 2 DEBUG nova.compute.manager [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:32:57 compute-0 nova_compute[259550]: 2025-10-07 14:32:57.859 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:32:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.011 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.019 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.020 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.059 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.059 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance network_info: |[{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.060 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.060 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.064 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start _get_guest_xml network_info=[{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.070 2 WARNING nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.077 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.077 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.083 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.085 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.085 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.088 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.088 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.091 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.134 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:32:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:32:58 compute-0 ceph-mon[74295]: pgmap v2116: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669948632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.550 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.570 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:58 compute-0 nova_compute[259550]: 2025-10-07 14:32:58.573 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:32:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:32:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786880835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.039 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.042 2 DEBUG nova.virt.libvirt.vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:50Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.042 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.043 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.045 2 DEBUG nova.objects.instance [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.061 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <uuid>d3fa3175-2379-4c66-9d83-0a37f5559db8</uuid>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <name>instance-0000006b</name>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1497631187</nova:name>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:32:58</nova:creationTime>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <nova:port uuid="c6894b29-6b20-445c-991a-9aefefb3823c">
Oct 07 14:32:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:d160" ipVersion="6"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <system>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="serial">d3fa3175-2379-4c66-9d83-0a37f5559db8</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="uuid">d3fa3175-2379-4c66-9d83-0a37f5559db8</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </system>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <os>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </os>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <features>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </features>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3fa3175-2379-4c66-9d83-0a37f5559db8_disk">
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config">
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:32:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:28:d1:60"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <target dev="tapc6894b29-6b"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/console.log" append="off"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <video>
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </video>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:32:59 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:32:59 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:32:59 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:32:59 compute-0 nova_compute[259550]: </domain>
Oct 07 14:32:59 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.063 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Preparing to wait for external event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.063 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.064 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.064 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.065 2 DEBUG nova.virt.libvirt.vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:50Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.065 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.066 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.066 2 DEBUG os_vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6894b29-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.071 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6894b29-6b, col_values=(('external_ids', {'iface-id': 'c6894b29-6b20-445c-991a-9aefefb3823c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:d1:60', 'vm-uuid': 'd3fa3175-2379-4c66-9d83-0a37f5559db8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:59 compute-0 NetworkManager[44949]: <info>  [1759847579.0738] manager: (tapc6894b29-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.082 2 INFO os_vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b')
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:28:d1:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.210 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Using config drive
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.231 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:32:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2669948632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/786880835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.547 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.548 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.549 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:32:59 compute-0 nova_compute[259550]: 2025-10-07 14:32:59.580 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.065 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.066 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.140 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.160 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating config drive at /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.166 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcemc4rjp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:b2:f9 10.100.0.3
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:b2:f9 10.100.0.3
Oct 07 14:33:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.312 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcemc4rjp" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:00 compute-0 ceph-mon[74295]: pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.349 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.353 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.563 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.564 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deleting local config drive /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config because it was imported into RBD.
Oct 07 14:33:00 compute-0 kernel: tapc6894b29-6b: entered promiscuous mode
Oct 07 14:33:00 compute-0 NetworkManager[44949]: <info>  [1759847580.6142] manager: (tapc6894b29-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|01064|binding|INFO|Claiming lport c6894b29-6b20-445c-991a-9aefefb3823c for this chassis.
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|01065|binding|INFO|c6894b29-6b20-445c-991a-9aefefb3823c: Claiming fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.674 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], port_security=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe28:d160/64', 'neutron:device_id': 'd3fa3175-2379-4c66-9d83-0a37f5559db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c6894b29-6b20-445c-991a-9aefefb3823c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.675 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c6894b29-6b20-445c-991a-9aefefb3823c in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc bound to our chassis
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.677 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|01066|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c ovn-installed in OVS
Oct 07 14:33:00 compute-0 ovn_controller[151684]: 2025-10-07T14:33:00Z|01067|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c up in Southbound
Oct 07 14:33:00 compute-0 systemd-udevd[371773]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:00 compute-0 systemd-machined[214580]: New machine qemu-134-instance-0000006b.
Oct 07 14:33:00 compute-0 NetworkManager[44949]: <info>  [1759847580.6978] device (tapc6894b29-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:33:00 compute-0 NetworkManager[44949]: <info>  [1759847580.6987] device (tapc6894b29-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.696 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e466682-5cb2-4a65-a152-0b6dcb534558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.731 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab075bfc-1edc-4719-82ea-8f5b5ac4eccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.734 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[16781a8f-b458-4273-8d3b-595960ac9f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0637ea-c929-43a5-a7f8-78d27479b6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c580558f-feda-4fc8-8947-f110fea1d763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371787, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.803 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccbb464-cd7e-4bb2-be65-62c36c687901]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812653, 'tstamp': 812653}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371789, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812656, 'tstamp': 812656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371789, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.811 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.811 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.884 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.885 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:00 compute-0 nova_compute[259550]: 2025-10-07 14:33:00.905 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.550 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847581.5496955, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.550 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Started (Lifecycle Event)
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.570 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847581.5523415, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.570 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Paused (Lifecycle Event)
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.587 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.591 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:33:01 compute-0 nova_compute[259550]: 2025-10-07 14:33:01.606 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:33:02 compute-0 nova_compute[259550]: 2025-10-07 14:33:02.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 102 op/s
Oct 07 14:33:02 compute-0 ceph-mon[74295]: pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 102 op/s
Oct 07 14:33:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:03 compute-0 nova_compute[259550]: 2025-10-07 14:33:03.481 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:03 compute-0 nova_compute[259550]: 2025-10-07 14:33:03.639 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:03 compute-0 nova_compute[259550]: 2025-10-07 14:33:03.639 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:33:03 compute-0 nova_compute[259550]: 2025-10-07 14:33:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:04 compute-0 nova_compute[259550]: 2025-10-07 14:33:04.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 244 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 07 14:33:04 compute-0 ceph-mon[74295]: pgmap v2119: 305 pgs: 305 active+clean; 244 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.022 2 DEBUG nova.compute.manager [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG nova.compute.manager [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Processing event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.024 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.035 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847585.0282066, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.035 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Resumed (Lifecycle Event)
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.036 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.041 2 INFO nova.virt.libvirt.driver [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance spawned successfully.
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.042 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.168 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.175 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.179 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.180 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.180 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.181 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.182 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.182 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.318 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.410 2 INFO nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 14.68 seconds to spawn the instance on the hypervisor.
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.411 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:05 compute-0 nova_compute[259550]: 2025-10-07 14:33:05.724 2 INFO nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 16.01 seconds to build instance.
Oct 07 14:33:06 compute-0 nova_compute[259550]: 2025-10-07 14:33:06.027 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct 07 14:33:06 compute-0 ceph-mon[74295]: pgmap v2120: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.112 2 DEBUG nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.114 2 DEBUG nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.114 2 WARNING nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received unexpected event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with vm_state active and task_state None.
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.662 2 INFO nova.compute.manager [None req-d667a55d-499b-4883-9e20-ba05c4f28b3f 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.667 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:33:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.956 2 INFO nova.compute.manager [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Pausing
Oct 07 14:33:07 compute-0 nova_compute[259550]: 2025-10-07 14:33:07.957 2 DEBUG nova.objects.instance [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:08 compute-0 nova_compute[259550]: 2025-10-07 14:33:08.011 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847588.0107796, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:08 compute-0 nova_compute[259550]: 2025-10-07 14:33:08.011 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Paused (Lifecycle Event)
Oct 07 14:33:08 compute-0 nova_compute[259550]: 2025-10-07 14:33:08.013 2 DEBUG nova.compute.manager [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:08 compute-0 nova_compute[259550]: 2025-10-07 14:33:08.069 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:08 compute-0 nova_compute[259550]: 2025-10-07 14:33:08.074 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:33:08 compute-0 podman[371832]: 2025-10-07 14:33:08.082770526 +0000 UTC m=+0.057400114 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 14:33:08 compute-0 podman[371833]: 2025-10-07 14:33:08.113181518 +0000 UTC m=+0.091310561 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:33:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:33:08 compute-0 ceph-mon[74295]: pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:33:09 compute-0 nova_compute[259550]: 2025-10-07 14:33:09.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 07 14:33:10 compute-0 ceph-mon[74295]: pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 07 14:33:11 compute-0 nova_compute[259550]: 2025-10-07 14:33:11.945 2 DEBUG nova.compute.manager [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:11 compute-0 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG nova.compute.manager [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:11 compute-0 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:11 compute-0 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:11 compute-0 nova_compute[259550]: 2025-10-07 14:33:11.947 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:33:12 compute-0 ceph-mon[74295]: pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.437 2 INFO nova.compute.manager [None req-2aed1fcd-2b59-495a-9362-73b8982ac1d4 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.443 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.766 2 INFO nova.compute.manager [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Unpausing
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.767 2 DEBUG nova.objects.instance [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.813 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847592.8133488, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.814 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Resumed (Lifecycle Event)
Oct 07 14:33:12 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.819 2 DEBUG nova.virt.libvirt.guest [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.819 2 DEBUG nova.compute.manager [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.863 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:12 compute-0 nova_compute[259550]: 2025-10-07 14:33:12.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:33:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:14 compute-0 nova_compute[259550]: 2025-10-07 14:33:14.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 939 KiB/s wr, 124 op/s
Oct 07 14:33:14 compute-0 ceph-mon[74295]: pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 939 KiB/s wr, 124 op/s
Oct 07 14:33:15 compute-0 nova_compute[259550]: 2025-10-07 14:33:15.503 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:15 compute-0 nova_compute[259550]: 2025-10-07 14:33:15.503 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:15 compute-0 nova_compute[259550]: 2025-10-07 14:33:15.547 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:15 compute-0 nova_compute[259550]: 2025-10-07 14:33:15.708 2 INFO nova.compute.manager [None req-c63ae1d1-53b8-4abc-b22e-defa775315c6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output
Oct 07 14:33:15 compute-0 nova_compute[259550]: 2025-10-07 14:33:15.714 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:33:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 67 op/s
Oct 07 14:33:16 compute-0 ovn_controller[151684]: 2025-10-07T14:33:16Z|01068|binding|INFO|Releasing lport d62984c1-9132-4390-b975-fecc00942743 from this chassis (sb_readonly=0)
Oct 07 14:33:16 compute-0 ovn_controller[151684]: 2025-10-07T14:33:16Z|01069|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG nova.compute.manager [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG nova.compute.manager [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.577 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.577 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.621 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.621 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.624 2 INFO nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Terminating instance
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.625 2 DEBUG nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:33:16 compute-0 kernel: tap70d147cb-a8 (unregistering): left promiscuous mode
Oct 07 14:33:16 compute-0 NetworkManager[44949]: <info>  [1759847596.6788] device (tap70d147cb-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:33:16 compute-0 ovn_controller[151684]: 2025-10-07T14:33:16Z|01070|binding|INFO|Releasing lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c from this chassis (sb_readonly=0)
Oct 07 14:33:16 compute-0 ovn_controller[151684]: 2025-10-07T14:33:16Z|01071|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c down in Southbound
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 ovn_controller[151684]: 2025-10-07T14:33:16Z|01072|binding|INFO|Removing iface tap70d147cb-a8 ovn-installed in OVS
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.700 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b2:f9 10.100.0.3'], port_security=['fa:16:3e:96:b2:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08184d43-2bd1-4f46-9bdf-63437c87a8ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48fde043-e83e-45b7-ab13-37b209fd562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8bbba0f-6faa-43dc-9c06-619017376051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea6d7f11-49ef-4d9e-9bd1-b8e683eae537, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=70d147cb-a82e-4f80-88cf-016ec19f5a2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.701 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 70d147cb-a82e-4f80-88cf-016ec19f5a2c in datapath 48fde043-e83e-45b7-ab13-37b209fd562c unbound from our chassis
Oct 07 14:33:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.702 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48fde043-e83e-45b7-ab13-37b209fd562c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e56cd581-d729-4405-926b-3ba4437cc633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.710 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c namespace which is not needed anymore
Oct 07 14:33:16 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 07 14:33:16 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 13.401s CPU time.
Oct 07 14:33:16 compute-0 systemd-machined[214580]: Machine qemu-133-instance-0000006a terminated.
Oct 07 14:33:16 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : haproxy version is 2.8.14-c23fe91
Oct 07 14:33:16 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : path to executable is /usr/sbin/haproxy
Oct 07 14:33:16 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [WARNING]  (371355) : Exiting Master process...
Oct 07 14:33:16 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [ALERT]    (371355) : Current worker (371357) exited with code 143 (Terminated)
Oct 07 14:33:16 compute-0 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [WARNING]  (371355) : All workers exited. Exiting... (0)
Oct 07 14:33:16 compute-0 systemd[1]: libpod-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope: Deactivated successfully.
Oct 07 14:33:16 compute-0 podman[371903]: 2025-10-07 14:33:16.848255783 +0000 UTC m=+0.050276374 container died 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.892 2 INFO nova.virt.libvirt.driver [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance destroyed successfully.
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.893 2 DEBUG nova.objects.instance [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7-userdata-shm.mount: Deactivated successfully.
Oct 07 14:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc61ebfbc859062541ed2a2564be14d195084d66aefaf18d107a00c5495bc9e-merged.mount: Deactivated successfully.
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.914 2 DEBUG nova.virt.libvirt.vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:32:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:33:12Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.914 2 DEBUG nova.network.os_vif_util [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.915 2 DEBUG nova.network.os_vif_util [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.916 2 DEBUG os_vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70d147cb-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:16 compute-0 nova_compute[259550]: 2025-10-07 14:33:16.924 2 INFO os_vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8')
Oct 07 14:33:16 compute-0 podman[371903]: 2025-10-07 14:33:16.926801802 +0000 UTC m=+0.128822383 container cleanup 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:33:16 compute-0 systemd[1]: libpod-conmon-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope: Deactivated successfully.
Oct 07 14:33:17 compute-0 podman[371947]: 2025-10-07 14:33:17.01765305 +0000 UTC m=+0.059643965 container remove 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.027 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73362a1-9f36-482b-b7bf-bf29de252772]: (4, ('Tue Oct  7 02:33:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c (2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7)\n2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7\nTue Oct  7 02:33:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c (2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7)\n2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[04872295-79a6-45d1-92ea-e1d0e4fb0123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.031 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48fde043-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:17 compute-0 kernel: tap48fde043-e0: left promiscuous mode
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.056 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd1cde6-3082-46cc-b064-80a1cb73ddee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.086 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2834a1-a5ab-4536-b559-45b755b90512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.088 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23def0b7-191f-4fe0-8f4e-81848081e789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b642b1b-e59b-4935-bd46-5c976faa0b7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815076, 'reachable_time': 28892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371973, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d48fde043\x2de83e\x2d45b7\x2dab13\x2d37b209fd562c.mount: Deactivated successfully.
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.118 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:33:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.118 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[95d81e4c-5457-4434-ae7c-22f05efeed07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:17 compute-0 ceph-mon[74295]: pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 67 op/s
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.381 2 INFO nova.virt.libvirt.driver [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deleting instance files /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad_del
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.382 2 INFO nova.virt.libvirt.driver [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deletion of /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad_del complete
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.425 2 INFO nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.426 2 DEBUG oslo.service.loopingcall [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.426 2 DEBUG nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:33:17 compute-0 nova_compute[259550]: 2025-10-07 14:33:17.427 2 DEBUG nova.network.neutron [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:33:17 compute-0 ovn_controller[151684]: 2025-10-07T14:33:17Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:d1:60 10.100.0.11
Oct 07 14:33:17 compute-0 ovn_controller[151684]: 2025-10-07T14:33:17Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:d1:60 10.100.0.11
Oct 07 14:33:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 64 op/s
Oct 07 14:33:18 compute-0 ceph-mon[74295]: pgmap v2126: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 64 op/s
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.681 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:33:18 compute-0 nova_compute[259550]: 2025-10-07 14:33:18.683 2 WARNING nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received unexpected event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with vm_state active and task_state deleting.
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.414 2 DEBUG nova.network.neutron [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.451 2 INFO nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 2.02 seconds to deallocate network for instance.
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.490 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.490 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.498 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.498 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.520 2 DEBUG nova.compute.manager [req-d53defe6-ad8d-42d9-b63a-9b36ee9d1faf req-2d74a42f-eba3-480c-a899-e2f3d1c9d233 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-deleted-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.537 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:19 compute-0 nova_compute[259550]: 2025-10-07 14:33:19.595 2 DEBUG oslo_concurrency.processutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672913450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.073 2 DEBUG oslo_concurrency.processutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.079 2 DEBUG nova.compute.provider_tree [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.098 2 DEBUG nova.scheduler.client.report [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2672913450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.127 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.166 2 INFO nova.scheduler.client.report [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 08184d43-2bd1-4f46-9bdf-63437c87a8ad
Oct 07 14:33:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 228 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Oct 07 14:33:20 compute-0 nova_compute[259550]: 2025-10-07 14:33:20.261 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:21 compute-0 nova_compute[259550]: 2025-10-07 14:33:21.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:22 compute-0 ceph-mon[74295]: pgmap v2127: 305 pgs: 305 active+clean; 228 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Oct 07 14:33:22 compute-0 nova_compute[259550]: 2025-10-07 14:33:22.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:33:22
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'volumes']
Oct 07 14:33:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:33:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:33:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:33:23 compute-0 ovn_controller[151684]: 2025-10-07T14:33:23Z|01073|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct 07 14:33:24 compute-0 nova_compute[259550]: 2025-10-07 14:33:24.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:24 compute-0 podman[371998]: 2025-10-07 14:33:24.085060444 +0000 UTC m=+0.053171222 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 07 14:33:24 compute-0 podman[371997]: 2025-10-07 14:33:24.092712898 +0000 UTC m=+0.062334176 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 07 14:33:24 compute-0 ceph-mon[74295]: pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:33:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:24 compute-0 sudo[372037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:24 compute-0 sudo[372037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:24 compute-0 sudo[372037]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:24 compute-0 sudo[372062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:24 compute-0 sudo[372062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:24 compute-0 sudo[372062]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:24 compute-0 sudo[372087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:24 compute-0 sudo[372087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:24 compute-0 sudo[372087]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:24 compute-0 sudo[372112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:33:24 compute-0 sudo[372112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:25 compute-0 sudo[372112]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:25 compute-0 sudo[372168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:25 compute-0 sudo[372168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:25 compute-0 sudo[372168]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:25 compute-0 sudo[372193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:25 compute-0 sudo[372193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:25 compute-0 sudo[372193]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:25 compute-0 sudo[372218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:25 compute-0 sudo[372218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:25 compute-0 sudo[372218]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:25 compute-0 sudo[372243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 07 14:33:25 compute-0 sudo[372243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:25 compute-0 ovn_controller[151684]: 2025-10-07T14:33:25Z|01074|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct 07 14:33:25 compute-0 nova_compute[259550]: 2025-10-07 14:33:25.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:25 compute-0 sudo[372243]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:33:26 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:33:26 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:26 compute-0 sudo[372287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:26 compute-0 sudo[372287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:26 compute-0 sudo[372287]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:26 compute-0 sudo[372312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:26 compute-0 sudo[372312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:26 compute-0 sudo[372312]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:26 compute-0 ceph-mon[74295]: pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:26 compute-0 sudo[372337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:26 compute-0 sudo[372337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:26 compute-0 sudo[372337]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:26 compute-0 sudo[372362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- inventory --format=json-pretty --filter-for-batch
Oct 07 14:33:26 compute-0 sudo[372362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.725594879 +0000 UTC m=+0.075116198 container create c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.672079839 +0000 UTC m=+0.021601138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:26 compute-0 systemd[1]: Started libpod-conmon-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope.
Oct 07 14:33:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:26 compute-0 nova_compute[259550]: 2025-10-07 14:33:26.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.94267443 +0000 UTC m=+0.292195729 container init c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.950904369 +0000 UTC m=+0.300425648 container start c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:33:26 compute-0 festive_lederberg[372442]: 167 167
Oct 07 14:33:26 compute-0 systemd[1]: libpod-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope: Deactivated successfully.
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.969523847 +0000 UTC m=+0.319045146 container attach c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:33:26 compute-0 podman[372426]: 2025-10-07 14:33:26.969962649 +0000 UTC m=+0.319483938 container died c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd3968c017fdbbab46d149f971e7366e5dcbb05e94dead09fea5690b653b1e2-merged.mount: Deactivated successfully.
Oct 07 14:33:27 compute-0 nova_compute[259550]: 2025-10-07 14:33:27.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:27 compute-0 podman[372426]: 2025-10-07 14:33:27.169737637 +0000 UTC m=+0.519258946 container remove c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:33:27 compute-0 systemd[1]: libpod-conmon-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope: Deactivated successfully.
Oct 07 14:33:27 compute-0 podman[372466]: 2025-10-07 14:33:27.393694462 +0000 UTC m=+0.023841908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:27 compute-0 podman[372466]: 2025-10-07 14:33:27.523182431 +0000 UTC m=+0.153329887 container create bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 07 14:33:27 compute-0 systemd[1]: Started libpod-conmon-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope.
Oct 07 14:33:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:27 compute-0 podman[372466]: 2025-10-07 14:33:27.749513679 +0000 UTC m=+0.379661215 container init bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:33:27 compute-0 podman[372466]: 2025-10-07 14:33:27.766115552 +0000 UTC m=+0.396263008 container start bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:33:27 compute-0 podman[372466]: 2025-10-07 14:33:27.847436165 +0000 UTC m=+0.477583661 container attach bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:33:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:28 compute-0 ceph-mon[74295]: pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:29 compute-0 jovial_brattain[372483]: [
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:     {
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "available": false,
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "ceph_device": false,
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "lsm_data": {},
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "lvs": [],
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "path": "/dev/sr0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "rejected_reasons": [
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "Has a FileSystem",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "Insufficient space (<5GB)"
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         ],
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         "sys_api": {
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "actuators": null,
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "device_nodes": "sr0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "devname": "sr0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "human_readable_size": "482.00 KB",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "id_bus": "ata",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "model": "QEMU DVD-ROM",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "nr_requests": "2",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "parent": "/dev/sr0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "partitions": {},
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "path": "/dev/sr0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "removable": "1",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "rev": "2.5+",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "ro": "0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "rotational": "0",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "sas_address": "",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "sas_device_handle": "",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "scheduler_mode": "mq-deadline",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "sectors": 0,
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "sectorsize": "2048",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "size": 493568.0,
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "support_discard": "2048",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "type": "disk",
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:             "vendor": "QEMU"
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:         }
Oct 07 14:33:29 compute-0 jovial_brattain[372483]:     }
Oct 07 14:33:29 compute-0 jovial_brattain[372483]: ]
Oct 07 14:33:29 compute-0 systemd[1]: libpod-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Deactivated successfully.
Oct 07 14:33:29 compute-0 systemd[1]: libpod-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Consumed 1.514s CPU time.
Oct 07 14:33:29 compute-0 podman[372466]: 2025-10-07 14:33:29.374215381 +0000 UTC m=+2.004362807 container died bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:33:29 compute-0 ceph-mon[74295]: pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d-merged.mount: Deactivated successfully.
Oct 07 14:33:29 compute-0 podman[372466]: 2025-10-07 14:33:29.766431701 +0000 UTC m=+2.396579157 container remove bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:33:29 compute-0 systemd[1]: libpod-conmon-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Deactivated successfully.
Oct 07 14:33:29 compute-0 sudo[372362]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1f5b9385-f581-4053-9bcc-024fe0eb852e does not exist
Oct 07 14:33:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0a0d9797-ad92-47fc-a805-d6b4cc98c4fe does not exist
Oct 07 14:33:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ccb41a5f-8711-4a5d-a9e6-109e9fd1397f does not exist
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:33:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:33:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:33:29 compute-0 nova_compute[259550]: 2025-10-07 14:33:29.898 2 DEBUG nova.compute.manager [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:29 compute-0 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG nova.compute.manager [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:29 compute-0 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:29 compute-0 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:29 compute-0 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:33:29 compute-0 sudo[374444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:29 compute-0 sudo[374444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:29 compute-0 sudo[374444]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:29 compute-0 sudo[374469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:29 compute-0 sudo[374469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:29 compute-0 sudo[374469]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:30 compute-0 sudo[374494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:30 compute-0 sudo[374494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:30 compute-0 sudo[374494]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:30 compute-0 sudo[374519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:33:30 compute-0 sudo[374519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.362 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.362 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.364 2 INFO nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Terminating instance
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.365 2 DEBUG nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.472902989 +0000 UTC m=+0.052161085 container create eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:33:30 compute-0 kernel: tapc6894b29-6b (unregistering): left promiscuous mode
Oct 07 14:33:30 compute-0 NetworkManager[44949]: <info>  [1759847610.5181] device (tapc6894b29-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:33:30 compute-0 systemd[1]: Started libpod-conmon-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope.
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.438181291 +0000 UTC m=+0.017439387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:30 compute-0 ovn_controller[151684]: 2025-10-07T14:33:30Z|01075|binding|INFO|Releasing lport c6894b29-6b20-445c-991a-9aefefb3823c from this chassis (sb_readonly=0)
Oct 07 14:33:30 compute-0 ovn_controller[151684]: 2025-10-07T14:33:30Z|01076|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c down in Southbound
Oct 07 14:33:30 compute-0 ovn_controller[151684]: 2025-10-07T14:33:30Z|01077|binding|INFO|Removing iface tapc6894b29-6b ovn-installed in OVS
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.626 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], port_security=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe28:d160/64', 'neutron:device_id': 'd3fa3175-2379-4c66-9d83-0a37f5559db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c6894b29-6b20-445c-991a-9aefefb3823c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.627 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c6894b29-6b20-445c-991a-9aefefb3823c in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.628 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct 07 14:33:30 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 07 14:33:30 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 13.735s CPU time.
Oct 07 14:33:30 compute-0 systemd-machined[214580]: Machine qemu-134-instance-0000006b terminated.
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.64286146 +0000 UTC m=+0.222119566 container init eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd9da96-3728-416f-8c90-94416cc7cac8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.653787112 +0000 UTC m=+0.233045188 container start eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:33:30 compute-0 crazy_goldberg[374606]: 167 167
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.659355481 +0000 UTC m=+0.238613587 container attach eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:33:30 compute-0 systemd[1]: libpod-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope: Deactivated successfully.
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.663692067 +0000 UTC m=+0.242950133 container died eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.683 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[760ce19e-58b5-46cb-94d4-1deaa37db4ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.686 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4d16d45e-4f22-440d-a192-8bb4d20f35f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.713 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8181a3-a9c4-4e43-89fc-5a30aec3f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb54edd1a8661dec284dab482b4d3a949ebfd313e354f11f94b2ff4dc2929fb6-merged.mount: Deactivated successfully.
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4f3dd7-015d-4a82-94d4-9db4fa8ca0da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 24827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374626, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df3b7791-1d22-4f62-a856-83ee3ac78e75]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812653, 'tstamp': 812653}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374627, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812656, 'tstamp': 812656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374627, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.750 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:30 compute-0 podman[374584]: 2025-10-07 14:33:30.788695506 +0000 UTC m=+0.367953582 container remove eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.804 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.805 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.806 2 INFO nova.virt.libvirt.driver [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance destroyed successfully.
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.806 2 DEBUG nova.objects.instance [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:30 compute-0 systemd[1]: libpod-conmon-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope: Deactivated successfully.
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.830 2 DEBUG nova.virt.libvirt.vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:33:05Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.831 2 DEBUG nova.network.os_vif_util [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.831 2 DEBUG nova.network.os_vif_util [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.832 2 DEBUG os_vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6894b29-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:33:30 compute-0 nova_compute[259550]: 2025-10-07 14:33:30.839 2 INFO os_vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b')
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:33:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:33:31 compute-0 podman[374659]: 2025-10-07 14:33:31.005621203 +0000 UTC m=+0.085191328 container create 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 14:33:31 compute-0 podman[374659]: 2025-10-07 14:33:30.942875706 +0000 UTC m=+0.022445861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:31 compute-0 systemd[1]: Started libpod-conmon-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope.
Oct 07 14:33:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:31 compute-0 podman[374659]: 2025-10-07 14:33:31.141457053 +0000 UTC m=+0.221027188 container init 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:33:31 compute-0 podman[374659]: 2025-10-07 14:33:31.148316216 +0000 UTC m=+0.227886341 container start 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:33:31 compute-0 podman[374659]: 2025-10-07 14:33:31.15632453 +0000 UTC m=+0.235894655 container attach 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:33:31 compute-0 nova_compute[259550]: 2025-10-07 14:33:31.891 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847596.890288, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:31 compute-0 nova_compute[259550]: 2025-10-07 14:33:31.893 2 INFO nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Stopped (Lifecycle Event)
Oct 07 14:33:31 compute-0 nova_compute[259550]: 2025-10-07 14:33:31.936 2 DEBUG nova.compute.manager [None req-7982060a-feb8-427b-a4cb-af31037be146 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.079 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.079 2 WARNING nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received unexpected event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with vm_state active and task_state deleting.
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:32 compute-0 ceph-mon[74295]: pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 259 KiB/s wr, 52 op/s
Oct 07 14:33:32 compute-0 eager_goldwasser[374679]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:33:32 compute-0 eager_goldwasser[374679]: --> relative data size: 1.0
Oct 07 14:33:32 compute-0 eager_goldwasser[374679]: --> All data devices are unavailable
Oct 07 14:33:32 compute-0 systemd[1]: libpod-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Deactivated successfully.
Oct 07 14:33:32 compute-0 systemd[1]: libpod-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Consumed 1.021s CPU time.
Oct 07 14:33:32 compute-0 podman[374709]: 2025-10-07 14:33:32.569426908 +0000 UTC m=+0.026517049 container died 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:33:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145-merged.mount: Deactivated successfully.
Oct 07 14:33:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:33:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:33:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:33:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:33:32 compute-0 podman[374709]: 2025-10-07 14:33:32.739393931 +0000 UTC m=+0.196484032 container remove 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:33:32 compute-0 systemd[1]: libpod-conmon-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Deactivated successfully.
Oct 07 14:33:32 compute-0 sudo[374519]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:32 compute-0 sudo[374726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:32 compute-0 sudo[374726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:32 compute-0 sudo[374726]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:32 compute-0 sudo[374751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:32 compute-0 sudo[374751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:32 compute-0 sudo[374751]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.960 2 INFO nova.virt.libvirt.driver [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deleting instance files /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8_del
Oct 07 14:33:32 compute-0 nova_compute[259550]: 2025-10-07 14:33:32.961 2 INFO nova.virt.libvirt.driver [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deletion of /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8_del complete
Oct 07 14:33:32 compute-0 sudo[374776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:32 compute-0 sudo[374776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:32 compute-0 sudo[374776]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.024 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.024 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:33 compute-0 sudo[374801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:33:33 compute-0 sudo[374801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.052 2 INFO nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 2.69 seconds to destroy the instance on the hypervisor.
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG oslo.service.loopingcall [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG nova.network.neutron [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:33:33 compute-0 nova_compute[259550]: 2025-10-07 14:33:33.068 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:33:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.403871775 +0000 UTC m=+0.042868306 container create 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:33:33 compute-0 systemd[1]: Started libpod-conmon-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope.
Oct 07 14:33:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.384462286 +0000 UTC m=+0.023458837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.499550612 +0000 UTC m=+0.138547143 container init 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.508366138 +0000 UTC m=+0.147362669 container start 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:33:33 compute-0 gallant_noyce[374882]: 167 167
Oct 07 14:33:33 compute-0 systemd[1]: libpod-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope: Deactivated successfully.
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.539744596 +0000 UTC m=+0.178741157 container attach 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.541099112 +0000 UTC m=+0.180095653 container died 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b495e4b5db4ec2e24ba10586650741ed66d8eab834d8796556a385fd00e40e-merged.mount: Deactivated successfully.
Oct 07 14:33:33 compute-0 podman[374866]: 2025-10-07 14:33:33.665908847 +0000 UTC m=+0.304905368 container remove 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:33:33 compute-0 systemd[1]: libpod-conmon-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope: Deactivated successfully.
Oct 07 14:33:33 compute-0 podman[374909]: 2025-10-07 14:33:33.848776194 +0000 UTC m=+0.051082877 container create de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:33:33 compute-0 systemd[1]: Started libpod-conmon-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope.
Oct 07 14:33:33 compute-0 podman[374909]: 2025-10-07 14:33:33.821705461 +0000 UTC m=+0.024012154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:33 compute-0 podman[374909]: 2025-10-07 14:33:33.945129908 +0000 UTC m=+0.147436611 container init de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:33:33 compute-0 podman[374909]: 2025-10-07 14:33:33.951140279 +0000 UTC m=+0.153446962 container start de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:33:33 compute-0 podman[374909]: 2025-10-07 14:33:33.968732199 +0000 UTC m=+0.171038902 container attach de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:33:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 132 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 24 KiB/s wr, 91 op/s
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.285 2 DEBUG nova.network.neutron [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.315 2 INFO nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 1.26 seconds to deallocate network for instance.
Oct 07 14:33:34 compute-0 ceph-mon[74295]: pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 259 KiB/s wr, 52 op/s
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.369 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.369 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.417 2 DEBUG nova.compute.manager [req-da7d357d-045d-4451-a071-2a38fbd9d749 req-8cf76ac2-e8bc-4418-b124-9f154619f5f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-deleted-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.454 2 DEBUG oslo_concurrency.processutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:34 compute-0 zen_liskov[374925]: {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     "0": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "devices": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "/dev/loop3"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             ],
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_name": "ceph_lv0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_size": "21470642176",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "name": "ceph_lv0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "tags": {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_name": "ceph",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.crush_device_class": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.encrypted": "0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_id": "0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.vdo": "0"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             },
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "vg_name": "ceph_vg0"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         }
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     ],
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     "1": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "devices": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "/dev/loop4"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             ],
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_name": "ceph_lv1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_size": "21470642176",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "name": "ceph_lv1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "tags": {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_name": "ceph",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.crush_device_class": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.encrypted": "0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_id": "1",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.vdo": "0"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             },
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "vg_name": "ceph_vg1"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         }
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     ],
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     "2": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "devices": [
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "/dev/loop5"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             ],
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_name": "ceph_lv2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_size": "21470642176",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "name": "ceph_lv2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "tags": {
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.cluster_name": "ceph",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.crush_device_class": "",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.encrypted": "0",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osd_id": "2",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:                 "ceph.vdo": "0"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             },
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "type": "block",
Oct 07 14:33:34 compute-0 zen_liskov[374925]:             "vg_name": "ceph_vg2"
Oct 07 14:33:34 compute-0 zen_liskov[374925]:         }
Oct 07 14:33:34 compute-0 zen_liskov[374925]:     ]
Oct 07 14:33:34 compute-0 zen_liskov[374925]: }
Oct 07 14:33:34 compute-0 systemd[1]: libpod-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope: Deactivated successfully.
Oct 07 14:33:34 compute-0 podman[374909]: 2025-10-07 14:33:34.765418327 +0000 UTC m=+0.967725010 container died de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289-merged.mount: Deactivated successfully.
Oct 07 14:33:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425001993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.879 2 DEBUG oslo_concurrency.processutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.886 2 DEBUG nova.compute.provider_tree [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.917 2 DEBUG nova.scheduler.client.report [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.938 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:34 compute-0 nova_compute[259550]: 2025-10-07 14:33:34.975 2 INFO nova.scheduler.client.report [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance d3fa3175-2379-4c66-9d83-0a37f5559db8
Oct 07 14:33:35 compute-0 podman[374909]: 2025-10-07 14:33:35.026173324 +0000 UTC m=+1.228480007 container remove de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 07 14:33:35 compute-0 systemd[1]: libpod-conmon-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope: Deactivated successfully.
Oct 07 14:33:35 compute-0 nova_compute[259550]: 2025-10-07 14:33:35.063 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:35 compute-0 sudo[374801]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:35 compute-0 sudo[374969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:35 compute-0 sudo[374969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:35 compute-0 sudo[374969]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:35 compute-0 sudo[374994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:33:35 compute-0 sudo[374994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:35 compute-0 sudo[374994]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:35 compute-0 sudo[375019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:35 compute-0 sudo[375019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:35 compute-0 sudo[375019]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:35 compute-0 sudo[375044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:33:35 compute-0 sudo[375044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:35 compute-0 ceph-mon[74295]: pgmap v2134: 305 pgs: 305 active+clean; 132 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 24 KiB/s wr, 91 op/s
Oct 07 14:33:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3425001993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:35 compute-0 nova_compute[259550]: 2025-10-07 14:33:35.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.670380708 +0000 UTC m=+0.049258117 container create 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:33:35 compute-0 systemd[1]: Started libpod-conmon-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope.
Oct 07 14:33:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.640457128 +0000 UTC m=+0.019334557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:35.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.830140166 +0000 UTC m=+0.209017605 container init 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:33:35 compute-0 nova_compute[259550]: 2025-10-07 14:33:35.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.837629496 +0000 UTC m=+0.216506905 container start 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:33:35 compute-0 peaceful_feistel[375125]: 167 167
Oct 07 14:33:35 compute-0 systemd[1]: libpod-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope: Deactivated successfully.
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.914949363 +0000 UTC m=+0.293826772 container attach 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:33:35 compute-0 podman[375109]: 2025-10-07 14:33:35.916100583 +0000 UTC m=+0.294977992 container died 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-03cbdda2e848ef06b00ccdb729eb1ccdb9b01341a744b6453b6ab9346b1b28f6-merged.mount: Deactivated successfully.
Oct 07 14:33:36 compute-0 podman[375109]: 2025-10-07 14:33:36.148176384 +0000 UTC m=+0.527053793 container remove 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:33:36 compute-0 systemd[1]: libpod-conmon-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope: Deactivated successfully.
Oct 07 14:33:36 compute-0 nova_compute[259550]: 2025-10-07 14:33:36.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 07 14:33:36 compute-0 podman[375150]: 2025-10-07 14:33:36.368513943 +0000 UTC m=+0.081683375 container create 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:33:36 compute-0 podman[375150]: 2025-10-07 14:33:36.312449905 +0000 UTC m=+0.025619377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:33:36 compute-0 systemd[1]: Started libpod-conmon-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope.
Oct 07 14:33:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:36 compute-0 podman[375150]: 2025-10-07 14:33:36.492996488 +0000 UTC m=+0.206165920 container init 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:33:36 compute-0 podman[375150]: 2025-10-07 14:33:36.501385082 +0000 UTC m=+0.214554504 container start 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:33:36 compute-0 podman[375150]: 2025-10-07 14:33:36.510673471 +0000 UTC m=+0.223842923 container attach 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:37 compute-0 ceph-mon[74295]: pgmap v2135: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]: {
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_id": 2,
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "type": "bluestore"
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     },
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_id": 1,
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "type": "bluestore"
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     },
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_id": 0,
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:         "type": "bluestore"
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]:     }
Oct 07 14:33:37 compute-0 keen_chaplygin[375167]: }
Oct 07 14:33:37 compute-0 systemd[1]: libpod-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Deactivated successfully.
Oct 07 14:33:37 compute-0 systemd[1]: libpod-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Consumed 1.009s CPU time.
Oct 07 14:33:37 compute-0 podman[375150]: 2025-10-07 14:33:37.509363726 +0000 UTC m=+1.222533148 container died 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac-merged.mount: Deactivated successfully.
Oct 07 14:33:37 compute-0 podman[375150]: 2025-10-07 14:33:37.572871933 +0000 UTC m=+1.286041355 container remove 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:33:37 compute-0 systemd[1]: libpod-conmon-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Deactivated successfully.
Oct 07 14:33:37 compute-0 sudo[375044]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:33:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:33:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b02e064d-cb06-4a3f-9c24-3e1815d9147b does not exist
Oct 07 14:33:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d12b17b3-f191-4cbf-a851-f6d02ccdae28 does not exist
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.623457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617623491, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1648, "num_deletes": 251, "total_data_size": 2587203, "memory_usage": 2627536, "flush_reason": "Manual Compaction"}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617635520, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2517267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43467, "largest_seqno": 45114, "table_properties": {"data_size": 2509778, "index_size": 4432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15615, "raw_average_key_size": 19, "raw_value_size": 2494730, "raw_average_value_size": 3186, "num_data_blocks": 199, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847454, "oldest_key_time": 1759847454, "file_creation_time": 1759847617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 12108 microseconds, and 5389 cpu microseconds.
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.635561) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2517267 bytes OK
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.635578) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.636997) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637009) EVENT_LOG_v1 {"time_micros": 1759847617637005, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2580096, prev total WAL file size 2580096, number of live WAL files 2.
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2458KB)], [101(7732KB)]
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617637819, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10435542, "oldest_snapshot_seqno": -1}
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.636 2 DEBUG nova.compute.manager [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG nova.compute.manager [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:33:37 compute-0 sudo[375212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:33:37 compute-0 sudo[375212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:37 compute-0 sudo[375212]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6563 keys, 8724631 bytes, temperature: kUnknown
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617683395, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8724631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8680962, "index_size": 26149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 170677, "raw_average_key_size": 26, "raw_value_size": 8563505, "raw_average_value_size": 1304, "num_data_blocks": 1022, "num_entries": 6563, "num_filter_entries": 6563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.683604) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8724631 bytes
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.684726) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.7 rd, 191.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7077, records dropped: 514 output_compression: NoCompression
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.684740) EVENT_LOG_v1 {"time_micros": 1759847617684733, "job": 60, "event": "compaction_finished", "compaction_time_micros": 45633, "compaction_time_cpu_micros": 19680, "output_level": 6, "num_output_files": 1, "total_output_size": 8724631, "num_input_records": 7077, "num_output_records": 6563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617685189, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617686309, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:33:37 compute-0 sudo[375237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:33:37 compute-0 sudo[375237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:33:37 compute-0 sudo[375237]: pam_unix(sudo:session): session closed for user root
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.812 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.812 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.816 2 INFO nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Terminating instance
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.817 2 DEBUG nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:33:37 compute-0 kernel: tap52f43128-c8 (unregistering): left promiscuous mode
Oct 07 14:33:37 compute-0 NetworkManager[44949]: <info>  [1759847617.8693] device (tap52f43128-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:33:37 compute-0 ovn_controller[151684]: 2025-10-07T14:33:37Z|01078|binding|INFO|Releasing lport 52f43128-c899-4d76-9e65-c99941c834d4 from this chassis (sb_readonly=0)
Oct 07 14:33:37 compute-0 ovn_controller[151684]: 2025-10-07T14:33:37Z|01079|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 down in Southbound
Oct 07 14:33:37 compute-0 ovn_controller[151684]: 2025-10-07T14:33:37Z|01080|binding|INFO|Removing iface tap52f43128-c8 ovn-installed in OVS
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.899 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.903 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis
Oct 07 14:33:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.904 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:33:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0da81019-f4da-4e04-b6c9-3145b1ff1c41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.906 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc namespace which is not needed anymore
Oct 07 14:33:37 compute-0 nova_compute[259550]: 2025-10-07 14:33:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:37 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 07 14:33:37 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 16.057s CPU time.
Oct 07 14:33:37 compute-0 systemd-machined[214580]: Machine qemu-132-instance-00000069 terminated.
Oct 07 14:33:38 compute-0 kernel: tap52f43128-c8: entered promiscuous mode
Oct 07 14:33:38 compute-0 NetworkManager[44949]: <info>  [1759847618.0372] manager: (tap52f43128-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Oct 07 14:33:38 compute-0 kernel: tap52f43128-c8 (unregistering): left promiscuous mode
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01081|binding|INFO|Claiming lport 52f43128-c899-4d76-9e65-c99941c834d4 for this chassis.
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01082|binding|INFO|52f43128-c899-4d76-9e65-c99941c834d4: Claiming fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : haproxy version is 2.8.14-c23fe91
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : path to executable is /usr/sbin/haproxy
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : Exiting Master process...
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : Exiting Master process...
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [ALERT]    (370753) : Current worker (370755) exited with code 143 (Terminated)
Oct 07 14:33:38 compute-0 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : All workers exited. Exiting... (0)
Oct 07 14:33:38 compute-0 systemd[1]: libpod-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope: Deactivated successfully.
Oct 07 14:33:38 compute-0 conmon[370748]: conmon 008e9149d07e11d61fbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope/container/memory.events
Oct 07 14:33:38 compute-0 podman[375286]: 2025-10-07 14:33:38.06187819 +0000 UTC m=+0.054092117 container died 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01083|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 ovn-installed in OVS
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01084|if_status|INFO|Dropped 2 log messages in last 627 seconds (most recently, 627 seconds ago) due to excessive rate
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01085|if_status|INFO|Not setting lport 52f43128-c899-4d76-9e65-c99941c834d4 down as sb is readonly
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.064 2 INFO nova.virt.libvirt.driver [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance destroyed successfully.
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.064 2 DEBUG nova.objects.instance [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 ovn_controller[151684]: 2025-10-07T14:33:38Z|01086|binding|INFO|Releasing lport 52f43128-c899-4d76-9e65-c99941c834d4 from this chassis (sb_readonly=0)
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.091 2 DEBUG nova.virt.libvirt.vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:32:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:32:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.091 2 DEBUG nova.network.os_vif_util [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.092 2 DEBUG nova.network.os_vif_util [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.093 2 DEBUG os_vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f43128-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.101 2 INFO os_vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8')
Oct 07 14:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e23988a13804a7caa057b6ca3196289d2a8449fff2e969f7ade775e8fde0162-merged.mount: Deactivated successfully.
Oct 07 14:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:33:38 compute-0 podman[375286]: 2025-10-07 14:33:38.123057835 +0000 UTC m=+0.115271732 container cleanup 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:33:38 compute-0 systemd[1]: libpod-conmon-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope: Deactivated successfully.
Oct 07 14:33:38 compute-0 podman[375313]: 2025-10-07 14:33:38.185675107 +0000 UTC m=+0.061144514 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:33:38 compute-0 podman[375336]: 2025-10-07 14:33:38.194830292 +0000 UTC m=+0.050906561 container remove 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.201 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0691181-72cf-43c8-8377-51468cf99f02]: (4, ('Tue Oct  7 02:33:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc (008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b)\n008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b\nTue Oct  7 02:33:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc (008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b)\n008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b21b002-e057-47c0-bcf9-abc70715960a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 kernel: tap1da6903e-10: left promiscuous mode
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.224 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ee102-1b08-4ea9-b58d-8ecdc8add1cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[94c10817-d68e-4f09-a447-4aa71e4ee962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e0b9da-c571-4ee7-8961-4e3099c44ddf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7126b42a-74b3-4dac-a45a-6ff5f44844e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812632, 'reachable_time': 37875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375391, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 podman[375337]: 2025-10-07 14:33:38.272030065 +0000 UTC m=+0.113848183 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:33:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d1da6903e\x2d17a3\x2d4ac8\x2db5a0\x2d50ed4bf377bc.mount: Deactivated successfully.
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.274 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.274 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6939fd-9b28-4d75-b8ca-58c1aa6a6a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.275 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.276 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.277 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f42e266-6273-4760-bde9-626aa7a5b7f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.277 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.278 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:33:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42387c74-8f5d-43c9-a732-e1ba148e47e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.556 2 INFO nova.virt.libvirt.driver [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deleting instance files /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802_del
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.557 2 INFO nova.virt.libvirt.driver [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deletion of /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802_del complete
Oct 07 14:33:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.810 2 INFO nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 0.99 seconds to destroy the instance on the hypervisor.
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.810 2 DEBUG oslo.service.loopingcall [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.810 2 DEBUG nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:33:38 compute-0 nova_compute[259550]: 2025-10-07 14:33:38.811 2 DEBUG nova.network.neutron [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.685 2 DEBUG nova.network.neutron [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.704 2 INFO nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 0.89 seconds to deallocate network for instance.
Oct 07 14:33:39 compute-0 ceph-mon[74295]: pgmap v2136: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.753 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.754 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.796 2 DEBUG nova.compute.manager [req-89cd28da-5b8a-48a5-93aa-6d8206645dc9 req-25e8dc78-c521-4c5b-951a-923a7af5a05a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-deleted-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:39 compute-0 nova_compute[259550]: 2025-10-07 14:33:39.813 2 DEBUG oslo_concurrency.processutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463791443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.241 2 DEBUG oslo_concurrency.processutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.246 2 DEBUG nova.compute.provider_tree [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 63 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 17 KiB/s wr, 124 op/s
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.271 2 DEBUG nova.scheduler.client.report [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.298 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.326 2 INFO nova.scheduler.client.report [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 0af0082a-1adc-40e7-b254-88e03182e802
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.436 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3463791443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.989 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:40 compute-0 nova_compute[259550]: 2025-10-07 14:33:40.990 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:41 compute-0 nova_compute[259550]: 2025-10-07 14:33:41.009 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:41 compute-0 ceph-mon[74295]: pgmap v2137: 305 pgs: 305 active+clean; 63 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 17 KiB/s wr, 124 op/s
Oct 07 14:33:42 compute-0 nova_compute[259550]: 2025-10-07 14:33:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 5.0 KiB/s wr, 121 op/s
Oct 07 14:33:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:43 compute-0 nova_compute[259550]: 2025-10-07 14:33:43.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:43 compute-0 ceph-mon[74295]: pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 5.0 KiB/s wr, 121 op/s
Oct 07 14:33:44 compute-0 nova_compute[259550]: 2025-10-07 14:33:44.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Oct 07 14:33:45 compute-0 nova_compute[259550]: 2025-10-07 14:33:45.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847610.8020964, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:45 compute-0 nova_compute[259550]: 2025-10-07 14:33:45.804 2 INFO nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Stopped (Lifecycle Event)
Oct 07 14:33:45 compute-0 nova_compute[259550]: 2025-10-07 14:33:45.832 2 DEBUG nova.compute.manager [None req-a52bd597-e808-4e32-bba6-de73c46aaf58 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:45 compute-0 nova_compute[259550]: 2025-10-07 14:33:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:45 compute-0 nova_compute[259550]: 2025-10-07 14:33:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:46 compute-0 ceph-mon[74295]: pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Oct 07 14:33:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 07 14:33:46 compute-0 nova_compute[259550]: 2025-10-07 14:33:46.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:46 compute-0 nova_compute[259550]: 2025-10-07 14:33:46.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:46 compute-0 nova_compute[259550]: 2025-10-07 14:33:46.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.854 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.891 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:33:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.984 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.985 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.993 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:33:47 compute-0 nova_compute[259550]: 2025-10-07 14:33:47.994 2 INFO nova.compute.claims [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:48 compute-0 ceph-mon[74295]: pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct 07 14:33:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.323 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220293337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.755 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.761 2 DEBUG nova.compute.provider_tree [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.788 2 DEBUG nova.scheduler.client.report [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.834 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.836 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.888 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.888 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.909 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.930 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:33:48 compute-0 nova_compute[259550]: 2025-10-07 14:33:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.040 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.042 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.042 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating image(s)
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.063 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.083 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.103 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.107 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.145 2 DEBUG nova.policy [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.180 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.181 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.182 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.182 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.265 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.268 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3220293337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:49 compute-0 nova_compute[259550]: 2025-10-07 14:33:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.030 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Successfully created port: ea75e1fb-474b-484e-9a52-73939d17da1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.135 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.136 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.162 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.238 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.238 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.244 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.244 2 INFO nova.compute.claims [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:33:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.331 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.366 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.366 2 DEBUG nova.compute.provider_tree [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.386 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.412 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.473 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:50 compute-0 ceph-mon[74295]: pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.878 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:50 compute-0 nova_compute[259550]: 2025-10-07 14:33:50.943 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.041 2 DEBUG nova.objects.instance [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718782002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.105 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.110 2 DEBUG nova.compute.provider_tree [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.206 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Ensure instance console log exists: /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.208 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.220 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Successfully updated port: ea75e1fb-474b-484e-9a52-73939d17da1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.226 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.259 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.260 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.312 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.313 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.342 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.360 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.458 2 DEBUG nova.compute.manager [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.459 2 DEBUG nova.compute.manager [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.459 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.461 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.462 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.462 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating image(s)
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.482 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.505 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.525 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.528 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.599 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.600 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.601 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.601 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.624 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.627 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.698 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:33:51 compute-0 ceph-mon[74295]: pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:33:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/718782002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.939 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:51 compute-0 nova_compute[259550]: 2025-10-07 14:33:51.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.002 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.093 2 DEBUG nova.objects.instance [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.216 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.217 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Ensure instance console log exists: /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.217 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.218 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.218 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 45 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 24 KiB/s wr, 5 op/s
Oct 07 14:33:52 compute-0 nova_compute[259550]: 2025-10-07 14:33:52.296 2 DEBUG nova.policy [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:33:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:33:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.062 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847618.0612252, 0af0082a-1adc-40e7-b254-88e03182e802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.063 2 INFO nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Stopped (Lifecycle Event)
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.250 2 DEBUG nova.compute.manager [None req-05363ca4-7460-44cc-a1f3-bc7cd46d5479 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.794 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:53 compute-0 ceph-mon[74295]: pgmap v2143: 305 pgs: 305 active+clean; 45 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 24 KiB/s wr, 5 op/s
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.898 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.899 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance network_info: |[{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.899 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.900 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.904 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start _get_guest_xml network_info=[{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.911 2 WARNING nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.916 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.917 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.920 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.921 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.922 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.922 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.923 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.923 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:33:53 compute-0 nova_compute[259550]: 2025-10-07 14:33:53.929 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 97 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Oct 07 14:33:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:33:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089750526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.370 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.393 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.397 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2089750526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:33:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:33:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585117156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.845 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.848 2 DEBUG nova.virt.libvirt.vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:48Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.848 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.850 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.852 2 DEBUG nova.objects.instance [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:33:54 compute-0 nova_compute[259550]: 2025-10-07 14:33:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:55 compute-0 podman[375855]: 2025-10-07 14:33:55.07282427 +0000 UTC m=+0.062479600 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:33:55 compute-0 podman[375856]: 2025-10-07 14:33:55.093857952 +0000 UTC m=+0.081045886 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.313 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <uuid>a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</uuid>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <name>instance-0000006c</name>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-797854475</nova:name>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:33:53</nova:creationTime>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <nova:port uuid="ea75e1fb-474b-484e-9a52-73939d17da1b">
Oct 07 14:33:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="serial">a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="uuid">a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk">
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config">
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:33:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:57:91:21"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <target dev="tapea75e1fb-47"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/console.log" append="off"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:33:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:33:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:33:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:33:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:33:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.314 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Preparing to wait for external event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.316 2 DEBUG nova.virt.libvirt.vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:48Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.316 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.317 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.317 2 DEBUG os_vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea75e1fb-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea75e1fb-47, col_values=(('external_ids', {'iface-id': 'ea75e1fb-474b-484e-9a52-73939d17da1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:91:21', 'vm-uuid': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:55 compute-0 NetworkManager[44949]: <info>  [1759847635.3241] manager: (tapea75e1fb-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.331 2 INFO os_vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47')
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.705 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:55 compute-0 nova_compute[259550]: 2025-10-07 14:33:55.741 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Successfully created port: 13fb3ec4-0080-406c-8cea-6620016c0513 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:33:55 compute-0 ceph-mon[74295]: pgmap v2144: 305 pgs: 305 active+clean; 97 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Oct 07 14:33:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1585117156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:33:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366465579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.134 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:57:91:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.630 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Using config drive
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.651 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.784 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:33:56 compute-0 nova_compute[259550]: 2025-10-07 14:33:56.785 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.015 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.015 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:33:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2366465579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.144 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.145 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3760MB free_disk=59.96270751953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.146 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.293 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.537 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating config drive at /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.541 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tyabgrq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.672 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.704 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tyabgrq" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.734 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.739 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:33:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:33:57 compute-0 nova_compute[259550]: 2025-10-07 14:33:57.999 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Successfully updated port: 13fb3ec4-0080-406c-8cea-6620016c0513 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:33:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:33:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304762170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.157 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.163 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:33:58 compute-0 ceph-mon[74295]: pgmap v2145: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.201 2 DEBUG nova.compute.manager [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.201 2 DEBUG nova.compute.manager [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.202 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.256 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.259 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.260 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deleting local config drive /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config because it was imported into RBD.
Oct 07 14:33:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:33:58 compute-0 kernel: tapea75e1fb-47: entered promiscuous mode
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.3057] manager: (tapea75e1fb-47): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Oct 07 14:33:58 compute-0 ovn_controller[151684]: 2025-10-07T14:33:58Z|01087|binding|INFO|Claiming lport ea75e1fb-474b-484e-9a52-73939d17da1b for this chassis.
Oct 07 14:33:58 compute-0 ovn_controller[151684]: 2025-10-07T14:33:58Z|01088|binding|INFO|ea75e1fb-474b-484e-9a52-73939d17da1b: Claiming fa:16:3e:57:91:21 10.100.0.14
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.320 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 bound to our chassis
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.334 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62fbd5f1-259c-4f40-9743-20162d41c9cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.336 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap256ae6d5-61 in ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:33:58 compute-0 systemd-udevd[376013]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.338 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap256ae6d5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e5cf2d-3a6d-4f88-b187-0b961e016d89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63444818-c62b-4d46-acef-9d0fcf5262da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 systemd-machined[214580]: New machine qemu-135-instance-0000006c.
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.350 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f21690eb-cda3-4095-9bea-320ddb0e66cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.3557] device (tapea75e1fb-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.3563] device (tapea75e1fb-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.356 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.356 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:58 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.377 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc06042-e078-47e0-a027-8ab823788bb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 ovn_controller[151684]: 2025-10-07T14:33:58Z|01089|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b ovn-installed in OVS
Oct 07 14:33:58 compute-0 ovn_controller[151684]: 2025-10-07T14:33:58Z|01090|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b up in Southbound
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.409 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[63faec56-a293-4eb4-ad53-fec3964f8ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68a0ab1e-a647-440a-b3c9-9a7be48caba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.4163] manager: (tap256ae6d5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Oct 07 14:33:58 compute-0 systemd-udevd[376016]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.443 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[14608631-c5d2-45cd-bd1d-3837c00ac802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.448 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02f8b1d7-389a-4605-bcce-8444bd93e649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.4784] device (tap256ae6d5-60): carrier: link connected
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.484 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[25eab7b9-47db-4a74-bc7f-475f7dae5a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.487 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b54244d-b3c4-47f5-acde-1e4e564eba59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822204, 'reachable_time': 30648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376045, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.526 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb00c9-d0c0-478d-82c1-28fed2707f43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:7e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822204, 'tstamp': 822204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376046, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc9048b-dc78-469a-a679-fdadff8aaa5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822204, 'reachable_time': 30648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376047, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.574 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03be55-c1dd-4d0c-b4c7-ec8014b5e04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cefaa19-e07a-4e3e-a1ad-51d3c5de7c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap256ae6d5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 NetworkManager[44949]: <info>  [1759847638.6504] manager: (tap256ae6d5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct 07 14:33:58 compute-0 kernel: tap256ae6d5-60: entered promiscuous mode
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.654 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap256ae6d5-60, col_values=(('external_ids', {'iface-id': 'fcf1dc99-f823-4d28-9074-21a7bced7a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:33:58 compute-0 ovn_controller[151684]: 2025-10-07T14:33:58Z|01091|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.669 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a80a7215-e745-4d82-b87d-ffed3d638a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.671 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:33:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.672 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'env', 'PROCESS_TAG=haproxy-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/256ae6d5-60ea-4d02-8c43-2f55874712c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.903 2 DEBUG nova.compute.manager [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.904 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.904 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.905 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:33:58 compute-0 nova_compute[259550]: 2025-10-07 14:33:58.905 2 DEBUG nova.compute.manager [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Processing event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:33:59 compute-0 podman[376097]: 2025-10-07 14:33:59.036186943 +0000 UTC m=+0.059410798 container create 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:33:59 compute-0 systemd[1]: Started libpod-conmon-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope.
Oct 07 14:33:59 compute-0 podman[376097]: 2025-10-07 14:33:59.005263487 +0000 UTC m=+0.028487372 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:33:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca578f989b6450d529d2e7b4f782d731435dda711704c70dbd7335068f20d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:33:59 compute-0 podman[376097]: 2025-10-07 14:33:59.137913011 +0000 UTC m=+0.161136866 container init 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:33:59 compute-0 podman[376097]: 2025-10-07 14:33:59.143726797 +0000 UTC m=+0.166950662 container start 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:33:59 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : New worker (376142) forked
Oct 07 14:33:59 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : Loading success.
Oct 07 14:33:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2304762170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.352 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.407 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.407 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.476 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:33:59 compute-0 auditd[701]: Audit daemon rotating log files
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.553 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.5531995, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Started (Lifecycle Event)
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.557 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.563 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.568 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance spawned successfully.
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.568 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.664 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.669 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.670 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.671 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.672 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.673 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.674 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.760 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.761 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.553405, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.761 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Paused (Lifecycle Event)
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.823 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.827 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.5611787, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.827 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Resumed (Lifecycle Event)
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.848 2 INFO nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 10.81 seconds to spawn the instance on the hypervisor.
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.849 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.890 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:33:59 compute-0 nova_compute[259550]: 2025-10-07 14:33:59.893 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:00 compute-0 nova_compute[259550]: 2025-10-07 14:34:00.003 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:00 compute-0 nova_compute[259550]: 2025-10-07 14:34:00.055 2 INFO nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 12.10 seconds to build instance.
Oct 07 14:34:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:00 compute-0 nova_compute[259550]: 2025-10-07 14:34:00.190 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:00 compute-0 ceph-mon[74295]: pgmap v2146: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:34:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 07 14:34:00 compute-0 nova_compute[259550]: 2025-10-07 14:34:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.014 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.042 2 DEBUG nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.043 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.043 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.044 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.044 2 DEBUG nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.044 2 WARNING nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.063 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.064 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance network_info: |[{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.064 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.065 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.068 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start _get_guest_xml network_info=[{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.072 2 WARNING nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.079 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.080 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.089 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.090 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.091 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.091 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.092 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.092 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.093 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.093 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.095 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.095 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.096 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.098 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866524971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.640 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.675 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:01 compute-0 nova_compute[259550]: 2025-10-07 14:34:01.684 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393270145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.174 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.176 2 DEBUG nova.virt.libvirt.vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:51Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.177 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.178 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.179 2 DEBUG nova.objects.instance [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.225 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <uuid>109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</uuid>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <name>instance-0000006d</name>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-1213898697</nova:name>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:34:01</nova:creationTime>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <nova:port uuid="13fb3ec4-0080-406c-8cea-6620016c0513">
Oct 07 14:34:02 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <system>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="serial">109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="uuid">109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </system>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <os>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </os>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <features>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </features>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk">
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config">
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a7:48:df"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <target dev="tap13fb3ec4-00"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/console.log" append="off"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <video>
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </video>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:34:02 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:34:02 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:34:02 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:34:02 compute-0 nova_compute[259550]: </domain>
Oct 07 14:34:02 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Preparing to wait for external event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.228 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.228 2 DEBUG nova.virt.libvirt.vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:51Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.229 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.231 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG os_vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13fb3ec4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13fb3ec4-00, col_values=(('external_ids', {'iface-id': '13fb3ec4-0080-406c-8cea-6620016c0513', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:48:df', 'vm-uuid': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:02 compute-0 ceph-mon[74295]: pgmap v2147: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1866524971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3393270145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:02 compute-0 NetworkManager[44949]: <info>  [1759847642.2427] manager: (tap13fb3ec4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.249 2 INFO os_vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00')
Oct 07 14:34:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:a7:48:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.337 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Using config drive
Oct 07 14:34:02 compute-0 nova_compute[259550]: 2025-10-07 14:34:02.366 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.297 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating config drive at /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.302 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpamtu_6w6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.445 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpamtu_6w6" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.469 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.472 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.633 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.634 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deleting local config drive /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config because it was imported into RBD.
Oct 07 14:34:03 compute-0 kernel: tap13fb3ec4-00: entered promiscuous mode
Oct 07 14:34:03 compute-0 NetworkManager[44949]: <info>  [1759847643.7051] manager: (tap13fb3ec4-00): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:03 compute-0 ovn_controller[151684]: 2025-10-07T14:34:03Z|01092|binding|INFO|Claiming lport 13fb3ec4-0080-406c-8cea-6620016c0513 for this chassis.
Oct 07 14:34:03 compute-0 ovn_controller[151684]: 2025-10-07T14:34:03Z|01093|binding|INFO|13fb3ec4-0080-406c-8cea-6620016c0513: Claiming fa:16:3e:a7:48:df 10.100.0.13
Oct 07 14:34:03 compute-0 systemd-udevd[376285]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:34:03 compute-0 NetworkManager[44949]: <info>  [1759847643.7493] device (tap13fb3ec4-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:34:03 compute-0 NetworkManager[44949]: <info>  [1759847643.7513] device (tap13fb3ec4-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:34:03 compute-0 systemd-machined[214580]: New machine qemu-136-instance-0000006d.
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:03 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Oct 07 14:34:03 compute-0 ovn_controller[151684]: 2025-10-07T14:34:03Z|01094|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 ovn-installed in OVS
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:03 compute-0 ovn_controller[151684]: 2025-10-07T14:34:03Z|01095|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 up in Southbound
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.834 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:48:df 10.100.0.13'], port_security=['fa:16:3e:a7:48:df 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7aecf35-a2bc-4233-a6ec-98effd9bc888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf60a0a-2571-43f1-914f-0f00c5d191c5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=13fb3ec4-0080-406c-8cea-6620016c0513) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.835 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 13fb3ec4-0080-406c-8cea-6620016c0513 in datapath 8e6e03c6-002b-464f-aea7-5ae708e3e5dc bound to our chassis
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.836 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e6e03c6-002b-464f-aea7-5ae708e3e5dc
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0be2b34-ccfa-48ec-b458-90df9f9ac3c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.852 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e6e03c6-01 in ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.854 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e6e03c6-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.854 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a3fd6a-dbe2-4e65-a9ed-8b11777846b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.855 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01057f-6fbe-4a15-a1f0-1684b42c9490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.868 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8a9793-d477-413c-b6fa-336023a1c3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa066717-711e-4aa4-bf4c-138a70dc9a78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.928 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[33c02fc2-1cb6-4c7a-893d-58c0a780046f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.935 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af0eb095-4340-487c-9acd-4d8753b4fe11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 NetworkManager[44949]: <info>  [1759847643.9366] manager: (tap8e6e03c6-00): new Veth device (/org/freedesktop/NetworkManager/Devices/444)
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.971 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2af032f4-c91d-49e1-8179-191e4d0ff7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.974 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad5a470-0b05-4ea2-930f-52abfaa8f987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:03 compute-0 nova_compute[259550]: 2025-10-07 14:34:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:03 compute-0 NetworkManager[44949]: <info>  [1759847643.9941] device (tap8e6e03c6-00): carrier: link connected
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.999 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c49c91-01c2-4234-aa9b-3aa4781f68dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3577a473-4567-4e45-a36d-df1594d95730]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e6e03c6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:49:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822755, 'reachable_time': 37564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376322, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.033 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85a29652-d4b3-4ab6-b0a6-7bd7821c1ded]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:49ff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822755, 'tstamp': 822755}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376323, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.050 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd442af-4454-4816-b12b-a7954bb6b0d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e6e03c6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:49:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822755, 'reachable_time': 37564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376324, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89b99713-bb82-48cc-b558-fb79c9c01ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.149 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[997a9a69-0ff3-4954-abee-b80935cb2f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e6e03c6-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.153 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e6e03c6-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:04 compute-0 kernel: tap8e6e03c6-00: entered promiscuous mode
Oct 07 14:34:04 compute-0 NetworkManager[44949]: <info>  [1759847644.1556] manager: (tap8e6e03c6-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.157 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e6e03c6-00, col_values=(('external_ids', {'iface-id': '3f0bb527-e0e8-409d-b11f-0c1205bec67a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:04 compute-0 ovn_controller[151684]: 2025-10-07T14:34:04Z|01096|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.174 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.176 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7350d25-e13f-4a3f-94fb-dc016af13db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.177 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8e6e03c6-002b-464f-aea7-5ae708e3e5dc
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8e6e03c6-002b-464f-aea7-5ae708e3e5dc
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:34:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.179 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'env', 'PROCESS_TAG=haproxy-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:34:04 compute-0 ceph-mon[74295]: pgmap v2148: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Oct 07 14:34:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 125 op/s
Oct 07 14:34:04 compute-0 podman[376398]: 2025-10-07 14:34:04.544255861 +0000 UTC m=+0.053094730 container create 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.593 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.593 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:04 compute-0 systemd[1]: Started libpod-conmon-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope.
Oct 07 14:34:04 compute-0 podman[376398]: 2025-10-07 14:34:04.516297444 +0000 UTC m=+0.025136333 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.618 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce9d977b3319507425402d4f816a2d462fb60a6cead3d48e6a5a2ead545ba35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:04 compute-0 podman[376398]: 2025-10-07 14:34:04.650350086 +0000 UTC m=+0.159188985 container init 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:34:04 compute-0 podman[376398]: 2025-10-07 14:34:04.657702232 +0000 UTC m=+0.166541101 container start 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:34:04 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : New worker (376419) forked
Oct 07 14:34:04 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : Loading success.
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.828 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847644.827904, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.828 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Started (Lifecycle Event)
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.874 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.879 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847644.8287969, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.880 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Paused (Lifecycle Event)
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.924 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:04 compute-0 nova_compute[259550]: 2025-10-07 14:34:04.926 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:05 compute-0 nova_compute[259550]: 2025-10-07 14:34:05.027 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:05 compute-0 ceph-mon[74295]: pgmap v2149: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 125 op/s
Oct 07 14:34:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.429 2 DEBUG nova.compute.manager [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.431 2 DEBUG nova.compute.manager [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Processing event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.431 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.434 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847646.434416, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.435 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Resumed (Lifecycle Event)
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.438 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.444 2 INFO nova.virt.libvirt.driver [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance spawned successfully.
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.445 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.490 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.500 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.501 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.501 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.502 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.502 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.503 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.596 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.871 2 INFO nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 15.41 seconds to spawn the instance on the hypervisor.
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.872 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:06 compute-0 nova_compute[259550]: 2025-10-07 14:34:06.978 2 INFO nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 16.77 seconds to build instance.
Oct 07 14:34:07 compute-0 nova_compute[259550]: 2025-10-07 14:34:07.011 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:07 compute-0 nova_compute[259550]: 2025-10-07 14:34:07.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:07 compute-0 nova_compute[259550]: 2025-10-07 14:34:07.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:07 compute-0 ceph-mon[74295]: pgmap v2150: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Oct 07 14:34:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:08 compute-0 ovn_controller[151684]: 2025-10-07T14:34:08Z|01097|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct 07 14:34:08 compute-0 ovn_controller[151684]: 2025-10-07T14:34:08Z|01098|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:08 compute-0 NetworkManager[44949]: <info>  [1759847648.1765] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct 07 14:34:08 compute-0 NetworkManager[44949]: <info>  [1759847648.1783] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct 07 14:34:08 compute-0 ovn_controller[151684]: 2025-10-07T14:34:08Z|01099|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct 07 14:34:08 compute-0 ovn_controller[151684]: 2025-10-07T14:34:08Z|01100|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.605 2 DEBUG nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.606 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.606 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.608 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.610 2 DEBUG nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] No waiting events found dispatching network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:08 compute-0 nova_compute[259550]: 2025-10-07 14:34:08.610 2 WARNING nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received unexpected event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 for instance with vm_state active and task_state None.
Oct 07 14:34:09 compute-0 podman[376429]: 2025-10-07 14:34:09.065002927 +0000 UTC m=+0.054595750 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 07 14:34:09 compute-0 podman[376430]: 2025-10-07 14:34:09.110442901 +0000 UTC m=+0.097790874 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:34:09 compute-0 ceph-mon[74295]: pgmap v2151: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 07 14:34:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct 07 14:34:11 compute-0 ceph-mon[74295]: pgmap v2152: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct 07 14:34:11 compute-0 nova_compute[259550]: 2025-10-07 14:34:11.560 2 DEBUG nova.compute.manager [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:11 compute-0 nova_compute[259550]: 2025-10-07 14:34:11.561 2 DEBUG nova.compute.manager [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:11 compute-0 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:11 compute-0 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:11 compute-0 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:12 compute-0 nova_compute[259550]: 2025-10-07 14:34:12.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:12 compute-0 nova_compute[259550]: 2025-10-07 14:34:12.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Oct 07 14:34:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.121 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.122 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.143 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:13 compute-0 ovn_controller[151684]: 2025-10-07T14:34:13Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:91:21 10.100.0.14
Oct 07 14:34:13 compute-0 ovn_controller[151684]: 2025-10-07T14:34:13Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:91:21 10.100.0.14
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:13 compute-0 ceph-mon[74295]: pgmap v2153: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.822 2 DEBUG nova.compute.manager [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.823 2 DEBUG nova.compute.manager [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.823 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.824 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:13 compute-0 nova_compute[259550]: 2025-10-07 14:34:13.824 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 07 14:34:15 compute-0 ceph-mon[74295]: pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 07 14:34:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 07 14:34:16 compute-0 nova_compute[259550]: 2025-10-07 14:34:16.275 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:16 compute-0 nova_compute[259550]: 2025-10-07 14:34:16.276 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:16 compute-0 nova_compute[259550]: 2025-10-07 14:34:16.356 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:17 compute-0 nova_compute[259550]: 2025-10-07 14:34:17.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:17 compute-0 nova_compute[259550]: 2025-10-07 14:34:17.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:17 compute-0 ceph-mon[74295]: pgmap v2155: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 07 14:34:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.055 2 INFO nova.compute.manager [None req-d8b16fbf-61ea-4af6-8d5e-54be8c0ae1d4 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Get console output
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.060 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:34:19 compute-0 ovn_controller[151684]: 2025-10-07T14:34:19Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:48:df 10.100.0.13
Oct 07 14:34:19 compute-0 ovn_controller[151684]: 2025-10-07T14:34:19Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:48:df 10.100.0.13
Oct 07 14:34:19 compute-0 ceph-mon[74295]: pgmap v2156: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.540 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.540 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.541 2 INFO nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Rebooting instance
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.567 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.568 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:19 compute-0 nova_compute[259550]: 2025-10-07 14:34:19.568 2 DEBUG nova.network.neutron [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:34:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 180 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Oct 07 14:34:20 compute-0 nova_compute[259550]: 2025-10-07 14:34:20.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:21 compute-0 ceph-mon[74295]: pgmap v2157: 305 pgs: 305 active+clean; 180 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Oct 07 14:34:22 compute-0 nova_compute[259550]: 2025-10-07 14:34:22.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:22 compute-0 nova_compute[259550]: 2025-10-07 14:34:22.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 196 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:34:22
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'volumes', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 07 14:34:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:34:22 compute-0 nova_compute[259550]: 2025-10-07 14:34:22.892 2 DEBUG nova.network.neutron [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:22 compute-0 nova_compute[259550]: 2025-10-07 14:34:22.942 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:22 compute-0 nova_compute[259550]: 2025-10-07 14:34:22.943 2 DEBUG nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:34:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:34:23 compute-0 ceph-mon[74295]: pgmap v2158: 305 pgs: 305 active+clean; 196 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct 07 14:34:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 07 14:34:25 compute-0 ceph-mon[74295]: pgmap v2159: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 07 14:34:25 compute-0 kernel: tapea75e1fb-47 (unregistering): left promiscuous mode
Oct 07 14:34:25 compute-0 NetworkManager[44949]: <info>  [1759847665.8778] device (tapea75e1fb-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:34:25 compute-0 nova_compute[259550]: 2025-10-07 14:34:25.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:25 compute-0 ovn_controller[151684]: 2025-10-07T14:34:25Z|01101|binding|INFO|Releasing lport ea75e1fb-474b-484e-9a52-73939d17da1b from this chassis (sb_readonly=0)
Oct 07 14:34:25 compute-0 ovn_controller[151684]: 2025-10-07T14:34:25Z|01102|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b down in Southbound
Oct 07 14:34:25 compute-0 ovn_controller[151684]: 2025-10-07T14:34:25Z|01103|binding|INFO|Removing iface tapea75e1fb-47 ovn-installed in OVS
Oct 07 14:34:25 compute-0 nova_compute[259550]: 2025-10-07 14:34:25.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:25 compute-0 nova_compute[259550]: 2025-10-07 14:34:25.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:25 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 07 14:34:25 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 13.945s CPU time.
Oct 07 14:34:25 compute-0 systemd-machined[214580]: Machine qemu-135-instance-0000006c terminated.
Oct 07 14:34:25 compute-0 podman[376472]: 2025-10-07 14:34:25.972155304 +0000 UTC m=+0.071625565 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:34:26 compute-0 podman[376475]: 2025-10-07 14:34:26.003554313 +0000 UTC m=+0.102738957 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.002 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.004 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 unbound from our chassis
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.006 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 256ae6d5-60ea-4d02-8c43-2f55874712c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf3ce05-dd5d-4fe0-952c-12820400bc74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.008 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace which is not needed anymore
Oct 07 14:34:26 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : haproxy version is 2.8.14-c23fe91
Oct 07 14:34:26 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : path to executable is /usr/sbin/haproxy
Oct 07 14:34:26 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [WARNING]  (376140) : Exiting Master process...
Oct 07 14:34:26 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [ALERT]    (376140) : Current worker (376142) exited with code 143 (Terminated)
Oct 07 14:34:26 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [WARNING]  (376140) : All workers exited. Exiting... (0)
Oct 07 14:34:26 compute-0 systemd[1]: libpod-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope: Deactivated successfully.
Oct 07 14:34:26 compute-0 podman[376533]: 2025-10-07 14:34:26.137980075 +0000 UTC m=+0.046728360 container died 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f-userdata-shm.mount: Deactivated successfully.
Oct 07 14:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cca578f989b6450d529d2e7b4f782d731435dda711704c70dbd7335068f20d2-merged.mount: Deactivated successfully.
Oct 07 14:34:26 compute-0 podman[376533]: 2025-10-07 14:34:26.194390072 +0000 UTC m=+0.103138337 container cleanup 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:34:26 compute-0 systemd[1]: libpod-conmon-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope: Deactivated successfully.
Oct 07 14:34:26 compute-0 podman[376573]: 2025-10-07 14:34:26.255043633 +0000 UTC m=+0.040395831 container remove 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.262 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23e44a7c-cdd7-4e56-9751-e155fe8ec9da]: (4, ('Tue Oct  7 02:34:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f)\n3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f\nTue Oct  7 02:34:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f)\n3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[026a86e0-34a8-470a-9e5d-53e3ff64f38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.264 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 kernel: tap256ae6d5-60: left promiscuous mode
Oct 07 14:34:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b277ba3-e92e-4927-bbc2-bb13f655fd9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.322 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[316d32c0-5f3f-4699-88d8-75323965f207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.323 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3335b2d-c4dd-485f-af45-a30a5c407e8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad07cdac-4eda-4bb4-90dc-f27186d010aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822196, 'reachable_time': 39177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376592, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.342 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.342 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c8191e19-6e36-49ce-b3b6-bb9aeedcafcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d256ae6d5\x2d60ea\x2d4d02\x2d8c43\x2d2f55874712c6.mount: Deactivated successfully.
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.488 2 INFO nova.virt.libvirt.driver [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance shutdown successfully.
Oct 07 14:34:26 compute-0 kernel: tapea75e1fb-47: entered promiscuous mode
Oct 07 14:34:26 compute-0 systemd-udevd[376496]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.5430] manager: (tapea75e1fb-47): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.545 2 INFO nova.compute.manager [None req-f8281924-01f4-4df5-ae8a-b1ace5b13103 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Get console output
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 ovn_controller[151684]: 2025-10-07T14:34:26Z|01104|binding|INFO|Claiming lport ea75e1fb-474b-484e-9a52-73939d17da1b for this chassis.
Oct 07 14:34:26 compute-0 ovn_controller[151684]: 2025-10-07T14:34:26Z|01105|binding|INFO|ea75e1fb-474b-484e-9a52-73939d17da1b: Claiming fa:16:3e:57:91:21 10.100.0.14
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.5539] device (tapea75e1fb-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.5546] device (tapea75e1fb-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.556 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:34:26 compute-0 ovn_controller[151684]: 2025-10-07T14:34:26Z|01106|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b ovn-installed in OVS
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 systemd-machined[214580]: New machine qemu-137-instance-0000006c.
Oct 07 14:34:26 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006c.
Oct 07 14:34:26 compute-0 ovn_controller[151684]: 2025-10-07T14:34:26Z|01107|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b up in Southbound
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.616 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.618 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 bound to our chassis
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.620 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc932f76-f444-4981-ab4b-8d978c84a339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.637 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap256ae6d5-61 in ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.638 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap256ae6d5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5cda4a-c94c-454c-8e65-75b36d8384d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[daeab1da-318d-4bb9-9798-55385d0a0cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.658 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[58805da6-a281-44c4-b38e-f641622c2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98994e63-3b3e-463f-8cd5-d0e67af4e89a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.706 2 DEBUG nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.708 2 WARNING nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state reboot_started.
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.712 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77298a6d-2e02-4ada-8a78-e4fc3e1caae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf052e8b-9aaf-4a15-a3aa-37de77724813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.7202] manager: (tap256ae6d5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.755 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2b328f04-1134-4eee-9654-1051cdb61c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.759 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[07c16194-9584-41f4-8006-07ace51c2d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.7912] device (tap256ae6d5-60): carrier: link connected
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1518c1-4a26-4c6c-a1b0-4d11494e35ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42422fa3-c9b4-4501-9057-cebc34d90709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825035, 'reachable_time': 38957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376656, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.834 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95ac88b6-3877-4b93-b0d3-08bca70ed482]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:7e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 825035, 'tstamp': 825035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376674, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.856 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20b8bcc3-9c14-4b1c-85a4-bd8ca4e40abc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825035, 'reachable_time': 38957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376676, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a176c53a-0ae3-4c1e-af3e-1ee46f56c06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee40e848-2926-4651-8edd-4751d5363c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.975 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.975 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap256ae6d5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 kernel: tap256ae6d5-60: entered promiscuous mode
Oct 07 14:34:26 compute-0 NetworkManager[44949]: <info>  [1759847666.9795] manager: (tap256ae6d5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.983 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap256ae6d5-60, col_values=(('external_ids', {'iface-id': 'fcf1dc99-f823-4d28-9074-21a7bced7a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:26 compute-0 nova_compute[259550]: 2025-10-07 14:34:26.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:26 compute-0 ovn_controller[151684]: 2025-10-07T14:34:26Z|01108|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.003 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a32c1930-e546-4aeb-a406-98dae6808657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.005 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:34:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.013 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'env', 'PROCESS_TAG=haproxy-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/256ae6d5-60ea-4d02-8c43-2f55874712c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.378 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.379 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847667.3778539, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Resumed (Lifecycle Event)
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.384 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance running successfully.
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.385 2 INFO nova.virt.libvirt.driver [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance soft rebooted successfully.
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.385 2 DEBUG nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:27 compute-0 podman[376713]: 2025-10-07 14:34:27.449066808 +0000 UTC m=+0.058385902 container create c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:34:27 compute-0 systemd[1]: Started libpod-conmon-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope.
Oct 07 14:34:27 compute-0 podman[376713]: 2025-10-07 14:34:27.416111727 +0000 UTC m=+0.025430811 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.525 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.529 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab2a485858f7d7c59681ecb143d683598d5f4b822096692bfe1648f09b1a197/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:27 compute-0 ceph-mon[74295]: pgmap v2160: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct 07 14:34:27 compute-0 podman[376713]: 2025-10-07 14:34:27.584681201 +0000 UTC m=+0.194000305 container init c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:34:27 compute-0 podman[376713]: 2025-10-07 14:34:27.592448118 +0000 UTC m=+0.201767192 container start c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:34:27 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : New worker (376734) forked
Oct 07 14:34:27 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : Loading success.
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.846 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847667.3782148, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:27 compute-0 nova_compute[259550]: 2025-10-07 14:34:27.847 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Started (Lifecycle Event)
Oct 07 14:34:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.205 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.296 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.301 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.830 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.831 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.832 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.832 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.833 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.833 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.833 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.835 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.836 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.837 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.837 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.838 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.839 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.839 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.840 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.841 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.842 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:28 compute-0 nova_compute[259550]: 2025-10-07 14:34:28.842 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct 07 14:34:29 compute-0 ceph-mon[74295]: pgmap v2161: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 07 14:34:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 07 14:34:30 compute-0 nova_compute[259550]: 2025-10-07 14:34:30.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:31.352 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:31 compute-0 nova_compute[259550]: 2025-10-07 14:34:31.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:31.354 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:34:31 compute-0 ceph-mon[74295]: pgmap v2162: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct 07 14:34:32 compute-0 nova_compute[259550]: 2025-10-07 14:34:32.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:32 compute-0 nova_compute[259550]: 2025-10-07 14:34:32.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 985 KiB/s wr, 95 op/s
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:34:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:34:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:34:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:34:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:34:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:34:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:33.355 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:33 compute-0 ceph-mon[74295]: pgmap v2163: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 985 KiB/s wr, 95 op/s
Oct 07 14:34:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:34:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:34:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 75 op/s
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.220 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.221 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.241 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.310 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.310 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.317 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.318 2 INFO nova.compute.claims [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.454 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:35 compute-0 ceph-mon[74295]: pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 75 op/s
Oct 07 14:34:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:34:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291774645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.944 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.950 2 DEBUG nova.compute.provider_tree [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.967 2 DEBUG nova.scheduler.client.report [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.988 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:35 compute-0 nova_compute[259550]: 2025-10-07 14:34:35.988 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.033 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.034 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.054 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.069 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.153 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.154 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.155 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating image(s)
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.176 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.199 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.225 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.229 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.263 2 DEBUG nova.policy [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:34:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 73 op/s
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.303 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.304 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.305 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.305 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.326 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.331 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 df052dd5-fecd-4dd3-be36-4becc3f9f318_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.667 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 df052dd5-fecd-4dd3-be36-4becc3f9f318_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/291774645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.743 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.887 2 DEBUG nova.objects.instance [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.905 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.906 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Ensure instance console log exists: /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.907 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.908 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:36 compute-0 nova_compute[259550]: 2025-10-07 14:34:36.909 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:37 compute-0 nova_compute[259550]: 2025-10-07 14:34:37.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:37 compute-0 nova_compute[259550]: 2025-10-07 14:34:37.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:37 compute-0 ceph-mon[74295]: pgmap v2165: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 73 op/s
Oct 07 14:34:37 compute-0 nova_compute[259550]: 2025-10-07 14:34:37.719 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully created port: 72db4fd3-8171-42af-9801-69a061614ccc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:34:37 compute-0 sudo[376933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:37 compute-0 sudo[376933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:37 compute-0 sudo[376933]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:37 compute-0 sudo[376958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:34:37 compute-0 sudo[376958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:37 compute-0 sudo[376958]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:37 compute-0 sudo[376983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:37 compute-0 sudo[376983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:37 compute-0 sudo[376983]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:37 compute-0 sudo[377008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:34:37 compute-0 sudo[377008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:38 compute-0 sudo[377008]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:34:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:34:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 71 op/s
Oct 07 14:34:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:38 compute-0 sudo[377053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:38 compute-0 sudo[377053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:38 compute-0 sudo[377053]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:38 compute-0 sudo[377078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:34:38 compute-0 sudo[377078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:38 compute-0 sudo[377078]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:38 compute-0 sudo[377103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:38 compute-0 sudo[377103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:38 compute-0 sudo[377103]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:38 compute-0 sudo[377128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:34:38 compute-0 sudo[377128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:38 compute-0 ovn_controller[151684]: 2025-10-07T14:34:38Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:91:21 10.100.0.14
Oct 07 14:34:39 compute-0 sudo[377128]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:39 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a996b5e8-9450-48fa-9bd2-68eaa57faeb3 does not exist
Oct 07 14:34:39 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 60d9a18e-7b4c-425f-9b0b-4c2ecc1a68fc does not exist
Oct 07 14:34:39 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c6224891-a622-4004-beed-d992f66da967 does not exist
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:34:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:34:39 compute-0 sudo[377184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:39 compute-0 sudo[377184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:39 compute-0 sudo[377184]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:39 compute-0 podman[377208]: 2025-10-07 14:34:39.275886965 +0000 UTC m=+0.064177566 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:34:39 compute-0 sudo[377221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:34:39 compute-0 sudo[377221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:34:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:34:39 compute-0 sudo[377221]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:39 compute-0 podman[377209]: 2025-10-07 14:34:39.333703039 +0000 UTC m=+0.122096923 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:39 compute-0 sudo[377275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:39 compute-0 sudo[377275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:39 compute-0 sudo[377275]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:39 compute-0 sudo[377303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:34:39 compute-0 sudo[377303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:39 compute-0 nova_compute[259550]: 2025-10-07 14:34:39.516 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully created port: 87dbfe27-9436-4e21-a648-df77ddfec6ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.753814165 +0000 UTC m=+0.045422684 container create 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:34:39 compute-0 systemd[1]: Started libpod-conmon-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope.
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.729894546 +0000 UTC m=+0.021503095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.851212988 +0000 UTC m=+0.142821547 container init 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.861052781 +0000 UTC m=+0.152661300 container start 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:39 compute-0 zen_jepsen[377384]: 167 167
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.866860716 +0000 UTC m=+0.158469235 container attach 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:34:39 compute-0 systemd[1]: libpod-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope: Deactivated successfully.
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.869008843 +0000 UTC m=+0.160617392 container died 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-abfb868c2bd6b450ccbea031f7918d119a5b3c656fdbbbb302c02d4b8b271b9f-merged.mount: Deactivated successfully.
Oct 07 14:34:39 compute-0 nova_compute[259550]: 2025-10-07 14:34:39.900 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:39 compute-0 nova_compute[259550]: 2025-10-07 14:34:39.902 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:39 compute-0 podman[377368]: 2025-10-07 14:34:39.910962314 +0000 UTC m=+0.202570853 container remove 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:34:39 compute-0 systemd[1]: libpod-conmon-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope: Deactivated successfully.
Oct 07 14:34:39 compute-0 nova_compute[259550]: 2025-10-07 14:34:39.929 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.018 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.019 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.028 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.028 2 INFO nova.compute.claims [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:34:40 compute-0 podman[377408]: 2025-10-07 14:34:40.112249072 +0000 UTC m=+0.051599110 container create 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 07 14:34:40 compute-0 systemd[1]: Started libpod-conmon-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope.
Oct 07 14:34:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:40 compute-0 podman[377408]: 2025-10-07 14:34:40.090013028 +0000 UTC m=+0.029363096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:40 compute-0 podman[377408]: 2025-10-07 14:34:40.204899238 +0000 UTC m=+0.144249306 container init 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:40 compute-0 podman[377408]: 2025-10-07 14:34:40.21360428 +0000 UTC m=+0.152954328 container start 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:34:40 compute-0 podman[377408]: 2025-10-07 14:34:40.218020639 +0000 UTC m=+0.157370697 container attach 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.226 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 233 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 129 op/s
Oct 07 14:34:40 compute-0 ceph-mon[74295]: pgmap v2166: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 71 op/s
Oct 07 14:34:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:34:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/34419485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.682 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.689 2 DEBUG nova.compute.provider_tree [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:34:40 compute-0 nova_compute[259550]: 2025-10-07 14:34:40.857 2 DEBUG nova.scheduler.client.report [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.200 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.201 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:34:41 compute-0 wizardly_shamir[377424]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:34:41 compute-0 wizardly_shamir[377424]: --> relative data size: 1.0
Oct 07 14:34:41 compute-0 wizardly_shamir[377424]: --> All data devices are unavailable
Oct 07 14:34:41 compute-0 systemd[1]: libpod-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Deactivated successfully.
Oct 07 14:34:41 compute-0 systemd[1]: libpod-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Consumed 1.009s CPU time.
Oct 07 14:34:41 compute-0 podman[377408]: 2025-10-07 14:34:41.297911134 +0000 UTC m=+1.237261182 container died 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:41 compute-0 ceph-mon[74295]: pgmap v2167: 305 pgs: 305 active+clean; 233 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 129 op/s
Oct 07 14:34:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/34419485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889-merged.mount: Deactivated successfully.
Oct 07 14:34:41 compute-0 podman[377408]: 2025-10-07 14:34:41.366300231 +0000 UTC m=+1.305650279 container remove 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:34:41 compute-0 systemd[1]: libpod-conmon-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Deactivated successfully.
Oct 07 14:34:41 compute-0 sudo[377303]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:41 compute-0 sudo[377487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:41 compute-0 sudo[377487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:41 compute-0 sudo[377487]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:41 compute-0 sudo[377512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:34:41 compute-0 sudo[377512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:41 compute-0 sudo[377512]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.526 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.527 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.549 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully updated port: 72db4fd3-8171-42af-9801-69a061614ccc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:34:41 compute-0 sudo[377537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:41 compute-0 sudo[377537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:41 compute-0 sudo[377537]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.581 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:34:41 compute-0 sudo[377562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:34:41 compute-0 sudo[377562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.compute.manager [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.compute.manager [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.690 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.725 2 DEBUG nova.policy [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.956 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.957 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.958 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating image(s)
Oct 07 14:34:41 compute-0 podman[377629]: 2025-10-07 14:34:41.962204814 +0000 UTC m=+0.039310442 container create eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:34:41 compute-0 nova_compute[259550]: 2025-10-07 14:34:41.983 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:42 compute-0 systemd[1]: Started libpod-conmon-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope.
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.007 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.033 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.037 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:42 compute-0 podman[377629]: 2025-10-07 14:34:42.039786467 +0000 UTC m=+0.116892125 container init eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:42 compute-0 podman[377629]: 2025-10-07 14:34:41.945568249 +0000 UTC m=+0.022673897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:42 compute-0 podman[377629]: 2025-10-07 14:34:42.04851111 +0000 UTC m=+0.125616738 container start eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:42 compute-0 hungry_gould[377678]: 167 167
Oct 07 14:34:42 compute-0 podman[377629]: 2025-10-07 14:34:42.055812585 +0000 UTC m=+0.132918243 container attach eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:42 compute-0 systemd[1]: libpod-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope: Deactivated successfully.
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.071 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:34:42 compute-0 podman[377705]: 2025-10-07 14:34:42.094286544 +0000 UTC m=+0.022026070 container died eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.110 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.111 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.111 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.112 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e303d867e3813a6f97d7cf0d8fccda86f4abc663f81f781f7c635267279a63d5-merged.mount: Deactivated successfully.
Oct 07 14:34:42 compute-0 podman[377705]: 2025-10-07 14:34:42.134338543 +0000 UTC m=+0.062078049 container remove eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.135 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:42 compute-0 systemd[1]: libpod-conmon-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope: Deactivated successfully.
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.144 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a5f57fca-2121-432c-b000-2fd92f5c1b12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 07 14:34:42 compute-0 podman[377764]: 2025-10-07 14:34:42.330178286 +0000 UTC m=+0.057349753 container create 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:34:42 compute-0 systemd[1]: Started libpod-conmon-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope.
Oct 07 14:34:42 compute-0 podman[377764]: 2025-10-07 14:34:42.299398904 +0000 UTC m=+0.026570401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:42 compute-0 podman[377764]: 2025-10-07 14:34:42.442490607 +0000 UTC m=+0.169662114 container init 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:34:42 compute-0 podman[377764]: 2025-10-07 14:34:42.45006449 +0000 UTC m=+0.177235957 container start 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:42 compute-0 podman[377764]: 2025-10-07 14:34:42.456946964 +0000 UTC m=+0.184118471 container attach 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.492 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a5f57fca-2121-432c-b000-2fd92f5c1b12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.548 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.556 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.670 2 DEBUG nova.objects.instance [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.711 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully updated port: 87dbfe27-9436-4e21-a648-df77ddfec6ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.766 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.766 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Ensure instance console log exists: /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.770 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:42 compute-0 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:34:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:43 compute-0 nova_compute[259550]: 2025-10-07 14:34:43.056 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:34:43 compute-0 zealous_turing[377783]: {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     "0": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "devices": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "/dev/loop3"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             ],
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_name": "ceph_lv0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_size": "21470642176",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "name": "ceph_lv0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "tags": {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.crush_device_class": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.encrypted": "0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_id": "0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.vdo": "0"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             },
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "vg_name": "ceph_vg0"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         }
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     ],
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     "1": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "devices": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "/dev/loop4"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             ],
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_name": "ceph_lv1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_size": "21470642176",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "name": "ceph_lv1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "tags": {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.crush_device_class": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.encrypted": "0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_id": "1",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.vdo": "0"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             },
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "vg_name": "ceph_vg1"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         }
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     ],
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     "2": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "devices": [
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "/dev/loop5"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             ],
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_name": "ceph_lv2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_size": "21470642176",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "name": "ceph_lv2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "tags": {
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.crush_device_class": "",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.encrypted": "0",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osd_id": "2",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:                 "ceph.vdo": "0"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             },
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "type": "block",
Oct 07 14:34:43 compute-0 zealous_turing[377783]:             "vg_name": "ceph_vg2"
Oct 07 14:34:43 compute-0 zealous_turing[377783]:         }
Oct 07 14:34:43 compute-0 zealous_turing[377783]:     ]
Oct 07 14:34:43 compute-0 zealous_turing[377783]: }
Oct 07 14:34:43 compute-0 ceph-mon[74295]: pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 07 14:34:43 compute-0 systemd[1]: libpod-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope: Deactivated successfully.
Oct 07 14:34:43 compute-0 podman[377764]: 2025-10-07 14:34:43.345791554 +0000 UTC m=+1.072963041 container died 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad-merged.mount: Deactivated successfully.
Oct 07 14:34:43 compute-0 podman[377764]: 2025-10-07 14:34:43.436754025 +0000 UTC m=+1.163925502 container remove 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 14:34:43 compute-0 systemd[1]: libpod-conmon-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope: Deactivated successfully.
Oct 07 14:34:43 compute-0 sudo[377562]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:43 compute-0 sudo[377875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:43 compute-0 sudo[377875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:43 compute-0 sudo[377875]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:43 compute-0 sudo[377900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:34:43 compute-0 sudo[377900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:43 compute-0 sudo[377900]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:43 compute-0 sudo[377925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:43 compute-0 sudo[377925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:43 compute-0 sudo[377925]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:43 compute-0 sudo[377950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:34:43 compute-0 sudo[377950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.075871382 +0000 UTC m=+0.055199396 container create 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:44 compute-0 systemd[1]: Started libpod-conmon-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope.
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.045310265 +0000 UTC m=+0.024638309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.160451002 +0000 UTC m=+0.139779036 container init 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.168040175 +0000 UTC m=+0.147368189 container start 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:34:44 compute-0 hopeful_gagarin[378030]: 167 167
Oct 07 14:34:44 compute-0 systemd[1]: libpod-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope: Deactivated successfully.
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.174143518 +0000 UTC m=+0.153471532 container attach 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.175203606 +0000 UTC m=+0.154531650 container died 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3548046edebff1c733161ca13c43a812db3e57abb70eeb763a7d39a9192e1b7-merged.mount: Deactivated successfully.
Oct 07 14:34:44 compute-0 podman[378014]: 2025-10-07 14:34:44.23711704 +0000 UTC m=+0.216445054 container remove 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:34:44 compute-0 systemd[1]: libpod-conmon-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope: Deactivated successfully.
Oct 07 14:34:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 281 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Oct 07 14:34:44 compute-0 podman[378053]: 2025-10-07 14:34:44.412427055 +0000 UTC m=+0.039089026 container create 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:34:44 compute-0 systemd[1]: Started libpod-conmon-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope.
Oct 07 14:34:44 compute-0 podman[378053]: 2025-10-07 14:34:44.393784647 +0000 UTC m=+0.020446648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:34:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:44 compute-0 podman[378053]: 2025-10-07 14:34:44.519587288 +0000 UTC m=+0.146249279 container init 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:34:44 compute-0 podman[378053]: 2025-10-07 14:34:44.527435107 +0000 UTC m=+0.154097078 container start 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:34:44 compute-0 podman[378053]: 2025-10-07 14:34:44.53164501 +0000 UTC m=+0.158307011 container attach 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 07 14:34:45 compute-0 nova_compute[259550]: 2025-10-07 14:34:45.140 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Successfully created port: 647fc35d-c5c1-4818-9fe1-b704468ceb32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:34:45 compute-0 ceph-mon[74295]: pgmap v2169: 305 pgs: 305 active+clean; 281 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]: {
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_id": 2,
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "type": "bluestore"
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     },
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_id": 1,
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "type": "bluestore"
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     },
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_id": 0,
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:         "type": "bluestore"
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]:     }
Oct 07 14:34:45 compute-0 charming_chatterjee[378071]: }
Oct 07 14:34:45 compute-0 systemd[1]: libpod-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope: Deactivated successfully.
Oct 07 14:34:45 compute-0 podman[378104]: 2025-10-07 14:34:45.554146432 +0000 UTC m=+0.025346588 container died 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e-merged.mount: Deactivated successfully.
Oct 07 14:34:45 compute-0 podman[378104]: 2025-10-07 14:34:45.707359986 +0000 UTC m=+0.178560122 container remove 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:34:45 compute-0 systemd[1]: libpod-conmon-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope: Deactivated successfully.
Oct 07 14:34:45 compute-0 sudo[377950]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:34:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:34:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ff4c46c7-d236-4f44-9c66-c22ebe81560e does not exist
Oct 07 14:34:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a7a829e6-29f6-4af9-bf0d-a4a3d579c1bd does not exist
Oct 07 14:34:45 compute-0 sudo[378119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:34:45 compute-0 sudo[378119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:45 compute-0 sudo[378119]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:45 compute-0 sudo[378144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:34:45 compute-0 sudo[378144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:34:45 compute-0 sudo[378144]: pam_unix(sudo:session): session closed for user root
Oct 07 14:34:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.475 2 DEBUG nova.compute.manager [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.476 2 DEBUG nova.compute.manager [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-87dbfe27-9436-4e21-a648-df77ddfec6ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.476 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.776 2 INFO nova.compute.manager [None req-9996e800-54fd-4cd8-825c-7fb0b74e8a82 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Get console output
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.781 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:34:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:34:46 compute-0 nova_compute[259550]: 2025-10-07 14:34:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.900 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Successfully updated port: 647fc35d-c5c1-4818-9fe1-b704468ceb32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:34:47 compute-0 ceph-mon[74295]: pgmap v2170: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.943 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.944 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.944 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:34:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.979 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:47 compute-0 nova_compute[259550]: 2025-10-07 14:34:47.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.158 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance network_info: |[{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 87dbfe27-9436-4e21-a648-df77ddfec6ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.163 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start _get_guest_xml network_info=[{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.169 2 WARNING nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.178 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.179 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.183 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.187 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.189 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.230 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.233 2 INFO nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Terminating instance
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.234 2 DEBUG nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:34:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.292 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG nova.compute.manager [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-changed-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG nova.compute.manager [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Refreshing instance network info cache due to event network-changed-647fc35d-c5c1-4818-9fe1-b704468ceb32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:48 compute-0 kernel: tapea75e1fb-47 (unregistering): left promiscuous mode
Oct 07 14:34:48 compute-0 NetworkManager[44949]: <info>  [1759847688.4381] device (tapea75e1fb-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:34:48 compute-0 ovn_controller[151684]: 2025-10-07T14:34:48Z|01109|binding|INFO|Releasing lport ea75e1fb-474b-484e-9a52-73939d17da1b from this chassis (sb_readonly=0)
Oct 07 14:34:48 compute-0 ovn_controller[151684]: 2025-10-07T14:34:48Z|01110|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b down in Southbound
Oct 07 14:34:48 compute-0 ovn_controller[151684]: 2025-10-07T14:34:48Z|01111|binding|INFO|Removing iface tapea75e1fb-47 ovn-installed in OVS
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.464 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.465 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 unbound from our chassis
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.469 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 256ae6d5-60ea-4d02-8c43-2f55874712c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.471 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd982fa-1961-40de-922c-af61833f6e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.471 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace which is not needed anymore
Oct 07 14:34:48 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 07 14:34:48 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Consumed 12.580s CPU time.
Oct 07 14:34:48 compute-0 systemd-machined[214580]: Machine qemu-137-instance-0000006c terminated.
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : haproxy version is 2.8.14-c23fe91
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : path to executable is /usr/sbin/haproxy
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : Exiting Master process...
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : Exiting Master process...
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [ALERT]    (376732) : Current worker (376734) exited with code 143 (Terminated)
Oct 07 14:34:48 compute-0 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : All workers exited. Exiting... (0)
Oct 07 14:34:48 compute-0 systemd[1]: libpod-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope: Deactivated successfully.
Oct 07 14:34:48 compute-0 podman[378211]: 2025-10-07 14:34:48.621802771 +0000 UTC m=+0.048338453 container died c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:34:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800255612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac-userdata-shm.mount: Deactivated successfully.
Oct 07 14:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ab2a485858f7d7c59681ecb143d683598d5f4b822096692bfe1648f09b1a197-merged.mount: Deactivated successfully.
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 podman[378211]: 2025-10-07 14:34:48.666215047 +0000 UTC m=+0.092750729 container cleanup c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.671 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance destroyed successfully.
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.672 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.672 2 DEBUG nova.objects.instance [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:48 compute-0 systemd[1]: libpod-conmon-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope: Deactivated successfully.
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.702 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.708 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:48 compute-0 podman[378251]: 2025-10-07 14:34:48.739870726 +0000 UTC m=+0.047444629 container remove c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.746 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed336f94-95f9-4158-9b36-92eae507eb13]: (4, ('Tue Oct  7 02:34:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac)\nc689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac\nTue Oct  7 02:34:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac)\nc689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0147d3-4b43-4b0a-9997-a5338656dfc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.749 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 kernel: tap256ae6d5-60: left promiscuous mode
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.773 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[640a4d4e-5921-4e2c-ad51-1b82defe85d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.775 2 DEBUG nova.virt.libvirt.vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:33:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.775 2 DEBUG nova.network.os_vif_util [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.778 2 DEBUG nova.network.os_vif_util [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.778 2 DEBUG os_vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea75e1fb-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.790 2 INFO os_vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47')
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.808 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6dd65b-df2d-49c5-bb45-836132b4fe7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.810 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95545db9-d3a6-413b-bbbc-d38a93fc8911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.825 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc7a293-89f2-4efa-9c96-ea7c3157e606]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825027, 'reachable_time': 30808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378302, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.828 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:34:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.828 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfeb7de-146b-4f9f-a896-37291f837878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d256ae6d5\x2d60ea\x2d4d02\x2d8c43\x2d2f55874712c6.mount: Deactivated successfully.
Oct 07 14:34:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2800255612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:48 compute-0 nova_compute[259550]: 2025-10-07 14:34:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.035 2 DEBUG nova.compute.manager [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.036 2 DEBUG nova.compute.manager [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.036 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.037 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.037 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673809848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.224 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.225 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.225 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.226 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.228 2 DEBUG nova.objects.instance [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.445 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <uuid>df052dd5-fecd-4dd3-be36-4becc3f9f318</uuid>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <name>instance-0000006e</name>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1627574273</nova:name>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:34:48</nova:creationTime>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:port uuid="72db4fd3-8171-42af-9801-69a061614ccc">
Oct 07 14:34:49 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <nova:port uuid="87dbfe27-9436-4e21-a648-df77ddfec6ca">
Oct 07 14:34:49 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe44:cb8e" ipVersion="6"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <system>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="serial">df052dd5-fecd-4dd3-be36-4becc3f9f318</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="uuid">df052dd5-fecd-4dd3-be36-4becc3f9f318</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </system>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <os>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </os>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <features>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </features>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/df052dd5-fecd-4dd3-be36-4becc3f9f318_disk">
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config">
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:4d:17:d0"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <target dev="tap72db4fd3-81"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:44:cb:8e"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <target dev="tap87dbfe27-94"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/console.log" append="off"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <video>
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </video>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:34:49 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:34:49 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:34:49 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:34:49 compute-0 nova_compute[259550]: </domain>
Oct 07 14:34:49 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Preparing to wait for external event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Preparing to wait for external event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.448 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.448 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.449 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.449 2 DEBUG os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72db4fd3-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72db4fd3-81, col_values=(('external_ids', {'iface-id': '72db4fd3-8171-42af-9801-69a061614ccc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:17:d0', 'vm-uuid': 'df052dd5-fecd-4dd3-be36-4becc3f9f318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 NetworkManager[44949]: <info>  [1759847689.4563] manager: (tap72db4fd3-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.461 2 INFO os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81')
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.462 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.462 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.463 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.463 2 DEBUG os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87dbfe27-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87dbfe27-94, col_values=(('external_ids', {'iface-id': '87dbfe27-9436-4e21-a648-df77ddfec6ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:cb:8e', 'vm-uuid': 'df052dd5-fecd-4dd3-be36-4becc3f9f318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:49 compute-0 NetworkManager[44949]: <info>  [1759847689.5063] manager: (tap87dbfe27-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.513 2 INFO os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94')
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.556 2 INFO nova.virt.libvirt.driver [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deleting instance files /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_del
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.557 2 INFO nova.virt.libvirt.driver [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deletion of /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_del complete
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.580 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.941 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:4d:17:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:44:cb:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.943 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Using config drive
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.966 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.972 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.973 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance network_info: |[{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.973 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.974 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Refreshing network info cache for port 647fc35d-c5c1-4818-9fe1-b704468ceb32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.976 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start _get_guest_xml network_info=[{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.979 2 INFO nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 1.74 seconds to destroy the instance on the hypervisor.
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG oslo.service.loopingcall [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG nova.network.neutron [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.984 2 WARNING nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.991 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.991 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.994 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:34:49 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:49.999 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.043 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 87dbfe27-9436-4e21-a648-df77ddfec6ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.043 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:50 compute-0 ceph-mon[74295]: pgmap v2171: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 07 14:34:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3673809848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.104 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 294 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 07 14:34:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801410304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.528 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.551 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.554 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.590 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating config drive at /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.595 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sy2t63i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.739 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sy2t63i" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.770 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.776 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.854 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:50 compute-0 nova_compute[259550]: 2025-10-07 14:34:50.855 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:34:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005761768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.044 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.045 2 DEBUG nova.virt.libvirt.vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:41Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.045 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.046 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.047 2 DEBUG nova.objects.instance [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.102 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3801410304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4005761768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.163 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <uuid>a5f57fca-2121-432c-b000-2fd92f5c1b12</uuid>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <name>instance-0000006f</name>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-81555906</nova:name>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:34:49</nova:creationTime>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <nova:port uuid="647fc35d-c5c1-4818-9fe1-b704468ceb32">
Oct 07 14:34:51 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <system>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="serial">a5f57fca-2121-432c-b000-2fd92f5c1b12</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="uuid">a5f57fca-2121-432c-b000-2fd92f5c1b12</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </system>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <os>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </os>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <features>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </features>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a5f57fca-2121-432c-b000-2fd92f5c1b12_disk">
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config">
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </source>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:34:51 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:12:ca:dd"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <target dev="tap647fc35d-c5"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/console.log" append="off"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <video>
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </video>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:34:51 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:34:51 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:34:51 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:34:51 compute-0 nova_compute[259550]: </domain>
Oct 07 14:34:51 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Preparing to wait for external event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.166 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.166 2 DEBUG nova.virt.libvirt.vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:41Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.167 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.167 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.168 2 DEBUG os_vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647fc35d-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647fc35d-c5, col_values=(('external_ids', {'iface-id': '647fc35d-c5c1-4818-9fe1-b704468ceb32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:ca:dd', 'vm-uuid': 'a5f57fca-2121-432c-b000-2fd92f5c1b12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.1760] manager: (tap647fc35d-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.186 2 INFO os_vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5')
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.237 2 DEBUG nova.network.neutron [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.259 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.260 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deleting local config drive /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config because it was imported into RBD.
Oct 07 14:34:51 compute-0 systemd-udevd[378192]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3183] manager: (tap72db4fd3-81): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Oct 07 14:34:51 compute-0 kernel: tap72db4fd3-81: entered promiscuous mode
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01112|binding|INFO|Claiming lport 72db4fd3-8171-42af-9801-69a061614ccc for this chassis.
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01113|binding|INFO|72db4fd3-8171-42af-9801-69a061614ccc: Claiming fa:16:3e:4d:17:d0 10.100.0.8
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3342] device (tap72db4fd3-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3350] device (tap72db4fd3-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3386] manager: (tap87dbfe27-94): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Oct 07 14:34:51 compute-0 kernel: tap87dbfe27-94: entered promiscuous mode
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01114|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc ovn-installed in OVS
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01115|if_status|INFO|Dropped 1 log messages in last 1141 seconds (most recently, 1141 seconds ago) due to excessive rate
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01116|if_status|INFO|Not updating pb chassis for 87dbfe27-9436-4e21-a648-df77ddfec6ca now as sb is readonly
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3543] device (tap87dbfe27-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.3550] device (tap87dbfe27-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 systemd-machined[214580]: New machine qemu-138-instance-0000006e.
Oct 07 14:34:51 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01117|binding|INFO|Claiming lport 87dbfe27-9436-4e21-a648-df77ddfec6ca for this chassis.
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01118|binding|INFO|87dbfe27-9436-4e21-a648-df77ddfec6ca: Claiming fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01119|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc up in Southbound
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.477 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:17:d0 10.100.0.8'], port_security=['fa:16:3e:4d:17:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72db4fd3-8171-42af-9801-69a061614ccc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01120|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca ovn-installed in OVS
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.479 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72db4fd3-8171-42af-9801-69a061614ccc in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 bound to our chassis
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.480 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d983c2fd-e625-4c01-9d49-d174edb37024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.495 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc899e05d-21 in ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.497 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc899e05d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.497 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1de5961d-49c6-4fa8-8d63-e8c50ef7e4da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c861c3d6-b50c-4bce-b31a-2f02fd4c69b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.510 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[96da4c4f-1c19-4350-9a87-72a243fac43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.530 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.530 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.532 2 WARNING nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state deleting.
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1abbd32-42f1-45fe-bfa6-c3314dea8a8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f868e6-0136-4059-952f-5d1f0faf5f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.5787] manager: (tapc899e05d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/456)
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00163c49-d960-4c5e-99f7-c8c3461ca669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 systemd-udevd[378491]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.614 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b90c45f7-86f3-41e0-9df2-8f43c4bcc5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.617 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a933f626-11a3-49d7-9437-705953f25378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.6407] device (tapc899e05d-20): carrier: link connected
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fe854340-f969-4e32-97e7-adaf71eaed02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.658 2 INFO nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 1.68 seconds to deallocate network for instance.
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01121|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca up in Southbound
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.659 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], port_security=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe44:cb8e/64', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=87dbfe27-9436-4e21-a648-df77ddfec6ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c21a8019-db47-4642-abbf-f27debe9d8ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378512, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.670 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.670 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.671 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:12:ca:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.671 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Using config drive
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af80d870-ee14-4c7e-aa17-6ecf857661fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:babd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827520, 'tstamp': 827520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378513, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.693 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70b6535c-031c-48ad-96e0-0d0b0464d850]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378530, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d8422788-a7e8-4f2e-bd59-3e73ea6d7af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.798 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[edd36dc3-12bb-4537-af48-8240d5778aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.801 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.801 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.802 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 NetworkManager[44949]: <info>  [1759847691.8046] manager: (tapc899e05d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.806 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.807 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 kernel: tapc899e05d-20: entered promiscuous mode
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:51 compute-0 ovn_controller[151684]: 2025-10-07T14:34:51Z|01122|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.837 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea5e00b-f9f7-42a4-ae3a-27ce3b8b74b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.839 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-c899e05d-224c-44fe-8294-eaece58d7fe7
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID c899e05d-224c-44fe-8294-eaece58d7fe7
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:34:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.841 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'env', 'PROCESS_TAG=haproxy-c899e05d-224c-44fe-8294-eaece58d7fe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c899e05d-224c-44fe-8294-eaece58d7fe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.852 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updated VIF entry in instance network info cache for port 647fc35d-c5c1-4818-9fe1-b704468ceb32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.853 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.897 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:34:51 compute-0 nova_compute[259550]: 2025-10-07 14:34:51.974 2 DEBUG oslo_concurrency.processutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:52 compute-0 ceph-mon[74295]: pgmap v2172: 305 pgs: 305 active+clean; 294 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.268 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating config drive at /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.273 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1h660cx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 269 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 2.4 MiB/s wr, 50 op/s
Oct 07 14:34:52 compute-0 podman[378605]: 2025-10-07 14:34:52.20667886 +0000 UTC m=+0.032614952 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.361 2 DEBUG nova.compute.manager [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.363 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.363 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.364 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.364 2 DEBUG nova.compute.manager [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Processing event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.415 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1h660cx" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:34:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41990193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.511 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.515 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.566 2 DEBUG oslo_concurrency.processutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.572 2 DEBUG nova.compute.provider_tree [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.621 2 DEBUG nova.scheduler.client.report [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:34:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.767 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:52 compute-0 podman[378605]: 2025-10-07 14:34:52.824239982 +0000 UTC m=+0.650176074 container create 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:34:52 compute-0 nova_compute[259550]: 2025-10-07 14:34:52.892 2 INFO nova.scheduler.client.report [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8
Oct 07 14:34:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:53 compute-0 systemd[1]: Started libpod-conmon-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope.
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.061 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325f40c5397e88b9b38207d1a08b9557bb0033598dc986796ff5fd8d9abcf075/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.125 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.1246157, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Started (Lifecycle Event)
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.211 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.215 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.1247613, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.216 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Paused (Lifecycle Event)
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.282 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:53 compute-0 podman[378605]: 2025-10-07 14:34:53.296159601 +0000 UTC m=+1.122095713 container init 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:34:53 compute-0 podman[378605]: 2025-10-07 14:34:53.301902905 +0000 UTC m=+1.127838997 container start 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.314 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:53 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : New worker (378694) forked
Oct 07 14:34:53 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : Loading success.
Oct 07 14:34:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/41990193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.490 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 87dbfe27-9436-4e21-a648-df77ddfec6ca in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.493 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78c5c3e0-b415-4a97-bfa4-46238d6e0e50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.510 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap673dd6ba-41 in ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.512 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap673dd6ba-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f41047a0-94f7-4f3f-bff4-05c075193939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5653ff94-5a9c-4eb3-baa0-463ee0403bc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.528 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[841bb333-c2f9-46c0-8846-d7dc8c93b5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd872cb-da51-481a-a289-505e1712c141]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c58c5-2983-4f29-9b0f-d61e03ec5954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 systemd-udevd[378495]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.5794] manager: (tap673dd6ba-40): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d40bbb3c-3ca9-4afa-850b-f524bf4986af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.608 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11ea1202-52e4-4c4e-b9f2-0da65aa110df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.616 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ea091f95-dda2-4dc0-a909-3f7f8060ce47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.6403] device (tap673dd6ba-40): carrier: link connected
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ff915d1a-f002-4213-802c-c4f49475320e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.666 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ec52e4-f920-4f8a-a091-913628539b3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378719, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6b5842-3e6d-4a03-9a1e-dd59530ea1c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:cb46'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827720, 'tstamp': 827720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378720, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1fb016-9c34-4249-99f8-53e7c5d54343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378721, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.712 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.713 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deleting local config drive /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config because it was imported into RBD.
Oct 07 14:34:53 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:34:53 compute-0 virtqemud[259430]: End of file while reading data: Input/output error
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06002ab7-697d-4a2a-9a2b-d8bb108a23e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e42c6bfa-c354-4998-a600-286a7434649d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.7692] manager: (tap673dd6ba-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 07 14:34:53 compute-0 kernel: tap673dd6ba-40: entered promiscuous mode
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.774 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 ovn_controller[151684]: 2025-10-07T14:34:53Z|01123|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.7783] manager: (tap647fc35d-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Oct 07 14:34:53 compute-0 kernel: tap647fc35d-c5: entered promiscuous mode
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.7880] device (tap647fc35d-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:34:53 compute-0 NetworkManager[44949]: <info>  [1759847693.7886] device (tap647fc35d-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 ovn_controller[151684]: 2025-10-07T14:34:53Z|01124|binding|INFO|Claiming lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 for this chassis.
Oct 07 14:34:53 compute-0 ovn_controller[151684]: 2025-10-07T14:34:53Z|01125|binding|INFO|647fc35d-c5c1-4818-9fe1-b704468ceb32: Claiming fa:16:3e:12:ca:dd 10.100.0.27
Oct 07 14:34:53 compute-0 systemd-machined[214580]: New machine qemu-139-instance-0000006f.
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.823 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:ca:dd 10.100.0.27'], port_security=['fa:16:3e:12:ca:dd 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'a5f57fca-2121-432c-b000-2fd92f5c1b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df76b87a-da1a-4438-aff3-46b99df6a681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ae626df-0151-4864-97c5-333f386c32b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f05eda8c-3c14-40e1-b5d0-16d149ef6c4e, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=647fc35d-c5c1-4818-9fe1-b704468ceb32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:34:53 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006f.
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.840 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.841 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a901487-6e01-4273-8dbb-6c9ba6f412c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.844 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:34:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.844 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'env', 'PROCESS_TAG=haproxy-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:34:53 compute-0 ovn_controller[151684]: 2025-10-07T14:34:53Z|01126|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 ovn-installed in OVS
Oct 07 14:34:53 compute-0 ovn_controller[151684]: 2025-10-07T14:34:53Z|01127|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 up in Southbound
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.869 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-deleted-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Processing event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.872 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.872 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.872 2 WARNING nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with vm_state building and task_state spawning.
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.873 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.883 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.8827105, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.883 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Resumed (Lifecycle Event)
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.886 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.895 2 INFO nova.virt.libvirt.driver [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance spawned successfully.
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.895 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.930 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.933 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.946 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.946 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:53 compute-0 nova_compute[259550]: 2025-10-07 14:34:53.947 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.023 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.108 2 INFO nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 17.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.108 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:54 compute-0 podman[378811]: 2025-10-07 14:34:54.220670445 +0000 UTC m=+0.047987774 container create 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.242 2 INFO nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 18.96 seconds to build instance.
Oct 07 14:34:54 compute-0 systemd[1]: Started libpod-conmon-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope.
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.276 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct 07 14:34:54 compute-0 podman[378811]: 2025-10-07 14:34:54.194791013 +0000 UTC m=+0.022108362 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:34:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9610b23e7fa9be83c256b0b81e796a0e4842e582d51c70137e3b1797edeeac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:54 compute-0 podman[378811]: 2025-10-07 14:34:54.321791947 +0000 UTC m=+0.149109296 container init 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:34:54 compute-0 podman[378811]: 2025-10-07 14:34:54.341075263 +0000 UTC m=+0.168392592 container start 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:34:54 compute-0 ceph-mon[74295]: pgmap v2173: 305 pgs: 305 active+clean; 269 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 2.4 MiB/s wr, 50 op/s
Oct 07 14:34:54 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : New worker (378836) forked
Oct 07 14:34:54 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : Loading success.
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.404 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 647fc35d-c5c1-4818-9fe1-b704468ceb32 in datapath df76b87a-da1a-4438-aff3-46b99df6a681 unbound from our chassis
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.406 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df76b87a-da1a-4438-aff3-46b99df6a681
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8d5810-2217-49e1-ad23-ab06f72fa086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.421 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf76b87a-d1 in ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.424 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf76b87a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d32dda-5b0e-42c6-bbd4-02fe58fa6c8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[707e8dd6-6331-407c-a23e-b0e0290d61e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.438 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d62a0d4-f59f-4b87-b906-59cb0335cdb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.454 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae123f92-7834-47a6-b18a-62752b8a1247]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3d427f4f-1fd4-4d31-a749-3c439bd005ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 NetworkManager[44949]: <info>  [1759847694.4974] manager: (tapdf76b87a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/461)
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[378aee8b-1c11-40cc-b466-db6f9ccf9901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.537 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c108b56-7155-4fa1-84ea-9d99f739992d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.541 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0afa756a-9a5c-4481-8dc3-7a1670fd713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.544 2 DEBUG nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.544 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.546 2 WARNING nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with vm_state active and task_state None.
Oct 07 14:34:54 compute-0 NetworkManager[44949]: <info>  [1759847694.5671] device (tapdf76b87a-d0): carrier: link connected
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ea51d3c8-471f-4909-ba52-63294f7dc785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f00d938a-c4bb-4771-bc8a-351c47e8287f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf76b87a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f9:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827813, 'reachable_time': 27744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378855, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.612 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58b2944e-ad3a-48c2-902a-1e1df1cf9526]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:f9dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827813, 'tstamp': 827813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378856, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[549cfe56-8a84-4d97-96f6-02e3f3428792]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf76b87a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f9:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827813, 'reachable_time': 27744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378857, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.667 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69563dd4-94d6-4120-8c83-c3c75a3fc65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.739 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5114e02c-5476-4c9c-a3af-b8acd3de16ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.741 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf76b87a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.741 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.742 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf76b87a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:54 compute-0 NetworkManager[44949]: <info>  [1759847694.7447] manager: (tapdf76b87a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Oct 07 14:34:54 compute-0 kernel: tapdf76b87a-d0: entered promiscuous mode
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf76b87a-d0, col_values=(('external_ids', {'iface-id': 'a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:54 compute-0 ovn_controller[151684]: 2025-10-07T14:34:54Z|01128|binding|INFO|Releasing lport a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2 from this chassis (sb_readonly=0)
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.753 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847694.7531753, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.753 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Started (Lifecycle Event)
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.751 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[945a95f8-c495-44a4-8cac-0667560b71d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.756 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-df76b87a-da1a-4438-aff3-46b99df6a681
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID df76b87a-da1a-4438-aff3-46b99df6a681
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:34:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.756 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'env', 'PROCESS_TAG=haproxy-df76b87a-da1a-4438-aff3-46b99df6a681', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df76b87a-da1a-4438-aff3-46b99df6a681.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.998 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847694.753738, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:54 compute-0 nova_compute[259550]: 2025-10-07 14:34:54.998 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Paused (Lifecycle Event)
Oct 07 14:34:55 compute-0 nova_compute[259550]: 2025-10-07 14:34:55.194 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:55 compute-0 nova_compute[259550]: 2025-10-07 14:34:55.197 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:55 compute-0 podman[378889]: 2025-10-07 14:34:55.134379889 +0000 UTC m=+0.025725568 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:34:55 compute-0 podman[378889]: 2025-10-07 14:34:55.270290381 +0000 UTC m=+0.161636040 container create a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:34:55 compute-0 systemd[1]: Started libpod-conmon-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope.
Oct 07 14:34:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8f73fee24f065dab00f339734911ef1ec093684fe1fea8ceea6e76dc54383b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:34:55 compute-0 nova_compute[259550]: 2025-10-07 14:34:55.396 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:55 compute-0 podman[378889]: 2025-10-07 14:34:55.428660982 +0000 UTC m=+0.320006641 container init a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:34:55 compute-0 podman[378889]: 2025-10-07 14:34:55.434949641 +0000 UTC m=+0.326295300 container start a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:34:55 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : New worker (378911) forked
Oct 07 14:34:55 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : Loading success.
Oct 07 14:34:55 compute-0 ceph-mon[74295]: pgmap v2174: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct 07 14:34:56 compute-0 podman[378920]: 2025-10-07 14:34:56.086553272 +0000 UTC m=+0.072933560 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Processing event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.141 2 WARNING nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received unexpected event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with vm_state building and task_state spawning.
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.150 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847696.1503851, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.150 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Resumed (Lifecycle Event)
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.162 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.165 2 INFO nova.virt.libvirt.driver [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance spawned successfully.
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.165 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:56 compute-0 podman[378941]: 2025-10-07 14:34:56.176825034 +0000 UTC m=+0.067146175 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.253 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.255 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.255 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.258 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:34:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 337 KiB/s wr, 70 op/s
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.393 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.607 2 INFO nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 14.65 seconds to spawn the instance on the hypervisor.
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.609 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.916 2 INFO nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 16.92 seconds to build instance.
Oct 07 14:34:56 compute-0 nova_compute[259550]: 2025-10-07 14:34:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.004 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:34:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741335439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.621 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:57 compute-0 ceph-mon[74295]: pgmap v2175: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 337 KiB/s wr, 70 op/s
Oct 07 14:34:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2741335439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.995 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:57 compute-0 nova_compute[259550]: 2025-10-07 14:34:57.995 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.000 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.000 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.005 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.005 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.233 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3339MB free_disk=59.901039123535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:34:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 27 KiB/s wr, 69 op/s
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.520 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a5f57fca-2121-432c-b000-2fd92f5c1b12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.522 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.600 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:34:58 compute-0 ovn_controller[151684]: 2025-10-07T14:34:58Z|01129|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:34:58 compute-0 ovn_controller[151684]: 2025-10-07T14:34:58Z|01130|binding|INFO|Releasing lport a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2 from this chassis (sb_readonly=0)
Oct 07 14:34:58 compute-0 ovn_controller[151684]: 2025-10-07T14:34:58Z|01131|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:34:58 compute-0 ovn_controller[151684]: 2025-10-07T14:34:58Z|01132|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct 07 14:34:58 compute-0 nova_compute[259550]: 2025-10-07 14:34:58.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:34:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:34:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437138190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.122 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.128 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.184 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.321 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:34:59 compute-0 ceph-mon[74295]: pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 27 KiB/s wr, 69 op/s
Oct 07 14:34:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3437138190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.896 2 DEBUG nova.compute.manager [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.897 2 DEBUG nova.compute.manager [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.897 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.898 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:34:59 compute-0 nova_compute[259550]: 2025-10-07 14:34:59.898 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.069 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 39 KiB/s wr, 166 op/s
Oct 07 14:35:01 compute-0 nova_compute[259550]: 2025-10-07 14:35:01.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:01 compute-0 nova_compute[259550]: 2025-10-07 14:35:01.322 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:01 compute-0 nova_compute[259550]: 2025-10-07 14:35:01.323 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:35:01 compute-0 nova_compute[259550]: 2025-10-07 14:35:01.323 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:35:01 compute-0 ceph-mon[74295]: pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 39 KiB/s wr, 166 op/s
Oct 07 14:35:02 compute-0 nova_compute[259550]: 2025-10-07 14:35:02.171 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:02 compute-0 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:02 compute-0 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:35:02 compute-0 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:02 compute-0 nova_compute[259550]: 2025-10-07 14:35:02.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 175 op/s
Oct 07 14:35:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:03 compute-0 nova_compute[259550]: 2025-10-07 14:35:03.481 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:35:03 compute-0 nova_compute[259550]: 2025-10-07 14:35:03.481 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:03 compute-0 nova_compute[259550]: 2025-10-07 14:35:03.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847688.6666355, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:03 compute-0 nova_compute[259550]: 2025-10-07 14:35:03.668 2 INFO nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Stopped (Lifecycle Event)
Oct 07 14:35:03 compute-0 ceph-mon[74295]: pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 175 op/s
Oct 07 14:35:03 compute-0 nova_compute[259550]: 2025-10-07 14:35:03.972 2 DEBUG nova.compute.manager [None req-cb6ab8e2-5310-48eb-b77a-ca5f52b011dd - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:04 compute-0 nova_compute[259550]: 2025-10-07 14:35:04.021 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 169 op/s
Oct 07 14:35:04 compute-0 nova_compute[259550]: 2025-10-07 14:35:04.801 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:04 compute-0 nova_compute[259550]: 2025-10-07 14:35:04.953 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:04 compute-0 nova_compute[259550]: 2025-10-07 14:35:04.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:35:04 compute-0 nova_compute[259550]: 2025-10-07 14:35:04.954 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:05 compute-0 ceph-mon[74295]: pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 169 op/s
Oct 07 14:35:06 compute-0 nova_compute[259550]: 2025-10-07 14:35:06.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 184 KiB/s wr, 142 op/s
Oct 07 14:35:07 compute-0 nova_compute[259550]: 2025-10-07 14:35:07.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:08 compute-0 ceph-mon[74295]: pgmap v2180: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 184 KiB/s wr, 142 op/s
Oct 07 14:35:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 184 KiB/s wr, 110 op/s
Oct 07 14:35:08 compute-0 ovn_controller[151684]: 2025-10-07T14:35:08Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:17:d0 10.100.0.8
Oct 07 14:35:08 compute-0 ovn_controller[151684]: 2025-10-07T14:35:08Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:17:d0 10.100.0.8
Oct 07 14:35:10 compute-0 podman[379007]: 2025-10-07 14:35:10.083599329 +0000 UTC m=+0.067718310 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:35:10 compute-0 podman[379008]: 2025-10-07 14:35:10.118904172 +0000 UTC m=+0.101058741 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 07 14:35:10 compute-0 ceph-mon[74295]: pgmap v2181: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 184 KiB/s wr, 110 op/s
Oct 07 14:35:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 237 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 158 op/s
Oct 07 14:35:11 compute-0 nova_compute[259550]: 2025-10-07 14:35:11.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:11 compute-0 ceph-mon[74295]: pgmap v2182: 305 pgs: 305 active+clean; 237 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 158 op/s
Oct 07 14:35:11 compute-0 ovn_controller[151684]: 2025-10-07T14:35:11Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:ca:dd 10.100.0.27
Oct 07 14:35:11 compute-0 ovn_controller[151684]: 2025-10-07T14:35:11Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:ca:dd 10.100.0.27
Oct 07 14:35:11 compute-0 nova_compute[259550]: 2025-10-07 14:35:11.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:12 compute-0 nova_compute[259550]: 2025-10-07 14:35:12.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 241 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 07 14:35:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:13 compute-0 ceph-mon[74295]: pgmap v2183: 305 pgs: 305 active+clean; 241 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 07 14:35:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 277 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 07 14:35:15 compute-0 ceph-mon[74295]: pgmap v2184: 305 pgs: 305 active+clean; 277 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 07 14:35:16 compute-0 nova_compute[259550]: 2025-10-07 14:35:16.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:35:16 compute-0 nova_compute[259550]: 2025-10-07 14:35:16.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:17 compute-0 nova_compute[259550]: 2025-10-07 14:35:17.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:17 compute-0 ceph-mon[74295]: pgmap v2185: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:35:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 722 KiB/s rd, 4.1 MiB/s wr, 124 op/s
Oct 07 14:35:19 compute-0 ceph-mon[74295]: pgmap v2186: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 722 KiB/s rd, 4.1 MiB/s wr, 124 op/s
Oct 07 14:35:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.1 MiB/s wr, 125 op/s
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.073 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.074 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.074 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.075 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.075 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.076 2 INFO nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Terminating instance
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.077 2 DEBUG nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:35:21 compute-0 kernel: tap647fc35d-c5 (unregistering): left promiscuous mode
Oct 07 14:35:21 compute-0 NetworkManager[44949]: <info>  [1759847721.1466] device (tap647fc35d-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:35:21 compute-0 ovn_controller[151684]: 2025-10-07T14:35:21Z|01133|binding|INFO|Releasing lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 from this chassis (sb_readonly=0)
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 ovn_controller[151684]: 2025-10-07T14:35:21Z|01134|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 down in Southbound
Oct 07 14:35:21 compute-0 ovn_controller[151684]: 2025-10-07T14:35:21Z|01135|binding|INFO|Removing iface tap647fc35d-c5 ovn-installed in OVS
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 07 14:35:21 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Consumed 14.315s CPU time.
Oct 07 14:35:21 compute-0 systemd-machined[214580]: Machine qemu-139-instance-0000006f terminated.
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.233 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:ca:dd 10.100.0.27'], port_security=['fa:16:3e:12:ca:dd 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'a5f57fca-2121-432c-b000-2fd92f5c1b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df76b87a-da1a-4438-aff3-46b99df6a681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ae626df-0151-4864-97c5-333f386c32b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f05eda8c-3c14-40e1-b5d0-16d149ef6c4e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=647fc35d-c5c1-4818-9fe1-b704468ceb32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.234 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 647fc35d-c5c1-4818-9fe1-b704468ceb32 in datapath df76b87a-da1a-4438-aff3-46b99df6a681 unbound from our chassis
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.235 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df76b87a-da1a-4438-aff3-46b99df6a681, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.237 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[496fb9fb-01be-48b9-a891-2892e8bd6d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.238 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 namespace which is not needed anymore
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.323 2 INFO nova.virt.libvirt.driver [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance destroyed successfully.
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.324 2 DEBUG nova.objects.instance [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:21 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : haproxy version is 2.8.14-c23fe91
Oct 07 14:35:21 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : path to executable is /usr/sbin/haproxy
Oct 07 14:35:21 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [WARNING]  (378909) : Exiting Master process...
Oct 07 14:35:21 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [ALERT]    (378909) : Current worker (378911) exited with code 143 (Terminated)
Oct 07 14:35:21 compute-0 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [WARNING]  (378909) : All workers exited. Exiting... (0)
Oct 07 14:35:21 compute-0 systemd[1]: libpod-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope: Deactivated successfully.
Oct 07 14:35:21 compute-0 conmon[378905]: conmon a2e5ff7d139732459641 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope/container/memory.events
Oct 07 14:35:21 compute-0 podman[379080]: 2025-10-07 14:35:21.389185751 +0000 UTC m=+0.048050056 container died a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222-userdata-shm.mount: Deactivated successfully.
Oct 07 14:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d8f73fee24f065dab00f339734911ef1ec093684fe1fea8ceea6e76dc54383b-merged.mount: Deactivated successfully.
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.427 2 DEBUG nova.virt.libvirt.vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:56Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.428 2 DEBUG nova.network.os_vif_util [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.429 2 DEBUG nova.network.os_vif_util [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.430 2 DEBUG os_vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:35:21 compute-0 podman[379080]: 2025-10-07 14:35:21.432335143 +0000 UTC m=+0.091199448 container cleanup a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647fc35d-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:35:21 compute-0 systemd[1]: libpod-conmon-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope: Deactivated successfully.
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.443 2 INFO os_vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5')
Oct 07 14:35:21 compute-0 podman[379112]: 2025-10-07 14:35:21.502033315 +0000 UTC m=+0.044294414 container remove a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e779ecf5-669c-4165-8510-f9ffda582114]: (4, ('Tue Oct  7 02:35:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 (a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222)\na2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222\nTue Oct  7 02:35:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 (a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222)\na2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.510 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3f1d55-7aec-43f4-a687-5daa43c06b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.511 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf76b87a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:21 compute-0 kernel: tapdf76b87a-d0: left promiscuous mode
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.533 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[50a30880-4f97-4b7a-a6c0-140a8981bf43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.568 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f87af27-e8ed-4a79-85ac-5a7045b4de33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.570 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8195ce4f-4884-47e4-8070-97aa47b21d61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83721fb5-8566-4cfc-a8c6-0f48e0055d91]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827804, 'reachable_time': 22666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379145, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf76b87a\x2dda1a\x2d4438\x2daff3\x2d46b99df6a681.mount: Deactivated successfully.
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.601 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:35:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.601 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9b8e9b-dd3c-46a4-8d17-eac2748d77c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.818 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.820 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.850 2 INFO nova.virt.libvirt.driver [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deleting instance files /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12_del
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.851 2 INFO nova.virt.libvirt.driver [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deletion of /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12_del complete
Oct 07 14:35:21 compute-0 ceph-mon[74295]: pgmap v2187: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.1 MiB/s wr, 125 op/s
Oct 07 14:35:21 compute-0 nova_compute[259550]: 2025-10-07 14:35:21.977 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.076 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.076 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.081 2 INFO nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.081 2 DEBUG oslo.service.loopingcall [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.082 2 DEBUG nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.082 2 DEBUG nova.network.neutron [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.150 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.152 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.152 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.193 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.202 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.203 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.212 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.212 2 INFO nova.compute.claims [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.687 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:35:22
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Oct 07 14:35:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:35:22 compute-0 nova_compute[259550]: 2025-10-07 14:35:22.762 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:35:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:35:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.242 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.249 2 DEBUG nova.compute.provider_tree [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.303 2 DEBUG nova.scheduler.client.report [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.412 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.413 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.416 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.423 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.423 2 INFO nova.compute.claims [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.912 2 DEBUG nova.network.neutron [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.919 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:35:23 compute-0 nova_compute[259550]: 2025-10-07 14:35:23.920 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:35:23 compute-0 ceph-mon[74295]: pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct 07 14:35:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/700200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:24 compute-0 nova_compute[259550]: 2025-10-07 14:35:24.134 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:35:24 compute-0 nova_compute[259550]: 2025-10-07 14:35:24.176 2 INFO nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 2.09 seconds to deallocate network for instance.
Oct 07 14:35:24 compute-0 nova_compute[259550]: 2025-10-07 14:35:24.299 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:35:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 214 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.9 MiB/s wr, 82 op/s
Oct 07 14:35:24 compute-0 nova_compute[259550]: 2025-10-07 14:35:24.813 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.015 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.017 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.017 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating image(s)
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.042 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.064 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.083 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.086 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.155 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.156 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.156 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.157 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.176 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.179 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 614c22a4-9342-4037-adb5-71c3375b8553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.218 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.219 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.220 2 WARNING nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received unexpected event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with vm_state deleted and task_state None.
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.221 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-deleted-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.499 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 614c22a4-9342-4037-adb5-71c3375b8553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.566 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.670 2 DEBUG nova.objects.instance [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Ensure instance console log exists: /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.754 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.754 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.792 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:25 compute-0 nova_compute[259550]: 2025-10-07 14:35:25.828 2 DEBUG nova.policy [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:35:25 compute-0 ceph-mon[74295]: pgmap v2189: 305 pgs: 305 active+clean; 214 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.9 MiB/s wr, 82 op/s
Oct 07 14:35:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437884889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.263 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.269 2 DEBUG nova.compute.provider_tree [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:35:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.481 2 DEBUG nova.scheduler.client.report [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.762 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.763 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.766 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:26 compute-0 nova_compute[259550]: 2025-10-07 14:35:26.893 2 DEBUG oslo_concurrency.processutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/437884889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:27 compute-0 podman[379358]: 2025-10-07 14:35:27.06724062 +0000 UTC m=+0.055790882 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:35:27 compute-0 podman[379359]: 2025-10-07 14:35:27.098815903 +0000 UTC m=+0.085022732 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.109 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.110 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.204 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.292 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:35:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338717851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.376 2 DEBUG oslo_concurrency.processutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.381 2 DEBUG nova.compute.provider_tree [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.441 2 DEBUG nova.policy [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.470 2 DEBUG nova.scheduler.client.report [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.546 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.547 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.548 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating image(s)
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.570 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.595 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.619 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.624 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.658 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully created port: 3eb614e8-3f14-4375-bbe9-34facd5bce52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.662 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.696 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.697 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.697 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.698 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.718 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.722 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.760 2 INFO nova.scheduler.client.report [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance a5f57fca-2121-432c-b000-2fd92f5c1b12
Oct 07 14:35:27 compute-0 nova_compute[259550]: 2025-10-07 14:35:27.853 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:27 compute-0 ceph-mon[74295]: pgmap v2190: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 07 14:35:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2338717851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.437 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully created port: e3da1022-6830-4557-9992-ffd8ec07a599 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.513 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Successfully created port: 84baaee6-3f89-4d61-aaea-507e46e65618 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.801 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.864 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.961 2 DEBUG nova.objects.instance [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.995 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.995 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ensure instance console log exists: /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:28 compute-0 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:29 compute-0 nova_compute[259550]: 2025-10-07 14:35:29.904 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully updated port: 3eb614e8-3f14-4375-bbe9-34facd5bce52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:35:29 compute-0 ceph-mon[74295]: pgmap v2191: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG nova.compute.manager [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG nova.compute.manager [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.044 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.044 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.249 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Successfully updated port: 84baaee6-3f89-4d61-aaea-507e46e65618 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:35:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 273 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 MiB/s wr, 59 op/s
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.358 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.358 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.359 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.525 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:35:30 compute-0 nova_compute[259550]: 2025-10-07 14:35:30.737 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:35:31 compute-0 nova_compute[259550]: 2025-10-07 14:35:31.025 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:31 compute-0 nova_compute[259550]: 2025-10-07 14:35:31.181 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:31 compute-0 ovn_controller[151684]: 2025-10-07T14:35:31Z|01136|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:35:31 compute-0 ovn_controller[151684]: 2025-10-07T14:35:31Z|01137|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:35:31 compute-0 ovn_controller[151684]: 2025-10-07T14:35:31Z|01138|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct 07 14:35:31 compute-0 nova_compute[259550]: 2025-10-07 14:35:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:31 compute-0 nova_compute[259550]: 2025-10-07 14:35:31.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:31.755 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:31 compute-0 nova_compute[259550]: 2025-10-07 14:35:31.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:31.756 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:35:31 compute-0 ceph-mon[74295]: pgmap v2192: 305 pgs: 305 active+clean; 273 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 MiB/s wr, 59 op/s
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 277 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.446 2 DEBUG nova.compute.manager [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.447 2 DEBUG nova.compute.manager [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.447 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002211684817998307 of space, bias 1.0, pg target 0.6635054453994921 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:35:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.623 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:35:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:35:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:35:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.790 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance network_info: |[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.794 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start _get_guest_xml network_info=[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.798 2 WARNING nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.841 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.842 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.846 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.846 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.850 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.853 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:32 compute-0 nova_compute[259550]: 2025-10-07 14:35:32.887 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully updated port: e3da1022-6830-4557-9992-ffd8ec07a599 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:35:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:35:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:35:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:35:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846551382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.346 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.366 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.369 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:35:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:35:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856261765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.844 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.846 2 DEBUG nova.virt.libvirt.vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.847 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.848 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.849 2 DEBUG nova.objects.instance [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.876 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.932 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <uuid>e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</uuid>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <name>instance-00000071</name>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2036986690</nova:name>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:35:32</nova:creationTime>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <nova:port uuid="84baaee6-3f89-4d61-aaea-507e46e65618">
Oct 07 14:35:33 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <system>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="serial">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="uuid">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </system>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <os>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </os>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <features>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </features>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk">
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config">
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:35:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:84:dc:c4"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <target dev="tap84baaee6-3f"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log" append="off"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <video>
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </video>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:35:33 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:35:33 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:35:33 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:35:33 compute-0 nova_compute[259550]: </domain>
Oct 07 14:35:33 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.933 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Preparing to wait for external event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.935 2 DEBUG nova.virt.libvirt.vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.935 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG os_vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84baaee6-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84baaee6-3f, col_values=(('external_ids', {'iface-id': '84baaee6-3f89-4d61-aaea-507e46e65618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dc:c4', 'vm-uuid': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:33 compute-0 NetworkManager[44949]: <info>  [1759847733.9793] manager: (tap84baaee6-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.986 2 INFO os_vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.989 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.989 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.991 2 INFO nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Terminating instance
Oct 07 14:35:33 compute-0 nova_compute[259550]: 2025-10-07 14:35:33.992 2 DEBUG nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:35:34 compute-0 ceph-mon[74295]: pgmap v2193: 305 pgs: 305 active+clean; 277 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 07 14:35:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/846551382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2856261765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.109 2 DEBUG nova.compute.manager [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.110 2 DEBUG nova.compute.manager [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:34 compute-0 kernel: tap13fb3ec4-00 (unregistering): left promiscuous mode
Oct 07 14:35:34 compute-0 NetworkManager[44949]: <info>  [1759847734.1185] device (tap13fb3ec4-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:35:34 compute-0 ovn_controller[151684]: 2025-10-07T14:35:34Z|01139|binding|INFO|Releasing lport 13fb3ec4-0080-406c-8cea-6620016c0513 from this chassis (sb_readonly=0)
Oct 07 14:35:34 compute-0 ovn_controller[151684]: 2025-10-07T14:35:34Z|01140|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 down in Southbound
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 ovn_controller[151684]: 2025-10-07T14:35:34Z|01141|binding|INFO|Removing iface tap13fb3ec4-00 ovn-installed in OVS
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 07 14:35:34 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 15.955s CPU time.
Oct 07 14:35:34 compute-0 systemd-machined[214580]: Machine qemu-136-instance-0000006d terminated.
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.230 2 INFO nova.virt.libvirt.driver [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance destroyed successfully.
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.230 2 DEBUG nova.objects.instance [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.250 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:48:df 10.100.0.13'], port_security=['fa:16:3e:a7:48:df 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7aecf35-a2bc-4233-a6ec-98effd9bc888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf60a0a-2571-43f1-914f-0f00c5d191c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=13fb3ec4-0080-406c-8cea-6620016c0513) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.251 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 13fb3ec4-0080-406c-8cea-6620016c0513 in datapath 8e6e03c6-002b-464f-aea7-5ae708e3e5dc unbound from our chassis
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.252 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e6e03c6-002b-464f-aea7-5ae708e3e5dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05fbb673-f320-49cf-91ee-6256dfdcd485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.254 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc namespace which is not needed anymore
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.254 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.255 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.255 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:84:dc:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.255 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Using config drive
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.281 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.321 2 DEBUG nova.virt.libvirt.vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:06Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.322 2 DEBUG nova.network.os_vif_util [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.322 2 DEBUG nova.network.os_vif_util [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.323 2 DEBUG os_vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13fb3ec4-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.334 2 INFO os_vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00')
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : haproxy version is 2.8.14-c23fe91
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : path to executable is /usr/sbin/haproxy
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : Exiting Master process...
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : Exiting Master process...
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [ALERT]    (376417) : Current worker (376419) exited with code 143 (Terminated)
Oct 07 14:35:34 compute-0 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : All workers exited. Exiting... (0)
Oct 07 14:35:34 compute-0 systemd[1]: libpod-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope: Deactivated successfully.
Oct 07 14:35:34 compute-0 podman[379708]: 2025-10-07 14:35:34.406460056 +0000 UTC m=+0.064629438 container died 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db-userdata-shm.mount: Deactivated successfully.
Oct 07 14:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce9d977b3319507425402d4f816a2d462fb60a6cead3d48e6a5a2ead545ba35-merged.mount: Deactivated successfully.
Oct 07 14:35:34 compute-0 podman[379708]: 2025-10-07 14:35:34.475695355 +0000 UTC m=+0.133864717 container cleanup 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:35:34 compute-0 systemd[1]: libpod-conmon-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope: Deactivated successfully.
Oct 07 14:35:34 compute-0 podman[379754]: 2025-10-07 14:35:34.542453309 +0000 UTC m=+0.046971246 container remove 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[581bb811-0b8b-43e9-9f71-67593f14fb38]: (4, ('Tue Oct  7 02:35:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc (342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db)\n342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db\nTue Oct  7 02:35:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc (342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db)\n342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b512424c-5562-40f4-8ceb-17109264ad4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.554 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e6e03c6-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 kernel: tap8e6e03c6-00: left promiscuous mode
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 nova_compute[259550]: 2025-10-07 14:35:34.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.581 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5f16eb-f0e0-4995-9da2-ceeb1783a8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.606 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b40dc7-fb71-4f45-9ccb-204054c390e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.608 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fbf4e4-ea24-4aca-9186-7562198230bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8121df11-a8d6-4f27-97c0-d2da1c4ca0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822748, 'reachable_time': 29587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379773, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e6e03c6\x2d002b\x2d464f\x2daea7\x2d5ae708e3e5dc.mount: Deactivated successfully.
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.629 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:35:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.629 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ea914-354d-4f71-955c-2f5e4dedb4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.036 2 INFO nova.virt.libvirt.driver [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deleting instance files /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_del
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.037 2 INFO nova.virt.libvirt.driver [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deletion of /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_del complete
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.134 2 INFO nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG oslo.service.loopingcall [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG nova.network.neutron [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.300 2 DEBUG nova.compute.manager [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.300 2 DEBUG nova.compute.manager [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-e3da1022-6830-4557-9992-ffd8ec07a599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.301 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.802 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.802 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.807 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating config drive at /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.812 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xcctgs_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.940 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.956 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xcctgs_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.981 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:35 compute-0 nova_compute[259550]: 2025-10-07 14:35:35.984 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:36 compute-0 ceph-mon[74295]: pgmap v2194: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.198 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.200 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting local config drive /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config because it was imported into RBD.
Oct 07 14:35:36 compute-0 kernel: tap84baaee6-3f: entered promiscuous mode
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.2668] manager: (tap84baaee6-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Oct 07 14:35:36 compute-0 systemd-udevd[379655]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:35:36 compute-0 ovn_controller[151684]: 2025-10-07T14:35:36Z|01142|binding|INFO|Claiming lport 84baaee6-3f89-4d61-aaea-507e46e65618 for this chassis.
Oct 07 14:35:36 compute-0 ovn_controller[151684]: 2025-10-07T14:35:36Z|01143|binding|INFO|84baaee6-3f89-4d61-aaea-507e46e65618: Claiming fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.2813] device (tap84baaee6-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.2826] device (tap84baaee6-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:35:36 compute-0 ovn_controller[151684]: 2025-10-07T14:35:36Z|01144|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 ovn-installed in OVS
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 systemd-machined[214580]: New machine qemu-140-instance-00000071.
Oct 07 14:35:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 07 14:35:36 compute-0 ovn_controller[151684]: 2025-10-07T14:35:36Z|01145|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 up in Southbound
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 bound to our chassis
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847721.3187284, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.321 2 INFO nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Stopped (Lifecycle Event)
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:35:36 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-00000071.
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.334 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2454b0-e8e6-42c4-8f16-bea91ea4be5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.335 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap002ae4bb-01 in ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.337 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap002ae4bb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4095d8e6-6b4d-4775-9a3d-c32208edbf05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.338 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92728d5d-4591-4a29-81e1-d3f7f031a307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.350 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0f04b6-3cb9-4e01-89a8-f2dbe7a39100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5c7ee-963e-487e-9b84-9b0d76fc8380]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.410 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a3156bd7-ec9f-4f83-bfe9-56174efa8b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.4174] manager: (tap002ae4bb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.417 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9482e5ba-a51a-4dd4-878f-7703012b1ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.436 2 DEBUG nova.compute.manager [None req-8ceadb25-3711-4919-8e48-9d2b054c16ba - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4066e5-f5f3-4532-a5c4-a750ea80f51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.460 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9dfb40-f58e-4971-ba04-69f27cc7446f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.4852] device (tap002ae4bb-00): carrier: link connected
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.496 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2916c81d-05c2-4eb5-a2b0-6a04a7e14374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a429fb-06f4-4770-9c97-dcf9b4b87908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832004, 'reachable_time': 29221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379860, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.533 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[474247b8-7b5d-437c-a67b-d0e1a9de139b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:93fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 832004, 'tstamp': 832004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379861, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fab6e0bb-1050-46d4-bf54-2c2f8466f1bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832004, 'reachable_time': 29221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379862, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c01902c-a129-4154-9d39-e94d80eaaabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85cc9396-d414-424c-b428-08aa4bbc08aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.646 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.646 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap002ae4bb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 NetworkManager[44949]: <info>  [1759847736.6496] manager: (tap002ae4bb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct 07 14:35:36 compute-0 kernel: tap002ae4bb-00: entered promiscuous mode
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.653 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap002ae4bb-00, col_values=(('external_ids', {'iface-id': '9b384c3a-3862-48f5-9fa0-a30ca1bb30cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:36 compute-0 ovn_controller[151684]: 2025-10-07T14:35:36Z|01146|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 nova_compute[259550]: 2025-10-07 14:35:36.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.669 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8531c9-1d47-4738-93e9-ef336c1caf6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.672 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:35:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.673 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'env', 'PROCESS_TAG=haproxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/002ae4bb-0f71-4b57-99ae-0bfd304fb458.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:35:37 compute-0 podman[379936]: 2025-10-07 14:35:37.102579567 +0000 UTC m=+0.088754882 container create 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:35:37 compute-0 podman[379936]: 2025-10-07 14:35:37.045354658 +0000 UTC m=+0.031529973 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:35:37 compute-0 systemd[1]: Started libpod-conmon-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope.
Oct 07 14:35:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa396a46fc4f81512c267e5f20233b50cdd5b2d225de55dfebc3c6e7a24f58e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:37 compute-0 podman[379936]: 2025-10-07 14:35:37.199345143 +0000 UTC m=+0.185520458 container init 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:35:37 compute-0 podman[379936]: 2025-10-07 14:35:37.205908528 +0000 UTC m=+0.192083823 container start 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.213 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847737.2126427, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Started (Lifecycle Event)
Oct 07 14:35:37 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : New worker (379958) forked
Oct 07 14:35:37 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : Loading success.
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.252 2 DEBUG nova.network.neutron [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.309 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847737.2130566, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.309 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Paused (Lifecycle Event)
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.330 2 INFO nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 2.19 seconds to deallocate network for instance.
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.337 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.342 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.405 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.426 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.426 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.519 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.520 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.551 2 DEBUG oslo_concurrency.processutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.594 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.776 2 DEBUG nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-deleted-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.776 2 INFO nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Neutron deleted interface 13fb3ec4-0080-406c-8cea-6620016c0513; detaching it from the instance and deleting it from the info cache
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.776 2 DEBUG nova.network.neutron [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:37 compute-0 nova_compute[259550]: 2025-10-07 14:35:37.871 2 DEBUG nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Detach interface failed, port_id=13fb3ec4-0080-406c-8cea-6620016c0513, reason: Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:35:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423252125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.028 2 DEBUG oslo_concurrency.processutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.035 2 DEBUG nova.compute.provider_tree [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.077 2 DEBUG nova.scheduler.client.report [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.085 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:38 compute-0 ceph-mon[74295]: pgmap v2195: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 07 14:35:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2423252125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.134 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.142 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.143 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance network_info: |[{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.143 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.144 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port e3da1022-6830-4557-9992-ffd8ec07a599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.148 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start _get_guest_xml network_info=[{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.151 2 WARNING nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.155 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.156 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.165 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.201 2 INFO nova.scheduler.client.report [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa
Oct 07 14:35:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 MiB/s wr, 56 op/s
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.339 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:35:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108979624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.618 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.644 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:38 compute-0 nova_compute[259550]: 2025-10-07 14:35:38.649 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:35:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672729564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.135 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.137 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.137 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.138 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.141 2 DEBUG nova.objects.instance [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.210 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <uuid>614c22a4-9342-4037-adb5-71c3375b8553</uuid>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <name>instance-00000070</name>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1065431809</nova:name>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:35:38</nova:creationTime>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:port uuid="3eb614e8-3f14-4375-bbe9-34facd5bce52">
Oct 07 14:35:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <nova:port uuid="e3da1022-6830-4557-9992-ffd8ec07a599">
Oct 07 14:35:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2e:5c25" ipVersion="6"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="serial">614c22a4-9342-4037-adb5-71c3375b8553</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="uuid">614c22a4-9342-4037-adb5-71c3375b8553</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/614c22a4-9342-4037-adb5-71c3375b8553_disk">
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/614c22a4-9342-4037-adb5-71c3375b8553_disk.config">
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:35:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:02:68:46"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <target dev="tap3eb614e8-3f"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:2e:5c:25"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <target dev="tape3da1022-68"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/console.log" append="off"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:35:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:35:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:35:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:35:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:35:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.211 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Preparing to wait for external event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Preparing to wait for external event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.214 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.214 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.215 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.215 2 DEBUG os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4108979624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eb614e8-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eb614e8-3f, col_values=(('external_ids', {'iface-id': '3eb614e8-3f14-4375-bbe9-34facd5bce52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:68:46', 'vm-uuid': '614c22a4-9342-4037-adb5-71c3375b8553'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 NetworkManager[44949]: <info>  [1759847739.2249] manager: (tap3eb614e8-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.233 2 INFO os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f')
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.234 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3da1022-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3da1022-68, col_values=(('external_ids', {'iface-id': 'e3da1022-6830-4557-9992-ffd8ec07a599', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:5c:25', 'vm-uuid': '614c22a4-9342-4037-adb5-71c3375b8553'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 NetworkManager[44949]: <info>  [1759847739.2423] manager: (tape3da1022-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.253 2 INFO os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68')
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.564 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:02:68:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:2e:5c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.566 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Using config drive
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.594 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:39.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.988 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating config drive at /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config
Oct 07 14:35:39 compute-0 nova_compute[259550]: 2025-10-07 14:35:39.995 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp96iin_w4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.144 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp96iin_w4" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.174 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.179 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config 614c22a4-9342-4037-adb5-71c3375b8553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:40 compute-0 ceph-mon[74295]: pgmap v2196: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 MiB/s wr, 56 op/s
Oct 07 14:35:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2672729564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:35:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.549 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config 614c22a4-9342-4037-adb5-71c3375b8553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.550 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deleting local config drive /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config because it was imported into RBD.
Oct 07 14:35:40 compute-0 kernel: tap3eb614e8-3f: entered promiscuous mode
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6038] manager: (tap3eb614e8-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01147|binding|INFO|Claiming lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 for this chassis.
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01148|binding|INFO|3eb614e8-3f14-4375-bbe9-34facd5bce52: Claiming fa:16:3e:02:68:46 10.100.0.10
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.620 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:68:46 10.100.0.10'], port_security=['fa:16:3e:02:68:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3eb614e8-3f14-4375-bbe9-34facd5bce52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.621 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb614e8-3f14-4375-bbe9-34facd5bce52 in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 bound to our chassis
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6243] manager: (tape3da1022-68): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.623 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7
Oct 07 14:35:40 compute-0 kernel: tape3da1022-68: entered promiscuous mode
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01149|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 ovn-installed in OVS
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01150|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 up in Southbound
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01151|if_status|INFO|Dropped 3 log messages in last 50 seconds (most recently, 49 seconds ago) due to excessive rate
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01152|if_status|INFO|Not updating pb chassis for e3da1022-6830-4557-9992-ffd8ec07a599 now as sb is readonly
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.638 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46005e0e-abb3-4bb2-b560-c30c4e598f28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 systemd-udevd[380152]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:35:40 compute-0 systemd-udevd[380154]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:35:40 compute-0 systemd-machined[214580]: New machine qemu-141-instance-00000070.
Oct 07 14:35:40 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.671 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[663d25d4-b5d7-4f0a-a0ef-0057b2035357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.675 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fcd808-eb9a-4355-9da5-db9369af721f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6784] device (tape3da1022-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6795] device (tape3da1022-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6821] device (tap3eb614e8-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:35:40 compute-0 NetworkManager[44949]: <info>  [1759847740.6830] device (tap3eb614e8-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.707 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31157d49-62ec-42f8-90d0-2a3c8d35ac13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01153|binding|INFO|Claiming lport e3da1022-6830-4557-9992-ffd8ec07a599 for this chassis.
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01154|binding|INFO|e3da1022-6830-4557-9992-ffd8ec07a599: Claiming fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01155|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 ovn-installed in OVS
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf1395-a5b0-4441-9f4f-02aa30f5d9ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380175, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 podman[380125]: 2025-10-07 14:35:40.736006864 +0000 UTC m=+0.101139303 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.745 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], port_security=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:5c25/64', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e3da1022-6830-4557-9992-ffd8ec07a599) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:35:40 compute-0 podman[380128]: 2025-10-07 14:35:40.746999968 +0000 UTC m=+0.110606187 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true)
Oct 07 14:35:40 compute-0 ovn_controller[151684]: 2025-10-07T14:35:40Z|01156|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 up in Southbound
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf562eaf-6f33-4f13-97e2-7a46c22883b3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827532, 'tstamp': 827532}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380185, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827535, 'tstamp': 827535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380185, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.758 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e3da1022-6830-4557-9992-ffd8ec07a599 in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 bound to our chassis
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.759 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e788f5-3103-404a-a464-617f40373be1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b1bc16-b48e-40a5-954d-0358d03f027b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.805 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[60476f6e-10b0-439a-a1a0-9ec022ecd34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.840 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[525ae20e-12df-422a-82b8-135ee9f3ab8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58592f7b-5010-4adc-ba86-5533ac75c421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380193, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.884 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[456ea80d-3901-4abc-9e6d-6dbb0c03a82e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap673dd6ba-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827732, 'tstamp': 827732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380194, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.885 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.888 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.888 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.889 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:35:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.889 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.978 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port e3da1022-6830-4557-9992-ffd8ec07a599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:35:40 compute-0 nova_compute[259550]: 2025-10-07 14:35:40.979 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.071 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.264 2 DEBUG nova.compute.manager [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.264 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:41 compute-0 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG nova.compute.manager [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Processing event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:35:42 compute-0 ceph-mon[74295]: pgmap v2197: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1023 KiB/s wr, 63 op/s
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.373 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847742.3732715, 614c22a4-9342-4037-adb5-71c3375b8553 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.374 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Started (Lifecycle Event)
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.412 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.416 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847742.3756316, 614c22a4-9342-4037-adb5-71c3375b8553 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.416 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Paused (Lifecycle Event)
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.448 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.451 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:35:42 compute-0 nova_compute[259550]: 2025-10-07 14:35:42.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:35:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG nova.compute.manager [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.217 2 DEBUG nova.compute.manager [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Processing event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.217 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.221 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847743.221098, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.221 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Resumed (Lifecycle Event)
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.222 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.226 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance spawned successfully.
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.226 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.249 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.255 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.259 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.259 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.261 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.286 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.320 2 INFO nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 15.77 seconds to spawn the instance on the hypervisor.
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.321 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:43 compute-0 ceph-mon[74295]: pgmap v2198: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1023 KiB/s wr, 63 op/s
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.379 2 DEBUG nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.379 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No event matching network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 in dict_keys([('network-vif-plugged', 'e3da1022-6830-4557-9992-ffd8ec07a599')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.380 2 WARNING nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with vm_state building and task_state spawning.
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.395 2 INFO nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 20.74 seconds to build instance.
Oct 07 14:35:43 compute-0 nova_compute[259550]: 2025-10-07 14:35:43.430 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:44 compute-0 ovn_controller[151684]: 2025-10-07T14:35:44Z|01157|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:35:44 compute-0 nova_compute[259550]: 2025-10-07 14:35:44.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:44 compute-0 ovn_controller[151684]: 2025-10-07T14:35:44Z|01158|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct 07 14:35:44 compute-0 ovn_controller[151684]: 2025-10-07T14:35:44Z|01159|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:35:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 712 KiB/s wr, 53 op/s
Oct 07 14:35:44 compute-0 nova_compute[259550]: 2025-10-07 14:35:44.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.311 2 WARNING nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state None.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Processing event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.313 2 WARNING nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with vm_state building and task_state spawning.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.313 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.316 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847745.3164804, 614c22a4-9342-4037-adb5-71c3375b8553 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.317 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Resumed (Lifecycle Event)
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.319 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.323 2 INFO nova.virt.libvirt.driver [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance spawned successfully.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.324 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.346 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.351 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.355 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.356 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.356 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.357 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.357 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.358 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:35:45 compute-0 ceph-mon[74295]: pgmap v2199: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 712 KiB/s wr, 53 op/s
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.387 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.456 2 INFO nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 20.44 seconds to spawn the instance on the hypervisor.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.456 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.560 2 INFO nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 23.38 seconds to build instance.
Oct 07 14:35:45 compute-0 nova_compute[259550]: 2025-10-07 14:35:45.587 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:46 compute-0 sudo[380238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:46 compute-0 sudo[380238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:46 compute-0 sudo[380238]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:46 compute-0 sudo[380263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:35:46 compute-0 sudo[380263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:46 compute-0 sudo[380263]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:46 compute-0 sudo[380288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:46 compute-0 sudo[380288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:46 compute-0 sudo[380288]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:46 compute-0 sudo[380313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:35:46 compute-0 sudo[380313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 28 KiB/s wr, 62 op/s
Oct 07 14:35:46 compute-0 nova_compute[259550]: 2025-10-07 14:35:46.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:46 compute-0 podman[380408]: 2025-10-07 14:35:46.738179165 +0000 UTC m=+0.070112775 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:35:46 compute-0 podman[380408]: 2025-10-07 14:35:46.833211404 +0000 UTC m=+0.165145004 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 14:35:46 compute-0 nova_compute[259550]: 2025-10-07 14:35:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:47 compute-0 nova_compute[259550]: 2025-10-07 14:35:47.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:47 compute-0 nova_compute[259550]: 2025-10-07 14:35:47.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:47 compute-0 ceph-mon[74295]: pgmap v2200: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 28 KiB/s wr, 62 op/s
Oct 07 14:35:47 compute-0 sudo[380313]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:35:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:35:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:47 compute-0 sudo[380559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:47 compute-0 sudo[380559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:47 compute-0 sudo[380559]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:47 compute-0 sudo[380584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:35:47 compute-0 sudo[380584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:47 compute-0 sudo[380584]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:47 compute-0 sudo[380609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:47 compute-0 sudo[380609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:47 compute-0 sudo[380609]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:47 compute-0 sudo[380634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:35:47 compute-0 sudo[380634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:48 compute-0 sudo[380634]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 28 KiB/s wr, 56 op/s
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:48 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 935d1af2-2616-44f0-8341-153e89f83475 does not exist
Oct 07 14:35:48 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cf3cb86b-9ae5-4edf-bc73-b5891b47bb76 does not exist
Oct 07 14:35:48 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9903ea43-8c97-4719-85f1-69a1806dba57 does not exist
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:35:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:35:48 compute-0 sudo[380688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:48 compute-0 sudo[380688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:48 compute-0 sudo[380688]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:48 compute-0 sudo[380713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:35:48 compute-0 sudo[380713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:48 compute-0 sudo[380713]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:35:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:35:48 compute-0 sudo[380738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:48 compute-0 sudo[380738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:48 compute-0 sudo[380738]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:48 compute-0 sudo[380763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:35:48 compute-0 sudo[380763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:48 compute-0 podman[380827]: 2025-10-07 14:35:48.939395722 +0000 UTC m=+0.038981943 container create 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:48 compute-0 nova_compute[259550]: 2025-10-07 14:35:48.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:48 compute-0 nova_compute[259550]: 2025-10-07 14:35:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:48 compute-0 nova_compute[259550]: 2025-10-07 14:35:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:48 compute-0 nova_compute[259550]: 2025-10-07 14:35:48.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:35:48 compute-0 systemd[1]: Started libpod-conmon-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope.
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:48.921777632 +0000 UTC m=+0.021363873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:49.040656708 +0000 UTC m=+0.140242929 container init 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:49.050332087 +0000 UTC m=+0.149918308 container start 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:49.055083714 +0000 UTC m=+0.154669945 container attach 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:35:49 compute-0 keen_goldstine[380842]: 167 167
Oct 07 14:35:49 compute-0 systemd[1]: libpod-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope: Deactivated successfully.
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:49.057498858 +0000 UTC m=+0.157085089 container died 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf0da66ad63c6b8759bb523da201b148e2f8639a1c5b9ba3d80d7746e84531d-merged.mount: Deactivated successfully.
Oct 07 14:35:49 compute-0 podman[380827]: 2025-10-07 14:35:49.105892911 +0000 UTC m=+0.205479132 container remove 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:35:49 compute-0 systemd[1]: libpod-conmon-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope: Deactivated successfully.
Oct 07 14:35:49 compute-0 nova_compute[259550]: 2025-10-07 14:35:49.229 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847734.2274334, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:35:49 compute-0 nova_compute[259550]: 2025-10-07 14:35:49.231 2 INFO nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Stopped (Lifecycle Event)
Oct 07 14:35:49 compute-0 nova_compute[259550]: 2025-10-07 14:35:49.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:49 compute-0 podman[380865]: 2025-10-07 14:35:49.318337508 +0000 UTC m=+0.040999406 container create f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:35:49 compute-0 systemd[1]: Started libpod-conmon-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope.
Oct 07 14:35:49 compute-0 podman[380865]: 2025-10-07 14:35:49.300711286 +0000 UTC m=+0.023373204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:49 compute-0 podman[380865]: 2025-10-07 14:35:49.442233168 +0000 UTC m=+0.164895096 container init f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:35:49 compute-0 podman[380865]: 2025-10-07 14:35:49.451188638 +0000 UTC m=+0.173850536 container start f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:35:49 compute-0 podman[380865]: 2025-10-07 14:35:49.457054994 +0000 UTC m=+0.179716922 container attach f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:49 compute-0 ceph-mon[74295]: pgmap v2201: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 28 KiB/s wr, 56 op/s
Oct 07 14:35:49 compute-0 nova_compute[259550]: 2025-10-07 14:35:49.520 2 DEBUG nova.compute.manager [None req-96b22911-0140-4d23-a28f-4cbd156d56af - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:35:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 131 op/s
Oct 07 14:35:50 compute-0 mystifying_hofstadter[380882]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:35:50 compute-0 mystifying_hofstadter[380882]: --> relative data size: 1.0
Oct 07 14:35:50 compute-0 mystifying_hofstadter[380882]: --> All data devices are unavailable
Oct 07 14:35:50 compute-0 systemd[1]: libpod-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Deactivated successfully.
Oct 07 14:35:50 compute-0 systemd[1]: libpod-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Consumed 1.069s CPU time.
Oct 07 14:35:50 compute-0 podman[380911]: 2025-10-07 14:35:50.634349412 +0000 UTC m=+0.035273453 container died f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2-merged.mount: Deactivated successfully.
Oct 07 14:35:50 compute-0 podman[380911]: 2025-10-07 14:35:50.731798266 +0000 UTC m=+0.132722307 container remove f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:35:50 compute-0 systemd[1]: libpod-conmon-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Deactivated successfully.
Oct 07 14:35:50 compute-0 sudo[380763]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:50 compute-0 sudo[380926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:50 compute-0 sudo[380926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:50 compute-0 sudo[380926]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:50 compute-0 sudo[380951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:35:50 compute-0 sudo[380951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:50 compute-0 sudo[380951]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:50 compute-0 sudo[380976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:50 compute-0 sudo[380976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:50 compute-0 sudo[380976]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:50 compute-0 nova_compute[259550]: 2025-10-07 14:35:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:51 compute-0 sudo[381001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:35:51 compute-0 sudo[381001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.384724683 +0000 UTC m=+0.043413161 container create 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:51 compute-0 systemd[1]: Started libpod-conmon-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope.
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.366107535 +0000 UTC m=+0.024796043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.500142576 +0000 UTC m=+0.158831074 container init 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.507480332 +0000 UTC m=+0.166168810 container start 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:35:51 compute-0 inspiring_stonebraker[381083]: 167 167
Oct 07 14:35:51 compute-0 systemd[1]: libpod-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope: Deactivated successfully.
Oct 07 14:35:51 compute-0 conmon[381083]: conmon 65028c5909f7072b2be8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope/container/memory.events
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.51450356 +0000 UTC m=+0.173192038 container attach 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.515328152 +0000 UTC m=+0.174016630 container died 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f64848eddc315c2ec427b88bb005cb38664cb1886389a4836826e2d53a1657-merged.mount: Deactivated successfully.
Oct 07 14:35:51 compute-0 podman[381066]: 2025-10-07 14:35:51.575010217 +0000 UTC m=+0.233698695 container remove 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:35:51 compute-0 systemd[1]: libpod-conmon-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope: Deactivated successfully.
Oct 07 14:35:51 compute-0 ceph-mon[74295]: pgmap v2202: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 131 op/s
Oct 07 14:35:51 compute-0 podman[381108]: 2025-10-07 14:35:51.782509432 +0000 UTC m=+0.044930292 container create 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:35:51 compute-0 systemd[1]: Started libpod-conmon-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope.
Oct 07 14:35:51 compute-0 podman[381108]: 2025-10-07 14:35:51.764373497 +0000 UTC m=+0.026794377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:51 compute-0 podman[381108]: 2025-10-07 14:35:51.8808679 +0000 UTC m=+0.143288760 container init 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:35:51 compute-0 podman[381108]: 2025-10-07 14:35:51.889461449 +0000 UTC m=+0.151882309 container start 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 14:35:51 compute-0 podman[381108]: 2025-10-07 14:35:51.894960786 +0000 UTC m=+0.157381676 container attach 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:51 compute-0 nova_compute[259550]: 2025-10-07 14:35:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:52 compute-0 nova_compute[259550]: 2025-10-07 14:35:52.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 145 op/s
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:35:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:35:52 compute-0 stoic_jang[381125]: {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     "0": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "devices": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "/dev/loop3"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             ],
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_name": "ceph_lv0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_size": "21470642176",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "name": "ceph_lv0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "tags": {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_name": "ceph",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.crush_device_class": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.encrypted": "0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_id": "0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.vdo": "0"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             },
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "vg_name": "ceph_vg0"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         }
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     ],
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     "1": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "devices": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "/dev/loop4"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             ],
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_name": "ceph_lv1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_size": "21470642176",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "name": "ceph_lv1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "tags": {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_name": "ceph",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.crush_device_class": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.encrypted": "0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_id": "1",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.vdo": "0"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             },
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "vg_name": "ceph_vg1"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         }
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     ],
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     "2": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "devices": [
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "/dev/loop5"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             ],
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_name": "ceph_lv2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_size": "21470642176",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "name": "ceph_lv2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "tags": {
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.cluster_name": "ceph",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.crush_device_class": "",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.encrypted": "0",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osd_id": "2",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:                 "ceph.vdo": "0"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             },
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "type": "block",
Oct 07 14:35:52 compute-0 stoic_jang[381125]:             "vg_name": "ceph_vg2"
Oct 07 14:35:52 compute-0 stoic_jang[381125]:         }
Oct 07 14:35:52 compute-0 stoic_jang[381125]:     ]
Oct 07 14:35:52 compute-0 stoic_jang[381125]: }
Oct 07 14:35:52 compute-0 systemd[1]: libpod-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope: Deactivated successfully.
Oct 07 14:35:52 compute-0 podman[381108]: 2025-10-07 14:35:52.75924935 +0000 UTC m=+1.021670220 container died 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58-merged.mount: Deactivated successfully.
Oct 07 14:35:52 compute-0 podman[381108]: 2025-10-07 14:35:52.832077806 +0000 UTC m=+1.094498666 container remove 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:35:52 compute-0 systemd[1]: libpod-conmon-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope: Deactivated successfully.
Oct 07 14:35:52 compute-0 sudo[381001]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:52 compute-0 sudo[381147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:52 compute-0 sudo[381147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:52 compute-0 sudo[381147]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:52 compute-0 sudo[381172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:35:53 compute-0 sudo[381172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:53 compute-0 sudo[381172]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:53 compute-0 sudo[381197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:53 compute-0 sudo[381197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:53 compute-0 sudo[381197]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:53 compute-0 sudo[381222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:35:53 compute-0 sudo[381222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.51970826 +0000 UTC m=+0.057049365 container create 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:35:53 compute-0 systemd[1]: Started libpod-conmon-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope.
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.492512963 +0000 UTC m=+0.029854088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.609771827 +0000 UTC m=+0.147112952 container init 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.616811425 +0000 UTC m=+0.154152530 container start 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.620784931 +0000 UTC m=+0.158126056 container attach 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:35:53 compute-0 systemd[1]: libpod-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope: Deactivated successfully.
Oct 07 14:35:53 compute-0 crazy_lovelace[381302]: 167 167
Oct 07 14:35:53 compute-0 conmon[381302]: conmon 46d6a00a3f02bcdedee6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope/container/memory.events
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.624193482 +0000 UTC m=+0.161534587 container died 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:35:53 compute-0 ceph-mon[74295]: pgmap v2203: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 145 op/s
Oct 07 14:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-37e434d956223c4270eae0ce816b6df2db82e597c54af1d7718653766549e77c-merged.mount: Deactivated successfully.
Oct 07 14:35:53 compute-0 podman[381286]: 2025-10-07 14:35:53.676202701 +0000 UTC m=+0.213543806 container remove 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:35:53 compute-0 systemd[1]: libpod-conmon-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope: Deactivated successfully.
Oct 07 14:35:53 compute-0 podman[381326]: 2025-10-07 14:35:53.916625126 +0000 UTC m=+0.066090977 container create 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:35:53 compute-0 systemd[1]: Started libpod-conmon-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope.
Oct 07 14:35:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:35:53 compute-0 podman[381326]: 2025-10-07 14:35:53.896748895 +0000 UTC m=+0.046214766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:35:54 compute-0 podman[381326]: 2025-10-07 14:35:54.001616247 +0000 UTC m=+0.151082118 container init 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:35:54 compute-0 podman[381326]: 2025-10-07 14:35:54.009627671 +0000 UTC m=+0.159093512 container start 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:35:54 compute-0 podman[381326]: 2025-10-07 14:35:54.032162183 +0000 UTC m=+0.181628064 container attach 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 136 op/s
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.423 2 DEBUG nova.compute.manager [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG nova.compute.manager [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:54 compute-0 nova_compute[259550]: 2025-10-07 14:35:54.425 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:55 compute-0 goofy_germain[381341]: {
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_id": 2,
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "type": "bluestore"
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     },
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_id": 1,
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "type": "bluestore"
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     },
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_id": 0,
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:35:55 compute-0 goofy_germain[381341]:         "type": "bluestore"
Oct 07 14:35:55 compute-0 goofy_germain[381341]:     }
Oct 07 14:35:55 compute-0 goofy_germain[381341]: }
Oct 07 14:35:55 compute-0 systemd[1]: libpod-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Deactivated successfully.
Oct 07 14:35:55 compute-0 systemd[1]: libpod-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Consumed 1.074s CPU time.
Oct 07 14:35:55 compute-0 podman[381374]: 2025-10-07 14:35:55.126170175 +0000 UTC m=+0.025640845 container died 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48-merged.mount: Deactivated successfully.
Oct 07 14:35:55 compute-0 podman[381374]: 2025-10-07 14:35:55.220843965 +0000 UTC m=+0.120314615 container remove 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:35:55 compute-0 systemd[1]: libpod-conmon-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Deactivated successfully.
Oct 07 14:35:55 compute-0 sudo[381222]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:35:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:35:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ccbf365b-d9a0-4af4-8ca3-7cac45b2dfab does not exist
Oct 07 14:35:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 93572faa-c4b9-4962-9919-dd185a73af5a does not exist
Oct 07 14:35:55 compute-0 sudo[381388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:35:55 compute-0 sudo[381388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:55 compute-0 sudo[381388]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:55 compute-0 sudo[381413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:35:55 compute-0 sudo[381413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:35:55 compute-0 sudo[381413]: pam_unix(sudo:session): session closed for user root
Oct 07 14:35:56 compute-0 ceph-mon[74295]: pgmap v2204: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 136 op/s
Oct 07 14:35:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:35:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 511 KiB/s wr, 131 op/s
Oct 07 14:35:56 compute-0 nova_compute[259550]: 2025-10-07 14:35:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.043 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:57 compute-0 ovn_controller[151684]: 2025-10-07T14:35:57Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:35:57 compute-0 ovn_controller[151684]: 2025-10-07T14:35:57Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112694520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.521 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:57 compute-0 podman[381462]: 2025-10-07 14:35:57.65887472 +0000 UTC m=+0.079751412 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:35:57 compute-0 podman[381461]: 2025-10-07 14:35:57.658887511 +0000 UTC m=+0.079365932 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.872 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.872 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.876 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.877 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.881 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 nova_compute[259550]: 2025-10-07 14:35:57.881 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:35:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.124 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.125 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3270MB free_disk=59.89509582519531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.144 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.145 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:35:58 compute-0 ceph-mon[74295]: pgmap v2205: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 511 KiB/s wr, 131 op/s
Oct 07 14:35:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4112694520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 511 KiB/s wr, 120 op/s
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.714 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 614c22a4-9342-4037-adb5-71c3375b8553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.716 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:35:58 compute-0 nova_compute[259550]: 2025-10-07 14:35:58.788 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.117 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:35:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:35:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488407076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.257 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.263 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:35:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/488407076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.470 2 DEBUG nova.compute.manager [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.471 2 DEBUG nova.compute.manager [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.471 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.472 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.472 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.563 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.661 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:35:59 compute-0 nova_compute[259550]: 2025-10-07 14:35:59.661 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:35:59 compute-0 ovn_controller[151684]: 2025-10-07T14:35:59Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:68:46 10.100.0.10
Oct 07 14:35:59 compute-0 ovn_controller[151684]: 2025-10-07T14:35:59Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:68:46 10.100.0.10
Oct 07 14:36:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.069 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:00 compute-0 nova_compute[259550]: 2025-10-07 14:36:00.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 262 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 07 14:36:00 compute-0 ceph-mon[74295]: pgmap v2206: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 511 KiB/s wr, 120 op/s
Oct 07 14:36:01 compute-0 ceph-mon[74295]: pgmap v2207: 305 pgs: 305 active+clean; 262 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.601 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.602 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.662 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.874 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.875 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:36:01 compute-0 nova_compute[259550]: 2025-10-07 14:36:01.938 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:02 compute-0 nova_compute[259550]: 2025-10-07 14:36:02.222 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:02 compute-0 nova_compute[259550]: 2025-10-07 14:36:02.222 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:02 compute-0 nova_compute[259550]: 2025-10-07 14:36:02.223 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:36:02 compute-0 nova_compute[259550]: 2025-10-07 14:36:02.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 271 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 155 op/s
Oct 07 14:36:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:03 compute-0 nova_compute[259550]: 2025-10-07 14:36:03.039 2 INFO nova.compute.manager [None req-6b7afdfc-e496-4075-a030-32b1bf31d697 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Get console output
Oct 07 14:36:03 compute-0 nova_compute[259550]: 2025-10-07 14:36:03.047 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:36:03 compute-0 ceph-mon[74295]: pgmap v2208: 305 pgs: 305 active+clean; 271 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 155 op/s
Oct 07 14:36:04 compute-0 nova_compute[259550]: 2025-10-07 14:36:04.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:36:05 compute-0 ceph-mon[74295]: pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:36:05 compute-0 nova_compute[259550]: 2025-10-07 14:36:05.538 2 INFO nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Rebuilding instance
Oct 07 14:36:05 compute-0 nova_compute[259550]: 2025-10-07 14:36:05.759 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.000 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.291 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_requests' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.349 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.411 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.540 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.619 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:36:06 compute-0 nova_compute[259550]: 2025-10-07 14:36:06.625 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:36:07 compute-0 nova_compute[259550]: 2025-10-07 14:36:07.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:07 compute-0 ceph-mon[74295]: pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 07 14:36:07 compute-0 nova_compute[259550]: 2025-10-07 14:36:07.669 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:08 compute-0 nova_compute[259550]: 2025-10-07 14:36:08.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:08 compute-0 nova_compute[259550]: 2025-10-07 14:36:08.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:36:08 compute-0 nova_compute[259550]: 2025-10-07 14:36:08.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 3.8 MiB/s wr, 121 op/s
Oct 07 14:36:09 compute-0 kernel: tap84baaee6-3f (unregistering): left promiscuous mode
Oct 07 14:36:09 compute-0 NetworkManager[44949]: <info>  [1759847769.2502] device (tap84baaee6-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01160|binding|INFO|Releasing lport 84baaee6-3f89-4d61-aaea-507e46e65618 from this chassis (sb_readonly=0)
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01161|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 down in Southbound
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01162|binding|INFO|Removing iface tap84baaee6-3f ovn-installed in OVS
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.304 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.306 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 unbound from our chassis
Oct 07 14:36:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.307 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:36:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7367f5-6ef0-4a8b-b861-30953ce67e61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.309 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace which is not needed anymore
Oct 07 14:36:09 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 07 14:36:09 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Consumed 14.835s CPU time.
Oct 07 14:36:09 compute-0 systemd-machined[214580]: Machine qemu-140-instance-00000071 terminated.
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01163|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01164|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct 07 14:36:09 compute-0 ovn_controller[151684]: 2025-10-07T14:36:09Z|01165|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 ceph-mon[74295]: pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 3.8 MiB/s wr, 121 op/s
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : haproxy version is 2.8.14-c23fe91
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : path to executable is /usr/sbin/haproxy
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : Exiting Master process...
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : Exiting Master process...
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [ALERT]    (379956) : Current worker (379958) exited with code 143 (Terminated)
Oct 07 14:36:09 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : All workers exited. Exiting... (0)
Oct 07 14:36:09 compute-0 systemd[1]: libpod-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope: Deactivated successfully.
Oct 07 14:36:09 compute-0 podman[381546]: 2025-10-07 14:36:09.494357649 +0000 UTC m=+0.083137252 container died 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.578 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.578 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.579 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.579 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.580 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.581 2 INFO nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Terminating instance
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.582 2 DEBUG nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.639 2 DEBUG nova.compute.manager [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG nova.compute.manager [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.644 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance shutdown successfully after 3 seconds.
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.649 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.653 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.654 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:03Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.655 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.655 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.656 2 DEBUG os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84baaee6-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:09 compute-0 nova_compute[259550]: 2025-10-07 14:36:09.666 2 INFO os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')
Oct 07 14:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624-userdata-shm.mount: Deactivated successfully.
Oct 07 14:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa396a46fc4f81512c267e5f20233b50cdd5b2d225de55dfebc3c6e7a24f58e-merged.mount: Deactivated successfully.
Oct 07 14:36:09 compute-0 podman[381546]: 2025-10-07 14:36:09.962091067 +0000 UTC m=+0.550870670 container cleanup 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:36:09 compute-0 systemd[1]: libpod-conmon-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope: Deactivated successfully.
Oct 07 14:36:10 compute-0 kernel: tap3eb614e8-3f (unregistering): left promiscuous mode
Oct 07 14:36:10 compute-0 NetworkManager[44949]: <info>  [1759847770.1392] device (tap3eb614e8-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01166|binding|INFO|Releasing lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 from this chassis (sb_readonly=0)
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01167|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 down in Southbound
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01168|binding|INFO|Removing iface tap3eb614e8-3f ovn-installed in OVS
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 kernel: tape3da1022-68 (unregistering): left promiscuous mode
Oct 07 14:36:10 compute-0 NetworkManager[44949]: <info>  [1759847770.1807] device (tape3da1022-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01169|binding|INFO|Releasing lport e3da1022-6830-4557-9992-ffd8ec07a599 from this chassis (sb_readonly=1)
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01170|binding|INFO|Removing iface tape3da1022-68 ovn-installed in OVS
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01171|if_status|INFO|Dropped 1 log messages in last 152 seconds (most recently, 152 seconds ago) due to excessive rate
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01172|if_status|INFO|Not setting lport e3da1022-6830-4557-9992-ffd8ec07a599 down as sb is readonly
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 ovn_controller[151684]: 2025-10-07T14:36:10Z|01173|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 down in Southbound
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.229 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:68:46 10.100.0.10'], port_security=['fa:16:3e:02:68:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3eb614e8-3f14-4375-bbe9-34facd5bce52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:10 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 07 14:36:10 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 15.621s CPU time.
Oct 07 14:36:10 compute-0 systemd-machined[214580]: Machine qemu-141-instance-00000070 terminated.
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.283 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], port_security=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:5c25/64', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e3da1022-6830-4557-9992-ffd8ec07a599) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 706 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Oct 07 14:36:10 compute-0 podman[381604]: 2025-10-07 14:36:10.383022795 +0000 UTC m=+0.399186439 container remove 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.390 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8af6c58b-f804-4ee0-ba79-9bb323ff8141]: (4, ('Tue Oct  7 02:36:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624)\n47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624\nTue Oct  7 02:36:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624)\n47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.392 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5283d62c-f8ae-43e1-bd59-60055daae5a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.393 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 kernel: tap002ae4bb-00: left promiscuous mode
Oct 07 14:36:10 compute-0 NetworkManager[44949]: <info>  [1759847770.4122] manager: (tape3da1022-68): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[679d527a-1b4f-4287-8b2e-6c89f64b91be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.432 2 INFO nova.virt.libvirt.driver [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance destroyed successfully.
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.432 2 DEBUG nova.objects.instance [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2fc603-58da-4c09-99f0-191c704fd85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.452 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0e38fe-053c-486f-ad86-2be3c849e7e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.467 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1f9c0b-947a-4d68-a0f9-488e8bbc8cc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831996, 'reachable_time': 20743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381646, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d002ae4bb\x2d0f71\x2d4b57\x2d99ae\x2d0bfd304fb458.mount: Deactivated successfully.
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.473 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.473 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5730ecd5-4ef5-4071-a4fa-e58bb33ad23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.474 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb614e8-3f14-4375-bbe9-34facd5bce52 in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 unbound from our chassis
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.475 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76727b13-9c56-43fe-875a-a133dab60a83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.524 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[272ae17c-c067-440e-88e7-153f090d698c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.526 2 DEBUG nova.virt.libvirt.vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:35:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.526 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.527 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.527 2 DEBUG os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eb614e8-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.528 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8d57c20e-cbe3-4e24-b9dd-0959a456c1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.537 2 INFO os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f')
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.538 2 DEBUG nova.virt.libvirt.vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:35:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.538 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.539 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.539 2 DEBUG os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3da1022-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.551 2 INFO os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68')
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aea1e8e9-c5fe-4136-834c-9cbb343ba1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73a51d3-d787-41fb-966c-249b6bc03d0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381669, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e010a476-faaa-4b8a-aef8-d7e2ed6f65ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827532, 'tstamp': 827532}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381670, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827535, 'tstamp': 827535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381670, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.611 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.614 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.614 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.615 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.615 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.617 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e3da1022-6830-4557-9992-ffd8ec07a599 in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.619 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61ef4fdf-65cb-4ae9-af6c-5539a2325448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.669 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[333cc2cf-4963-4d81-91cf-24eb3b1e7e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.673 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5986124d-a389-41fd-9018-43991fafa620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.699 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[738cbe54-22f1-4038-9294-f281ab36ac30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.726 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e674ab-9d1c-4efc-a17d-a60cb8853474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381679, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.749 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42abc9bd-626b-42fd-9b35-6b9002ab582f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap673dd6ba-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827732, 'tstamp': 827732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381680, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.752 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 nova_compute[259550]: 2025-10-07 14:36:10.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:11 compute-0 nova_compute[259550]: 2025-10-07 14:36:11.068 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:11 compute-0 nova_compute[259550]: 2025-10-07 14:36:11.069 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:11 compute-0 podman[381681]: 2025-10-07 14:36:11.078161089 +0000 UTC m=+0.062544583 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 07 14:36:11 compute-0 podman[381682]: 2025-10-07 14:36:11.108585832 +0000 UTC m=+0.092720288 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:36:11 compute-0 nova_compute[259550]: 2025-10-07 14:36:11.133 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:11 compute-0 ceph-mon[74295]: pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 706 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 418 KiB/s wr, 62 op/s
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.743 2 DEBUG nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.743 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.744 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.744 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.745 2 DEBUG nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:12 compute-0 nova_compute[259550]: 2025-10-07 14:36:12.745 2 WARNING nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with vm_state active and task_state deleting.
Oct 07 14:36:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.107 2 DEBUG nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:13 compute-0 nova_compute[259550]: 2025-10-07 14:36:13.109 2 WARNING nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuilding.
Oct 07 14:36:13 compute-0 ceph-mon[74295]: pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 418 KiB/s wr, 62 op/s
Oct 07 14:36:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 186 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 110 KiB/s wr, 39 op/s
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.189 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting instance files /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.191 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deletion of /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del complete
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.506 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.507 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.507 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.508 2 WARNING nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuilding.
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.510 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.510 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.510 2 WARNING nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with vm_state active and task_state deleting.
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.560 2 INFO nova.virt.libvirt.driver [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deleting instance files /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553_del
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.561 2 INFO nova.virt.libvirt.driver [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deletion of /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553_del complete
Oct 07 14:36:15 compute-0 ovn_controller[151684]: 2025-10-07T14:36:15Z|01174|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct 07 14:36:15 compute-0 ovn_controller[151684]: 2025-10-07T14:36:15Z|01175|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct 07 14:36:15 compute-0 nova_compute[259550]: 2025-10-07 14:36:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:16 compute-0 ceph-mon[74295]: pgmap v2214: 305 pgs: 305 active+clean; 186 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 110 KiB/s wr, 39 op/s
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.168 2 INFO nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 6.59 seconds to destroy the instance on the hypervisor.
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.169 2 DEBUG oslo.service.loopingcall [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.169 2 DEBUG nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.170 2 DEBUG nova.network.neutron [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.311 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.312 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating image(s)
Oct 07 14:36:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.334 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.363 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.385 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.389 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.484 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.485 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.486 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.486 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.510 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.514 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.875 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:16 compute-0 nova_compute[259550]: 2025-10-07 14:36:16.876 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.294 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.469 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.541 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.589 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.590 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.601 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.601 2 INFO nova.compute.claims [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.688 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.689 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ensure instance console log exists: /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.689 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.690 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.690 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.692 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start _get_guest_xml network_info=[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.696 2 WARNING nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.700 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.701 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.704 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.704 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:17 compute-0 nova_compute[259550]: 2025-10-07 14:36:17.948 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:18 compute-0 ceph-mon[74295]: pgmap v2215: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct 07 14:36:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.343 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:36:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520250676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.423 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.445 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.449 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:36:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929134489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.793 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.800 2 DEBUG nova.compute.provider_tree [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:36:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:36:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259515317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.908 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.909 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:15Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.909 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.910 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.913 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <uuid>e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</uuid>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <name>instance-00000071</name>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2036986690</nova:name>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:36:17</nova:creationTime>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <nova:port uuid="84baaee6-3f89-4d61-aaea-507e46e65618">
Oct 07 14:36:18 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <system>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="serial">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="uuid">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </system>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <os>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </os>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <features>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </features>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk">
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </source>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config">
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </source>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:36:18 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:84:dc:c4"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <target dev="tap84baaee6-3f"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log" append="off"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <video>
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </video>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:36:18 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:36:18 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:36:18 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:36:18 compute-0 nova_compute[259550]: </domain>
Oct 07 14:36:18 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:15Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.915 2 DEBUG os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84baaee6-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84baaee6-3f, col_values=(('external_ids', {'iface-id': '84baaee6-3f89-4d61-aaea-507e46e65618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dc:c4', 'vm-uuid': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:18 compute-0 NetworkManager[44949]: <info>  [1759847778.9206] manager: (tap84baaee6-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.925 2 INFO os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')
Oct 07 14:36:18 compute-0 nova_compute[259550]: 2025-10-07 14:36:18.935 2 DEBUG nova.scheduler.client.report [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:36:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1520250676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2929134489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/259515317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.335 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.336 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.396 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.397 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.397 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:84:dc:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.398 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Using config drive
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.418 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.770 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.770 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.778 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.889 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'keypairs' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:19 compute-0 nova_compute[259550]: 2025-10-07 14:36:19.915 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.000 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:36:20 compute-0 ceph-mon[74295]: pgmap v2216: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct 07 14:36:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 144 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 978 KiB/s wr, 76 op/s
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.442 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.443 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.444 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating image(s)
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.465 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.488 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.509 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.512 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.602 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.603 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.604 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.604 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.623 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.626 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:20 compute-0 nova_compute[259550]: 2025-10-07 14:36:20.998 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.059 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.229 2 DEBUG nova.objects.instance [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.262 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.262 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Ensure instance console log exists: /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.263 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.264 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.264 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.505 2 DEBUG nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-deleted-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.505 2 INFO nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Neutron deleted interface e3da1022-6830-4557-9992-ffd8ec07a599; detaching it from the instance and deleting it from the info cache
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.506 2 DEBUG nova.network.neutron [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.558 2 DEBUG nova.network.neutron [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.625 2 DEBUG nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Detach interface failed, port_id=e3da1022-6830-4557-9992-ffd8ec07a599, reason: Instance 614c22a4-9342-4037-adb5-71c3375b8553 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.637 2 INFO nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 5.47 seconds to deallocate network for instance.
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.672 2 DEBUG nova.policy [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.732 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.733 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.837 2 DEBUG oslo_concurrency.processutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.944 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating config drive at /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config
Oct 07 14:36:21 compute-0 nova_compute[259550]: 2025-10-07 14:36:21.950 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6dmss1v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.096 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6dmss1v" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.126 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.130 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:22 compute-0 ceph-mon[74295]: pgmap v2217: 305 pgs: 305 active+clean; 144 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 978 KiB/s wr, 76 op/s
Oct 07 14:36:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:36:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308405861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.290 2 DEBUG oslo_concurrency.processutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.296 2 DEBUG nova.compute.provider_tree [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.325 2 DEBUG nova.scheduler.client.report [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.372 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.411 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.411 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting local config drive /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config because it was imported into RBD.
Oct 07 14:36:22 compute-0 kernel: tap84baaee6-3f: entered promiscuous mode
Oct 07 14:36:22 compute-0 NetworkManager[44949]: <info>  [1759847782.4645] manager: (tap84baaee6-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Oct 07 14:36:22 compute-0 ovn_controller[151684]: 2025-10-07T14:36:22Z|01176|binding|INFO|Claiming lport 84baaee6-3f89-4d61-aaea-507e46e65618 for this chassis.
Oct 07 14:36:22 compute-0 ovn_controller[151684]: 2025-10-07T14:36:22Z|01177|binding|INFO|84baaee6-3f89-4d61-aaea-507e46e65618: Claiming fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:22 compute-0 ovn_controller[151684]: 2025-10-07T14:36:22Z|01178|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 ovn-installed in OVS
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:22 compute-0 systemd-udevd[382238]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:36:22 compute-0 systemd-machined[214580]: New machine qemu-142-instance-00000071.
Oct 07 14:36:22 compute-0 NetworkManager[44949]: <info>  [1759847782.5077] device (tap84baaee6-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:36:22 compute-0 NetworkManager[44949]: <info>  [1759847782.5085] device (tap84baaee6-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:36:22 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Oct 07 14:36:22 compute-0 nova_compute[259550]: 2025-10-07 14:36:22.556 2 INFO nova.scheduler.client.report [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 614c22a4-9342-4037-adb5-71c3375b8553
Oct 07 14:36:22 compute-0 ovn_controller[151684]: 2025-10-07T14:36:22Z|01179|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 up in Southbound
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.629 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.631 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 bound to our chassis
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.633 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6012021f-1af8-40b8-8e89-3248ff7ae2e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.647 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap002ae4bb-01 in ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.649 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap002ae4bb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.649 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13025b81-5f5d-4751-9a74-fe114ab64e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.650 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20156dcf-6a79-43d6-848a-b6f9d69ddfd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.662 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dd67fc02-3290-4abe-b6f7-8810e13a6c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.687 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[050f3dd7-a374-4017-b938-45bcd9aea0bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:36:22
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Oct 07 14:36:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3706b538-be00-483e-bb09-75e7853f5bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 NetworkManager[44949]: <info>  [1759847782.7328] manager: (tap002ae4bb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Oct 07 14:36:22 compute-0 systemd-udevd[382240]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[660b8f29-f4ef-4399-ae23-78ce3be927b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.772 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a01a49b-10f8-4765-8cfa-47de3d5366a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.776 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7809bdd5-2a26-4075-835a-b7d58a0efdf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 NetworkManager[44949]: <info>  [1759847782.8111] device (tap002ae4bb-00): carrier: link connected
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8017b485-75bc-4236-bf18-077c8d35fabc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.836 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f46c8a-88f8-4577-a14f-7edd0813492f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836637, 'reachable_time': 23776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382312, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.853 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63b86e71-31e8-4735-8864-ad88cec3edcc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:93fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 836637, 'tstamp': 836637}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382314, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d262f95-1a9b-448c-ab8c-22d8741c7cfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836637, 'reachable_time': 23776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382315, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.899 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d535916-cd92-488e-871d-61af50068138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.977 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25c20c26-7afa-4e22-9ce6-15a99e079827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.979 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.980 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.980 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap002ae4bb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:23 compute-0 kernel: tap002ae4bb-00: entered promiscuous mode
Oct 07 14:36:23 compute-0 NetworkManager[44949]: <info>  [1759847783.0133] manager: (tap002ae4bb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.016 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap002ae4bb-00, col_values=(('external_ids', {'iface-id': '9b384c3a-3862-48f5-9fa0-a30ca1bb30cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 ovn_controller[151684]: 2025-10-07T14:36:23Z|01180|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.021 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1ede22-3a30-4ea3-a297-e2d73464d8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.023 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:36:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.024 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'env', 'PROCESS_TAG=haproxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/002ae4bb-0f71-4b57-99ae-0bfd304fb458.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:36:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.123 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully created port: 68807b3e-505c-41ea-96e4-f03b329e4c69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.147 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3308405861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.326 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.326 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847783.325037, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.327 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Resumed (Lifecycle Event)
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.330 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.330 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.334 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance spawned successfully.
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.334 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.406 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.414 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.415 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 podman[382345]: 2025-10-07 14:36:23.416804452 +0000 UTC m=+0.048746133 container create e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.419 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.420 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.421 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.422 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.429 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:36:23 compute-0 systemd[1]: Started libpod-conmon-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope.
Oct 07 14:36:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:23 compute-0 podman[382345]: 2025-10-07 14:36:23.390182631 +0000 UTC m=+0.022124332 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b9ad01db3205e9317c12e398a06f873e540a0ad996975fae67ff29c1925f78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:23 compute-0 podman[382345]: 2025-10-07 14:36:23.515200632 +0000 UTC m=+0.147142333 container init e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:36:23 compute-0 podman[382345]: 2025-10-07 14:36:23.520985966 +0000 UTC m=+0.152927657 container start e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:36:23 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : New worker (382366) forked
Oct 07 14:36:23 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : Loading success.
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.697 2 DEBUG nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.700 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.702 2 WARNING nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuild_spawning.
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.719 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.720 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847783.3258421, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.720 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Started (Lifecycle Event)
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.785 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.790 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.883 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.970 2 DEBUG nova.compute.manager [req-02fe9cc0-0579-40d4-a1ea-3488e6ad8f61 req-567478bf-2e66-49dd-bcb0-64864f1045e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-deleted-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:23 compute-0 nova_compute[259550]: 2025-10-07 14:36:23.982 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 07 14:36:24 compute-0 ceph-mon[74295]: pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 07 14:36:24 compute-0 nova_compute[259550]: 2025-10-07 14:36:24.307 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:24 compute-0 nova_compute[259550]: 2025-10-07 14:36:24.308 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:24 compute-0 nova_compute[259550]: 2025-10-07 14:36:24.308 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 07 14:36:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 209 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 MiB/s wr, 91 op/s
Oct 07 14:36:24 compute-0 nova_compute[259550]: 2025-10-07 14:36:24.553 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.396 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully updated port: 68807b3e-505c-41ea-96e4-f03b329e4c69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.426 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847770.425433, 614c22a4-9342-4037-adb5-71c3375b8553 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.427 2 INFO nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Stopped (Lifecycle Event)
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.610 2 DEBUG nova.compute.manager [None req-1af8454a-e916-4b31-8e49-c0ad0384b1ec - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.782 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.834 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.834 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.835 2 WARNING nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state None.
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.837 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.837 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.874 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.874 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.877 2 INFO nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Terminating instance
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.878 2 DEBUG nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:36:25 compute-0 kernel: tap72db4fd3-81 (unregistering): left promiscuous mode
Oct 07 14:36:25 compute-0 NetworkManager[44949]: <info>  [1759847785.9489] device (tap72db4fd3-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:25 compute-0 ovn_controller[151684]: 2025-10-07T14:36:25Z|01181|binding|INFO|Releasing lport 72db4fd3-8171-42af-9801-69a061614ccc from this chassis (sb_readonly=0)
Oct 07 14:36:25 compute-0 ovn_controller[151684]: 2025-10-07T14:36:25Z|01182|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc down in Southbound
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:25 compute-0 ovn_controller[151684]: 2025-10-07T14:36:25Z|01183|binding|INFO|Removing iface tap72db4fd3-81 ovn-installed in OVS
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:25 compute-0 nova_compute[259550]: 2025-10-07 14:36:25.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.982 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:17:d0 10.100.0.8'], port_security=['fa:16:3e:4d:17:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72db4fd3-8171-42af-9801-69a061614ccc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.983 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72db4fd3-8171-42af-9801-69a061614ccc in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 unbound from our chassis
Oct 07 14:36:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.985 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c899e05d-224c-44fe-8294-eaece58d7fe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:36:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2281bdb6-6f6a-46fe-96f1-edf785bb0047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.986 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 namespace which is not needed anymore
Oct 07 14:36:25 compute-0 kernel: tap87dbfe27-94 (unregistering): left promiscuous mode
Oct 07 14:36:25 compute-0 NetworkManager[44949]: <info>  [1759847785.9927] device (tap87dbfe27-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:26 compute-0 ovn_controller[151684]: 2025-10-07T14:36:26Z|01184|binding|INFO|Releasing lport 87dbfe27-9436-4e21-a648-df77ddfec6ca from this chassis (sb_readonly=0)
Oct 07 14:36:26 compute-0 ovn_controller[151684]: 2025-10-07T14:36:26Z|01185|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca down in Southbound
Oct 07 14:36:26 compute-0 ovn_controller[151684]: 2025-10-07T14:36:26Z|01186|binding|INFO|Removing iface tap87dbfe27-94 ovn-installed in OVS
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.029 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], port_security=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe44:cb8e/64', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=87dbfe27-9436-4e21-a648-df77ddfec6ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:26 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 07 14:36:26 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 17.403s CPU time.
Oct 07 14:36:26 compute-0 systemd-machined[214580]: Machine qemu-138-instance-0000006e terminated.
Oct 07 14:36:26 compute-0 NetworkManager[44949]: <info>  [1759847786.1121] manager: (tap87dbfe27-94): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : haproxy version is 2.8.14-c23fe91
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : path to executable is /usr/sbin/haproxy
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [WARNING]  (378692) : Exiting Master process...
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.139 2 INFO nova.virt.libvirt.driver [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance destroyed successfully.
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.140 2 DEBUG nova.objects.instance [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [ALERT]    (378692) : Current worker (378694) exited with code 143 (Terminated)
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [WARNING]  (378692) : All workers exited. Exiting... (0)
Oct 07 14:36:26 compute-0 systemd[1]: libpod-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382402]: 2025-10-07 14:36:26.149911313 +0000 UTC m=+0.074318808 container died 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.157 2 DEBUG nova.virt.libvirt.vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:54Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.157 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.158 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.158 2 DEBUG os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72db4fd3-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.167 2 INFO os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81')
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.168 2 DEBUG nova.virt.libvirt.vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:54Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.168 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.169 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.170 2 DEBUG os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87dbfe27-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.179 2 INFO os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94')
Oct 07 14:36:26 compute-0 ceph-mon[74295]: pgmap v2219: 305 pgs: 305 active+clean; 209 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 MiB/s wr, 91 op/s
Oct 07 14:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484-userdata-shm.mount: Deactivated successfully.
Oct 07 14:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-325f40c5397e88b9b38207d1a08b9557bb0033598dc986796ff5fd8d9abcf075-merged.mount: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382402]: 2025-10-07 14:36:26.209815862 +0000 UTC m=+0.134223357 container cleanup 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:36:26 compute-0 systemd[1]: libpod-conmon-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382470]: 2025-10-07 14:36:26.281844588 +0000 UTC m=+0.048145408 container remove 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.288 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[72b2b75c-5c14-483f-9d68-889d32374dc7]: (4, ('Tue Oct  7 02:36:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 (5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484)\n5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484\nTue Oct  7 02:36:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 (5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484)\n5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33e0d967-973e-46c6-b9a8-c95e3b122a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.291 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 kernel: tapc899e05d-20: left promiscuous mode
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.301 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.301 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5854d2-5140-4a2a-8ab1-863c70d74688]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.343 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfc4d10-d442-4257-bb2e-06ece0d6de6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb55cfa6-e883-48c1-a655-5df307bad6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.365 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcacd82-1fef-44ca-babd-079d561365c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827512, 'reachable_time': 29611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382485, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dc899e05d\x2d224c\x2d44fe\x2d8294\x2deaece58d7fe7.mount: Deactivated successfully.
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.371 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.371 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[649ab640-cbdd-4332-819c-432af7305cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.372 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 87dbfe27-9436-4e21-a648-df77ddfec6ca in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.373 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.374 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf760ff-af3a-476f-bea1-3d3396165f13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 namespace which is not needed anymore
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : haproxy version is 2.8.14-c23fe91
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : path to executable is /usr/sbin/haproxy
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [WARNING]  (378834) : Exiting Master process...
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [ALERT]    (378834) : Current worker (378836) exited with code 143 (Terminated)
Oct 07 14:36:26 compute-0 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [WARNING]  (378834) : All workers exited. Exiting... (0)
Oct 07 14:36:26 compute-0 systemd[1]: libpod-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382504]: 2025-10-07 14:36:26.544148026 +0000 UTC m=+0.068705807 container died 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d-userdata-shm.mount: Deactivated successfully.
Oct 07 14:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9610b23e7fa9be83c256b0b81e796a0e4842e582d51c70137e3b1797edeeac-merged.mount: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382504]: 2025-10-07 14:36:26.606437761 +0000 UTC m=+0.130995512 container cleanup 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:36:26 compute-0 systemd[1]: libpod-conmon-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope: Deactivated successfully.
Oct 07 14:36:26 compute-0 podman[382534]: 2025-10-07 14:36:26.682312468 +0000 UTC m=+0.054653851 container remove 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37c1573c-5026-4a70-b1c0-de509aa3b25a]: (4, ('Tue Oct  7 02:36:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 (4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d)\n4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d\nTue Oct  7 02:36:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 (4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d)\n4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a933206-1e5c-4d16-ba94-9085e80dd8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.690 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 kernel: tap673dd6ba-40: left promiscuous mode
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3785b1-f039-4fe7-9747-87e943b41d52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.722 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b23552dc-3a51-4f1e-a172-4617b146c5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bd2764-efdc-46af-a0ca-1792dacbcda7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.730 2 INFO nova.virt.libvirt.driver [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deleting instance files /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318_del
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.731 2 INFO nova.virt.libvirt.driver [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deletion of /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318_del complete
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.738 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06e492cd-b183-4f23-993d-b33faf5f228d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827713, 'reachable_time': 25724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382549, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.740 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:36:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.740 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[38609ac3-3e93-4bfb-bad8-059ec253ec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.749 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.793 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.794 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance network_info: |[{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.796 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start _get_guest_xml network_info=[{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.800 2 WARNING nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.804 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.805 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.808 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.812 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.814 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.860 2 INFO nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.861 2 DEBUG oslo.service.loopingcall [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.861 2 DEBUG nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:36:26 compute-0 nova_compute[259550]: 2025-10-07 14:36:26.862 2 DEBUG nova.network.neutron [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:36:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d673dd6ba\x2d4e32\x2d4ca4\x2db9a1\x2d97a20a5e17b3.mount: Deactivated successfully.
Oct 07 14:36:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:36:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904870245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.277 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.300 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.303 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:36:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275341985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.756 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.virt.libvirt.vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.759 2 DEBUG nova.objects.instance [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.818 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <name>instance-00000072</name>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:36:26</nova:creationTime>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct 07 14:36:27 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <system>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="serial">88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="uuid">88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </system>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <os>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </os>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <features>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </features>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk">
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </source>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config">
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </source>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:36:27 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:aa:2b:a2"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <target dev="tap68807b3e-50"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log" append="off"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <video>
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </video>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:36:27 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:36:27 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:36:27 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:36:27 compute-0 nova_compute[259550]: </domain>
Oct 07 14:36:27 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Preparing to wait for external event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.virt.libvirt.vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.821 2 DEBUG os_vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68807b3e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68807b3e-50, col_values=(('external_ids', {'iface-id': '68807b3e-505c-41ea-96e4-f03b329e4c69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:2b:a2', 'vm-uuid': '88e378df-94f3-4a3e-89e1-62f6de052e9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:27 compute-0 NetworkManager[44949]: <info>  [1759847787.8266] manager: (tap68807b3e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.831 2 INFO os_vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50')
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.912 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.913 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:27 compute-0 podman[382615]: 2025-10-07 14:36:27.921804718 +0000 UTC m=+0.058153326 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:36:27 compute-0 podman[382616]: 2025-10-07 14:36:27.927747216 +0000 UTC m=+0.061695049 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.947 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.947 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.947 2 WARNING nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with vm_state active and task_state deleting.
Oct 07 14:36:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.990 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.990 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.996 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.997 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.997 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:aa:2b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:36:27 compute-0 nova_compute[259550]: 2025-10-07 14:36:27.997 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Using config drive
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.020 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:28 compute-0 ceph-mon[74295]: pgmap v2220: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct 07 14:36:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3904870245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1275341985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:36:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 942 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.420 2 DEBUG nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.420 2 WARNING nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with vm_state active and task_state deleting.
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.812 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating config drive at /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.818 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjdyl1vw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.960 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjdyl1vw" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.985 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:36:28 compute-0 nova_compute[259550]: 2025-10-07 14:36:28.988 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.546 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.548 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deleting local config drive /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config because it was imported into RBD.
Oct 07 14:36:29 compute-0 kernel: tap68807b3e-50: entered promiscuous mode
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:29 compute-0 ovn_controller[151684]: 2025-10-07T14:36:29Z|01187|binding|INFO|Claiming lport 68807b3e-505c-41ea-96e4-f03b329e4c69 for this chassis.
Oct 07 14:36:29 compute-0 ovn_controller[151684]: 2025-10-07T14:36:29Z|01188|binding|INFO|68807b3e-505c-41ea-96e4-f03b329e4c69: Claiming fa:16:3e:aa:2b:a2 10.100.0.14
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.6031] manager: (tap68807b3e-50): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Oct 07 14:36:29 compute-0 ovn_controller[151684]: 2025-10-07T14:36:29Z|01189|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 ovn-installed in OVS
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:2b:a2 10.100.0.14'], port_security=['fa:16:3e:aa:2b:a2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49bffbb9-d898-4935-821f-8756d7f43377', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3993e5e4-e68a-46c5-a740-21f6d2b71498, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68807b3e-505c-41ea-96e4-f03b329e4c69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:29 compute-0 ovn_controller[151684]: 2025-10-07T14:36:29Z|01190|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 up in Southbound
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68807b3e-505c-41ea-96e4-f03b329e4c69 in datapath 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 bound to our chassis
Oct 07 14:36:29 compute-0 systemd-udevd[382722]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.625 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.6422] device (tap68807b3e-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.6428] device (tap68807b3e-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5543e53-b67a-48a9-95d6-1843b6aa45c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.643 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e6cb0ef-51 in ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.645 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e6cb0ef-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7515f796-e1bb-4e73-9076-4ab39b0b5267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[87b2e94b-60b5-4713-b52e-0042b058a4f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 systemd-machined[214580]: New machine qemu-143-instance-00000072.
Oct 07 14:36:29 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.669 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[90813cdf-b575-465e-b71d-29d7771db929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59fdc409-ae4f-47da-8a2c-6f0fe5860815]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.714 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9fc1d5-d74e-47e2-8458-76263b12a2ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 systemd-udevd[382728]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.7234] manager: (tap4e6cb0ef-50): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d97bde4-a61a-47d5-9d6a-c12e5c2e7479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[34b1bd61-9866-4794-b120-95a4395f8a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.774 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[97129d73-2791-4b68-843f-5b2ea2b5550e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.8015] device (tap4e6cb0ef-50): carrier: link connected
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.807 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3aa783-81c5-48f1-8e07-fc702a02c6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[434329ed-6e3e-4cdc-b2f9-fde2591844dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6cb0ef-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:8c:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837336, 'reachable_time': 35350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382758, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.846 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e110cdd-c2c3-4f7e-9d9c-791b3f70dfdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:8c23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 837336, 'tstamp': 837336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382759, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e96fcf7f-d2e4-40f0-b155-f80806f2d47e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6cb0ef-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:8c:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837336, 'reachable_time': 35350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382760, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45dcec8e-a823-4bd2-9a7c-44fb5df13ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.990 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e624348d-139d-4d5e-988c-5c1f86164b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6cb0ef-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e6cb0ef-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:29 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:29 compute-0 NetworkManager[44949]: <info>  [1759847789.9951] manager: (tap4e6cb0ef-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Oct 07 14:36:29 compute-0 kernel: tap4e6cb0ef-50: entered promiscuous mode
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:29.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.001 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e6cb0ef-50, col_values=(('external_ids', {'iface-id': '0987c9ec-179e-425c-b303-601326f99ffa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:30 compute-0 ovn_controller[151684]: 2025-10-07T14:36:30Z|01191|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.005 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.005 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69d25294-764a-4554-a796-3b6c297f53b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.006 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:36:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.007 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'env', 'PROCESS_TAG=haproxy-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.091 2 DEBUG nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-deleted-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.092 2 INFO nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Neutron deleted interface 72db4fd3-8171-42af-9801-69a061614ccc; detaching it from the instance and deleting it from the info cache
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.092 2 DEBUG nova.network.neutron [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.171 2 DEBUG nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Detach interface failed, port_id=72db4fd3-8171-42af-9801-69a061614ccc, reason: Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.286 2 DEBUG nova.network.neutron [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:30 compute-0 ceph-mon[74295]: pgmap v2221: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 942 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.322 2 INFO nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 3.46 seconds to deallocate network for instance.
Oct 07 14:36:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 152 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Oct 07 14:36:30 compute-0 podman[382833]: 2025-10-07 14:36:30.383080504 +0000 UTC m=+0.074093121 container create c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:36:30 compute-0 systemd[1]: Started libpod-conmon-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope.
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.432 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.432 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:30 compute-0 podman[382833]: 2025-10-07 14:36:30.338999496 +0000 UTC m=+0.030012133 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:36:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecad819b79e80b6f43b313371589ac92f170aa571899affb8f76f01510b64cf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:30 compute-0 podman[382833]: 2025-10-07 14:36:30.4708623 +0000 UTC m=+0.161875097 container init c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:36:30 compute-0 podman[382833]: 2025-10-07 14:36:30.476624344 +0000 UTC m=+0.167636961 container start c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.490 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.4896224, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Started (Lifecycle Event)
Oct 07 14:36:30 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : New worker (382854) forked
Oct 07 14:36:30 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : Loading success.
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.539 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.543 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.490437, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Paused (Lifecycle Event)
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.555 2 DEBUG oslo_concurrency.processutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.594 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Processing event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.596 2 WARNING nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 for instance with vm_state building and task_state spawning.
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.597 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.606 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.614 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.622 2 INFO nova.virt.libvirt.driver [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance spawned successfully.
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.623 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.626 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.627 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.630 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.6151118, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.631 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Resumed (Lifecycle Event)
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.815 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.816 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.818 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.821 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.822 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:30 compute-0 nova_compute[259550]: 2025-10-07 14:36:30.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:36:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:36:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013827441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.032 2 DEBUG oslo_concurrency.processutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.037 2 DEBUG nova.compute.provider_tree [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.041 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.166 2 DEBUG nova.scheduler.client.report [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.226 2 INFO nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 10.78 seconds to spawn the instance on the hypervisor.
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.227 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4013827441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.347 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.436 2 INFO nova.scheduler.client.report [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance df052dd5-fecd-4dd3-be36-4becc3f9f318
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.500 2 INFO nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 13.97 seconds to build instance.
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.601 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:31 compute-0 nova_compute[259550]: 2025-10-07 14:36:31.882 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:32 compute-0 ceph-mon[74295]: pgmap v2222: 305 pgs: 305 active+clean; 152 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Oct 07 14:36:32 compute-0 nova_compute[259550]: 2025-10-07 14:36:32.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006966362041739923 of space, bias 1.0, pg target 0.2089908612521977 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:36:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:36:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:36:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:36:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:36:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:36:32 compute-0 nova_compute[259550]: 2025-10-07 14:36:32.862 2 DEBUG nova.compute.manager [req-a095efcd-855e-4fb2-b116-5e8ac53eeb8c req-153f54ee-4bac-4deb-8aec-a743e736eb41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-deleted-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:32 compute-0 nova_compute[259550]: 2025-10-07 14:36:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:33 compute-0 ceph-mon[74295]: pgmap v2223: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Oct 07 14:36:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:36:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:36:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct 07 14:36:35 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 07 14:36:35 compute-0 ceph-mon[74295]: pgmap v2224: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct 07 14:36:35 compute-0 ovn_controller[151684]: 2025-10-07T14:36:35Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:36:35 compute-0 ovn_controller[151684]: 2025-10-07T14:36:35Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dc:c4 10.100.0.11
Oct 07 14:36:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct 07 14:36:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:36.781 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:36 compute-0 nova_compute[259550]: 2025-10-07 14:36:36.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:36.784 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:36:37 compute-0 nova_compute[259550]: 2025-10-07 14:36:37.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:37 compute-0 ceph-mon[74295]: pgmap v2225: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct 07 14:36:37 compute-0 nova_compute[259550]: 2025-10-07 14:36:37.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct 07 14:36:39 compute-0 ceph-mon[74295]: pgmap v2226: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct 07 14:36:40 compute-0 nova_compute[259550]: 2025-10-07 14:36:40.050 2 DEBUG nova.compute.manager [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:40 compute-0 nova_compute[259550]: 2025-10-07 14:36:40.051 2 DEBUG nova.compute.manager [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:40 compute-0 nova_compute[259550]: 2025-10-07 14:36:40.051 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:40 compute-0 nova_compute[259550]: 2025-10-07 14:36:40.052 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:40 compute-0 nova_compute[259550]: 2025-10-07 14:36:40.052 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.131 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847786.1303074, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.132 2 INFO nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Stopped (Lifecycle Event)
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.218 2 DEBUG nova.compute.manager [None req-32884433-5cb7-4bbd-aa64-8a2c5fd690f2 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:36:41 compute-0 ceph-mon[74295]: pgmap v2227: 305 pgs: 305 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 07 14:36:41 compute-0 ovn_controller[151684]: 2025-10-07T14:36:41Z|01192|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct 07 14:36:41 compute-0 ovn_controller[151684]: 2025-10-07T14:36:41Z|01193|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.786 2 INFO nova.compute.manager [None req-fa4f1ef0-f045-4664-846a-2f196b772149 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Get console output
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.790 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.900 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:41 compute-0 nova_compute[259550]: 2025-10-07 14:36:41.901 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:42 compute-0 podman[382886]: 2025-10-07 14:36:42.080424931 +0000 UTC m=+0.062229593 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:36:42 compute-0 podman[382887]: 2025-10-07 14:36:42.109075607 +0000 UTC m=+0.087903250 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 07 14:36:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 07 14:36:42 compute-0 nova_compute[259550]: 2025-10-07 14:36:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:42 compute-0 nova_compute[259550]: 2025-10-07 14:36:42.869 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:42 compute-0 nova_compute[259550]: 2025-10-07 14:36:42.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:43 compute-0 ceph-mon[74295]: pgmap v2228: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 07 14:36:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:43.786 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:43 compute-0 nova_compute[259550]: 2025-10-07 14:36:43.785 2 DEBUG nova.compute.manager [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:43 compute-0 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG nova.compute.manager [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:43 compute-0 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:43 compute-0 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:43 compute-0 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:44 compute-0 ovn_controller[151684]: 2025-10-07T14:36:44Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:2b:a2 10.100.0.14
Oct 07 14:36:44 compute-0 ovn_controller[151684]: 2025-10-07T14:36:44Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:2b:a2 10.100.0.14
Oct 07 14:36:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 199 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.566 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.566 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.567 2 INFO nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Terminating instance
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.568 2 DEBUG nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:36:44 compute-0 kernel: tap84baaee6-3f (unregistering): left promiscuous mode
Oct 07 14:36:44 compute-0 NetworkManager[44949]: <info>  [1759847804.6260] device (tap84baaee6-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:36:44 compute-0 ovn_controller[151684]: 2025-10-07T14:36:44Z|01194|binding|INFO|Releasing lport 84baaee6-3f89-4d61-aaea-507e46e65618 from this chassis (sb_readonly=0)
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 ovn_controller[151684]: 2025-10-07T14:36:44Z|01195|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 down in Southbound
Oct 07 14:36:44 compute-0 ovn_controller[151684]: 2025-10-07T14:36:44Z|01196|binding|INFO|Removing iface tap84baaee6-3f ovn-installed in OVS
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 07 14:36:44 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 13.621s CPU time.
Oct 07 14:36:44 compute-0 systemd-machined[214580]: Machine qemu-142-instance-00000071 terminated.
Oct 07 14:36:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.732 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.733 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 unbound from our chassis
Oct 07 14:36:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.734 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:36:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e764fa07-52b4-4d8d-99c8-ca359b12c3f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.735 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace which is not needed anymore
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.805 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.805 2 DEBUG nova.objects.instance [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:44 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : haproxy version is 2.8.14-c23fe91
Oct 07 14:36:44 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : path to executable is /usr/sbin/haproxy
Oct 07 14:36:44 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [WARNING]  (382364) : Exiting Master process...
Oct 07 14:36:44 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [ALERT]    (382364) : Current worker (382366) exited with code 143 (Terminated)
Oct 07 14:36:44 compute-0 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [WARNING]  (382364) : All workers exited. Exiting... (0)
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.889 2 DEBUG nova.virt.libvirt.vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:24Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.890 2 DEBUG nova.network.os_vif_util [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.890 2 DEBUG nova.network.os_vif_util [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:44 compute-0 systemd[1]: libpod-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope: Deactivated successfully.
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.891 2 DEBUG os_vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:36:44 compute-0 conmon[382360]: conmon e8169b3facf70ab175b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope/container/memory.events
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84baaee6-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:44 compute-0 podman[382966]: 2025-10-07 14:36:44.898158093 +0000 UTC m=+0.055131355 container died e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.899 2 INFO os_vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')
Oct 07 14:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823-userdata-shm.mount: Deactivated successfully.
Oct 07 14:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-24b9ad01db3205e9317c12e398a06f873e540a0ad996975fae67ff29c1925f78-merged.mount: Deactivated successfully.
Oct 07 14:36:44 compute-0 podman[382966]: 2025-10-07 14:36:44.980378249 +0000 UTC m=+0.137351491 container cleanup e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:44 compute-0 nova_compute[259550]: 2025-10-07 14:36:44.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:36:44 compute-0 systemd[1]: libpod-conmon-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope: Deactivated successfully.
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.016 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.016 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:36:45 compute-0 podman[383016]: 2025-10-07 14:36:45.062595156 +0000 UTC m=+0.059989964 container remove e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.069 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[290a118d-1396-46c9-aaca-f3d7cf6782c2]: (4, ('Tue Oct  7 02:36:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823)\ne8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823\nTue Oct  7 02:36:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823)\ne8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.072 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90dd67a1-946c-4c59-ae41-b6c581d80369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.073 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:45 compute-0 kernel: tap002ae4bb-00: left promiscuous mode
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b42331d-b60b-4469-9e4c-f1d9378255c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0031962e-9fb2-4ab0-aae4-f6e0e478f0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2955fa94-0e9a-4cb5-9608-178a6b832237]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb6c0ec-0233-47c1-a4f5-c301d0e8ef5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836628, 'reachable_time': 35248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383031, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.155 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:36:45 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.155 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0203227d-8505-4f9b-b278-92e6672e3773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d002ae4bb\x2d0f71\x2d4b57\x2d99ae\x2d0bfd304fb458.mount: Deactivated successfully.
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.490 2 INFO nova.virt.libvirt.driver [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting instance files /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.491 2 INFO nova.virt.libvirt.driver [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deletion of /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del complete
Oct 07 14:36:45 compute-0 ceph-mon[74295]: pgmap v2229: 305 pgs: 305 active+clean; 199 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.619 2 INFO nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 1.05 seconds to destroy the instance on the hypervisor.
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.619 2 DEBUG oslo.service.loopingcall [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.619 2 DEBUG nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:36:45 compute-0 nova_compute[259550]: 2025-10-07 14:36:45.620 2 DEBUG nova.network.neutron [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.170 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.171 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:36:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Oct 07 14:36:46 compute-0 nova_compute[259550]: 2025-10-07 14:36:46.555 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:47 compute-0 nova_compute[259550]: 2025-10-07 14:36:47.273 2 DEBUG nova.network.neutron [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:47 compute-0 nova_compute[259550]: 2025-10-07 14:36:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:47 compute-0 nova_compute[259550]: 2025-10-07 14:36:47.462 2 INFO nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 1.84 seconds to deallocate network for instance.
Oct 07 14:36:47 compute-0 ceph-mon[74295]: pgmap v2230: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Oct 07 14:36:47 compute-0 nova_compute[259550]: 2025-10-07 14:36:47.767 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:47 compute-0 nova_compute[259550]: 2025-10-07 14:36:47.768 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:48 compute-0 nova_compute[259550]: 2025-10-07 14:36:48.112 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 3.1 MiB/s wr, 132 op/s
Oct 07 14:36:48 compute-0 nova_compute[259550]: 2025-10-07 14:36:48.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:48 compute-0 nova_compute[259550]: 2025-10-07 14:36:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:48 compute-0 nova_compute[259550]: 2025-10-07 14:36:48.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:36:49 compute-0 ceph-mon[74295]: pgmap v2231: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 3.1 MiB/s wr, 132 op/s
Oct 07 14:36:49 compute-0 nova_compute[259550]: 2025-10-07 14:36:49.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:49 compute-0 nova_compute[259550]: 2025-10-07 14:36:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.150 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.150 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.151 2 WARNING nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state deleted and task_state None.
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.152 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-deleted-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 3.1 MiB/s wr, 139 op/s
Oct 07 14:36:50 compute-0 nova_compute[259550]: 2025-10-07 14:36:50.704 2 DEBUG oslo_concurrency.processutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:36:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870104604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.143 2 DEBUG oslo_concurrency.processutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.149 2 DEBUG nova.compute.provider_tree [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.226 2 DEBUG nova.scheduler.client.report [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.322 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.384 2 INFO nova.compute.manager [None req-eb712208-a59c-44e0-81fe-e2d656b88698 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Get console output
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.386 2 INFO nova.scheduler.client.report [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.393 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.640 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:51 compute-0 ceph-mon[74295]: pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 3.1 MiB/s wr, 139 op/s
Oct 07 14:36:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2870104604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:51 compute-0 nova_compute[259550]: 2025-10-07 14:36:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Oct 07 14:36:52 compute-0 nova_compute[259550]: 2025-10-07 14:36:52.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:36:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:36:52 compute-0 nova_compute[259550]: 2025-10-07 14:36:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:53 compute-0 ceph-mon[74295]: pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.248 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.249 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.249 2 DEBUG nova.objects.instance [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.770 2 DEBUG nova.objects.instance [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_requests' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.786 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:36:54 compute-0 nova_compute[259550]: 2025-10-07 14:36:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:55 compute-0 nova_compute[259550]: 2025-10-07 14:36:55.067 2 DEBUG nova.policy [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:36:55 compute-0 ovn_controller[151684]: 2025-10-07T14:36:55Z|01197|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct 07 14:36:55 compute-0 sudo[383055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:55 compute-0 sudo[383055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:55 compute-0 sudo[383055]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:55 compute-0 nova_compute[259550]: 2025-10-07 14:36:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:55 compute-0 sudo[383080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:36:55 compute-0 sudo[383080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:55 compute-0 sudo[383080]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:55 compute-0 sudo[383105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:55 compute-0 sudo[383105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:55 compute-0 sudo[383105]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:55 compute-0 ceph-mon[74295]: pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Oct 07 14:36:55 compute-0 sudo[383130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:36:55 compute-0 sudo[383130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:55 compute-0 nova_compute[259550]: 2025-10-07 14:36:55.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:56 compute-0 nova_compute[259550]: 2025-10-07 14:36:56.015 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully created port: 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:36:56 compute-0 sudo[383130]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:36:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 26 KiB/s wr, 37 op/s
Oct 07 14:36:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5b66d0e2-a07a-49e8-af52-fd83b17e16dc does not exist
Oct 07 14:36:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev aa7bf53d-831a-4f3b-a601-dfc22a5428da does not exist
Oct 07 14:36:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ef6bbd86-0508-4a64-95a7-56e42cf97f42 does not exist
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:36:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:36:56 compute-0 sudo[383186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:56 compute-0 sudo[383186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:56 compute-0 sudo[383186]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:56 compute-0 sudo[383211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:36:56 compute-0 sudo[383211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:56 compute-0 sudo[383211]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:56 compute-0 sudo[383236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:56 compute-0 sudo[383236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:56 compute-0 sudo[383236]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:56 compute-0 sudo[383261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:36:56 compute-0 sudo[383261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:36:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.886821323 +0000 UTC m=+0.038430248 container create 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:36:56 compute-0 systemd[1]: Started libpod-conmon-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope.
Oct 07 14:36:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.964217901 +0000 UTC m=+0.115826876 container init 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.870837126 +0000 UTC m=+0.022446071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.971563808 +0000 UTC m=+0.123172743 container start 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.975696468 +0000 UTC m=+0.127305403 container attach 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:36:56 compute-0 quizzical_golick[383341]: 167 167
Oct 07 14:36:56 compute-0 systemd[1]: libpod-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope: Deactivated successfully.
Oct 07 14:36:56 compute-0 podman[383325]: 2025-10-07 14:36:56.977186198 +0000 UTC m=+0.128795133 container died 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-769e7cdae2427f47e94fb1708964917c9b8741f991173f9b40e7245ba6798491-merged.mount: Deactivated successfully.
Oct 07 14:36:57 compute-0 podman[383325]: 2025-10-07 14:36:57.019901489 +0000 UTC m=+0.171510414 container remove 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:36:57 compute-0 systemd[1]: libpod-conmon-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope: Deactivated successfully.
Oct 07 14:36:57 compute-0 podman[383365]: 2025-10-07 14:36:57.183684795 +0000 UTC m=+0.041860100 container create fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 14:36:57 compute-0 systemd[1]: Started libpod-conmon-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope.
Oct 07 14:36:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:57 compute-0 podman[383365]: 2025-10-07 14:36:57.163793294 +0000 UTC m=+0.021968609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:36:57 compute-0 podman[383365]: 2025-10-07 14:36:57.268464781 +0000 UTC m=+0.126640106 container init fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:36:57 compute-0 podman[383365]: 2025-10-07 14:36:57.276826574 +0000 UTC m=+0.135001889 container start fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.277 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully updated port: 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:36:57 compute-0 podman[383365]: 2025-10-07 14:36:57.281504549 +0000 UTC m=+0.139679854 container attach fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.337 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.338 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.338 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.402 2 DEBUG nova.compute.manager [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.403 2 DEBUG nova.compute.manager [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:36:57 compute-0 nova_compute[259550]: 2025-10-07 14:36:57.403 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:36:57 compute-0 ceph-mon[74295]: pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 26 KiB/s wr, 37 op/s
Oct 07 14:36:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:36:58 compute-0 podman[383389]: 2025-10-07 14:36:58.09026215 +0000 UTC m=+0.070657030 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:36:58 compute-0 podman[383391]: 2025-10-07 14:36:58.11983906 +0000 UTC m=+0.099004317 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:36:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 07 14:36:58 compute-0 compassionate_gates[383382]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:36:58 compute-0 compassionate_gates[383382]: --> relative data size: 1.0
Oct 07 14:36:58 compute-0 compassionate_gates[383382]: --> All data devices are unavailable
Oct 07 14:36:58 compute-0 systemd[1]: libpod-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Deactivated successfully.
Oct 07 14:36:58 compute-0 podman[383365]: 2025-10-07 14:36:58.398615129 +0000 UTC m=+1.256790444 container died fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:36:58 compute-0 systemd[1]: libpod-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Consumed 1.047s CPU time.
Oct 07 14:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93-merged.mount: Deactivated successfully.
Oct 07 14:36:58 compute-0 podman[383365]: 2025-10-07 14:36:58.468387453 +0000 UTC m=+1.326562758 container remove fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:36:58 compute-0 systemd[1]: libpod-conmon-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Deactivated successfully.
Oct 07 14:36:58 compute-0 sudo[383261]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:58 compute-0 sudo[383461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:58 compute-0 sudo[383461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:58 compute-0 sudo[383461]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:58 compute-0 sudo[383486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:36:58 compute-0 sudo[383486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:58 compute-0 sudo[383486]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:58 compute-0 sudo[383511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:36:58 compute-0 sudo[383511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:58 compute-0 sudo[383511]: pam_unix(sudo:session): session closed for user root
Oct 07 14:36:58 compute-0 sudo[383536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:36:58 compute-0 sudo[383536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.018 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.018 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.126399976 +0000 UTC m=+0.039385334 container create ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:36:59 compute-0 systemd[1]: Started libpod-conmon-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope.
Oct 07 14:36:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.194398332 +0000 UTC m=+0.107383710 container init ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.204146683 +0000 UTC m=+0.117132041 container start ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.110373597 +0000 UTC m=+0.023358985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.207827841 +0000 UTC m=+0.120813219 container attach ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:36:59 compute-0 elastic_bartik[383617]: 167 167
Oct 07 14:36:59 compute-0 systemd[1]: libpod-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope: Deactivated successfully.
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.211205901 +0000 UTC m=+0.124191289 container died ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.212 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.213 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d20b9d558944f4a674bd08156c4e2afb7e3625288eb856259adffd6e036a5e6-merged.mount: Deactivated successfully.
Oct 07 14:36:59 compute-0 podman[383601]: 2025-10-07 14:36:59.247482591 +0000 UTC m=+0.160467949 container remove ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:36:59 compute-0 systemd[1]: libpod-conmon-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope: Deactivated successfully.
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.341 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.415 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:36:59 compute-0 podman[383643]: 2025-10-07 14:36:59.419837256 +0000 UTC m=+0.043503163 container create d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:36:59 compute-0 systemd[1]: Started libpod-conmon-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope.
Oct 07 14:36:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:36:59 compute-0 podman[383643]: 2025-10-07 14:36:59.487735701 +0000 UTC m=+0.111401608 container init d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:36:59 compute-0 podman[383643]: 2025-10-07 14:36:59.496665739 +0000 UTC m=+0.120331646 container start d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:36:59 compute-0 podman[383643]: 2025-10-07 14:36:59.402180765 +0000 UTC m=+0.025846702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:36:59 compute-0 podman[383643]: 2025-10-07 14:36:59.500238615 +0000 UTC m=+0.123904522 container attach d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.603 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.605 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.605 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.608 2 DEBUG nova.virt.libvirt.vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.609 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.610 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.610 2 DEBUG os_vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap444fd1ba-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap444fd1ba-91, col_values=(('external_ids', {'iface-id': '444fd1ba-91db-40d4-95c0-a6ec1fb6ce49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:26:98', 'vm-uuid': '88e378df-94f3-4a3e-89e1-62f6de052e9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 NetworkManager[44949]: <info>  [1759847819.6192] manager: (tap444fd1ba-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.629 2 INFO os_vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91')
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.630 2 DEBUG nova.virt.libvirt.vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.630 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.631 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.634 2 DEBUG nova.virt.libvirt.guest [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:36:59 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:7c:26:98"/>
Oct 07 14:36:59 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:36:59 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:36:59 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:36:59 compute-0 nova_compute[259550]:   <target dev="tap444fd1ba-91"/>
Oct 07 14:36:59 compute-0 nova_compute[259550]: </interface>
Oct 07 14:36:59 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:36:59 compute-0 kernel: tap444fd1ba-91: entered promiscuous mode
Oct 07 14:36:59 compute-0 ovn_controller[151684]: 2025-10-07T14:36:59Z|01198|binding|INFO|Claiming lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for this chassis.
Oct 07 14:36:59 compute-0 ovn_controller[151684]: 2025-10-07T14:36:59Z|01199|binding|INFO|444fd1ba-91db-40d4-95c0-a6ec1fb6ce49: Claiming fa:16:3e:7c:26:98 10.100.0.19
Oct 07 14:36:59 compute-0 NetworkManager[44949]: <info>  [1759847819.6524] manager: (tap444fd1ba-91): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 systemd-udevd[383690]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:36:59 compute-0 NetworkManager[44949]: <info>  [1759847819.7087] device (tap444fd1ba-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:36:59 compute-0 NetworkManager[44949]: <info>  [1759847819.7094] device (tap444fd1ba-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:36:59 compute-0 ovn_controller[151684]: 2025-10-07T14:36:59Z|01200|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 ovn-installed in OVS
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847804.8026242, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.803 2 INFO nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Stopped (Lifecycle Event)
Oct 07 14:36:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:36:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218141925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:59 compute-0 ceph-mon[74295]: pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 07 14:36:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4218141925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:36:59 compute-0 nova_compute[259550]: 2025-10-07 14:36:59.828 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:36:59 compute-0 ovn_controller[151684]: 2025-10-07T14:36:59Z|01201|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 up in Southbound
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.957 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:26:98 10.100.0.19'], port_security=['fa:16:3e:7c:26:98 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74cd8327-fb9a-441d-85b6-cebb9054477c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.958 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 in datapath 03ab5118-02e3-4fcc-b38c-4eebc4305f96 bound to our chassis
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.959 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03ab5118-02e3-4fcc-b38c-4eebc4305f96
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.972 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93384757-a4cb-4542-b49e-c0adf7a00871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.974 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03ab5118-01 in ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.976 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03ab5118-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.976 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3912fe76-80d6-4d34-904b-8638557bc10a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.977 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4397a9fa-72a0-46f1-ae45-979d69de7dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:36:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.991 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dad96626-44b0-46ef-a7f8-4014d5472b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb8405c-06d2-42e4-b1ed-1a10d8af8e39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.031 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5313bf07-c787-461a-b35b-542fb21c6034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.045 2 DEBUG nova.compute.manager [None req-63b4a422-df20-4405-9615-e136ca15591c - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:00 compute-0 NetworkManager[44949]: <info>  [1759847820.0481] manager: (tap03ab5118-00): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.047 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[325ee812-1667-421d-8a57-7d203a348acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 systemd-udevd[383692]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.070 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.070 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.087 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[96cb4703-8b41-448b-aac6-3fbbb54705a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.091 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93a41c5f-a1f7-4005-9678-00ef6ccd517e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 NetworkManager[44949]: <info>  [1759847820.1162] device (tap03ab5118-00): carrier: link connected
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.122 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76af2131-b9c0-49f3-bf47-ad8a674e8f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.139 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05007246-41d1-4644-93e2-f60a83a6cd55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03ab5118-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:0a:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840368, 'reachable_time': 38819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383722, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39de37fb-708c-4acd-84c7-4296f811a356]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:a8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840368, 'tstamp': 840368}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383723, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.171 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b524dce-1525-4e00-9f47-ddd45493f0bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03ab5118-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:0a:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840368, 'reachable_time': 38819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383724, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.202 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3686ea7-2c8a-446b-a1f4-65003fb2060b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.258 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1efb27-724c-4568-a89b-27e24077af87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03ab5118-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03ab5118-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:00 compute-0 kernel: tap03ab5118-00: entered promiscuous mode
Oct 07 14:37:00 compute-0 NetworkManager[44949]: <info>  [1759847820.2644] manager: (tap03ab5118-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.267 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03ab5118-00, col_values=(('external_ids', {'iface-id': '6498794a-d0b4-4dcb-87cf-e1a202b8ef2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:00 compute-0 ovn_controller[151684]: 2025-10-07T14:37:00Z|01202|binding|INFO|Releasing lport 6498794a-d0b4-4dcb-87cf-e1a202b8ef2f from this chassis (sb_readonly=1)
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.274 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.275 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbd0418-b3d0-43c9-a0e0-a7884892648d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.276 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-03ab5118-02e3-4fcc-b38c-4eebc4305f96
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 03ab5118-02e3-4fcc-b38c-4eebc4305f96
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:37:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.277 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'env', 'PROCESS_TAG=haproxy-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03ab5118-02e3-4fcc-b38c-4eebc4305f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]: {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     "0": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "devices": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "/dev/loop3"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             ],
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_name": "ceph_lv0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_size": "21470642176",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "name": "ceph_lv0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "tags": {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_name": "ceph",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.crush_device_class": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.encrypted": "0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_id": "0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.vdo": "0"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             },
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "vg_name": "ceph_vg0"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         }
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     ],
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     "1": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "devices": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "/dev/loop4"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             ],
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_name": "ceph_lv1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_size": "21470642176",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "name": "ceph_lv1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "tags": {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_name": "ceph",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.crush_device_class": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.encrypted": "0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_id": "1",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.vdo": "0"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             },
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "vg_name": "ceph_vg1"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         }
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     ],
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     "2": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "devices": [
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "/dev/loop5"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             ],
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_name": "ceph_lv2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_size": "21470642176",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "name": "ceph_lv2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "tags": {
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.cluster_name": "ceph",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.crush_device_class": "",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.encrypted": "0",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osd_id": "2",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:                 "ceph.vdo": "0"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             },
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "type": "block",
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:             "vg_name": "ceph_vg2"
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:         }
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]:     ]
Oct 07 14:37:00 compute-0 affectionate_matsumoto[383660]: }
Oct 07 14:37:00 compute-0 systemd[1]: libpod-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope: Deactivated successfully.
Oct 07 14:37:00 compute-0 podman[383643]: 2025-10-07 14:37:00.327060877 +0000 UTC m=+0.950726804 container died d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:37:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 07 14:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97-merged.mount: Deactivated successfully.
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.482 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.483 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.484 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:aa:2b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.484 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:7c:26:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:00 compute-0 podman[383643]: 2025-10-07 14:37:00.691789264 +0000 UTC m=+1.315455171 container remove d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:37:00 compute-0 systemd[1]: libpod-conmon-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope: Deactivated successfully.
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.730 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.730 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:37:00 compute-0 sudo[383536]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:00 compute-0 sudo[383785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:37:00 compute-0 sudo[383785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:00 compute-0 sudo[383785]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:00 compute-0 podman[383770]: 2025-10-07 14:37:00.716255207 +0000 UTC m=+0.102400657 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.828 2 DEBUG nova.virt.libvirt.guest [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct 07 14:37:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct 07 14:37:00 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 07 14:37:00 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:00 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:37:00 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:37:00 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:37:00 compute-0 sudo[383810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:37:00 compute-0 sudo[383810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:00 compute-0 sudo[383810]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.916 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:00 compute-0 sudo[383835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:37:00 compute-0 sudo[383835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:00 compute-0 sudo[383835]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.954 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.955 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3558MB free_disk=59.94276428222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.955 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.956 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.968 2 DEBUG nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.968 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:00 compute-0 nova_compute[259550]: 2025-10-07 14:37:00.969 2 WARNING nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.
Oct 07 14:37:01 compute-0 sudo[383860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:37:01 compute-0 sudo[383860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:01 compute-0 podman[383770]: 2025-10-07 14:37:01.117475498 +0000 UTC m=+0.503620958 container create 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:01 compute-0 systemd[1]: Started libpod-conmon-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope.
Oct 07 14:37:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c041b6e969b41f63435c0c8bb7d38859ca24d5708706e9cc97d1edb64a1a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:01 compute-0 ovn_controller[151684]: 2025-10-07T14:37:01Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:26:98 10.100.0.19
Oct 07 14:37:01 compute-0 ovn_controller[151684]: 2025-10-07T14:37:01Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:26:98 10.100.0.19
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 88e378df-94f3-4a3e-89e1-62f6de052e9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:37:01 compute-0 podman[383770]: 2025-10-07 14:37:01.275740647 +0000 UTC m=+0.661886127 container init 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:37:01 compute-0 podman[383770]: 2025-10-07 14:37:01.286039482 +0000 UTC m=+0.672184932 container start 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:37:01 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : New worker (383922) forked
Oct 07 14:37:01 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : Loading success.
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.349 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.350 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.353 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.390 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.465762285 +0000 UTC m=+0.024816155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.603182277 +0000 UTC m=+0.162236127 container create a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:37:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:37:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548020635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:01 compute-0 systemd[1]: Started libpod-conmon-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope.
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.794 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.801 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:37:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.821 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.831181279 +0000 UTC m=+0.390235139 container init a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.838414332 +0000 UTC m=+0.397468182 container start a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:37:01 compute-0 pedantic_shannon[383984]: 167 167
Oct 07 14:37:01 compute-0 systemd[1]: libpod-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope: Deactivated successfully.
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.872 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:37:01 compute-0 nova_compute[259550]: 2025-10-07 14:37:01.872 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.880538248 +0000 UTC m=+0.439592098 container attach a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:37:01 compute-0 podman[383948]: 2025-10-07 14:37:01.881581025 +0000 UTC m=+0.440634885 container died a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:37:02 compute-0 ceph-mon[74295]: pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 07 14:37:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1548020635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-615f2add1e595ba7c3c03a716290e7a2380edea0329954e9116706c1d9708266-merged.mount: Deactivated successfully.
Oct 07 14:37:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 07 14:37:02 compute-0 podman[383948]: 2025-10-07 14:37:02.347622028 +0000 UTC m=+0.906675878 container remove a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:37:02 compute-0 nova_compute[259550]: 2025-10-07 14:37:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:02 compute-0 systemd[1]: libpod-conmon-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope: Deactivated successfully.
Oct 07 14:37:02 compute-0 podman[384008]: 2025-10-07 14:37:02.531919093 +0000 UTC m=+0.025539373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:37:02 compute-0 podman[384008]: 2025-10-07 14:37:02.761828196 +0000 UTC m=+0.255448446 container create 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:37:02 compute-0 systemd[1]: Started libpod-conmon-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope.
Oct 07 14:37:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:03 compute-0 podman[384008]: 2025-10-07 14:37:03.009303989 +0000 UTC m=+0.502924249 container init 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:37:03 compute-0 podman[384008]: 2025-10-07 14:37:03.020011905 +0000 UTC m=+0.513632165 container start 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.053 2 DEBUG nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.053 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.054 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.054 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.055 2 DEBUG nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.055 2 WARNING nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.
Oct 07 14:37:03 compute-0 podman[384008]: 2025-10-07 14:37:03.138731977 +0000 UTC m=+0.632352257 container attach 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.383 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.384 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.414 2 DEBUG nova.objects.instance [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.442 2 DEBUG nova.virt.libvirt.vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.443 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.444 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.449 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.452 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.454 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Attempting to detach device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.455 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:7c:26:98"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <target dev="tap444fd1ba-91"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </interface>
Oct 07 14:37:03 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.630 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.635 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <name>instance-00000072</name>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='serial'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='uuid'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk' index='2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config' index='1'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:aa:2b:a2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='tap68807b3e-50'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:7c:26:98'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='tap444fd1ba-91'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </target>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </console>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c220,c590</label>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c220,c590</imagelabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:37:03 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.635 2 INFO nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the persistent domain config.
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.636 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] (1/8): Attempting to detach device tap444fd1ba-91 with device alias net1 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.636 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:7c:26:98"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <target dev="tap444fd1ba-91"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </interface>
Oct 07 14:37:03 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:37:03 compute-0 kernel: tap444fd1ba-91 (unregistering): left promiscuous mode
Oct 07 14:37:03 compute-0 NetworkManager[44949]: <info>  [1759847823.6956] device (tap444fd1ba-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:37:03 compute-0 ovn_controller[151684]: 2025-10-07T14:37:03Z|01203|binding|INFO|Releasing lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 from this chassis (sb_readonly=0)
Oct 07 14:37:03 compute-0 ovn_controller[151684]: 2025-10-07T14:37:03Z|01204|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 down in Southbound
Oct 07 14:37:03 compute-0 ovn_controller[151684]: 2025-10-07T14:37:03Z|01205|binding|INFO|Removing iface tap444fd1ba-91 ovn-installed in OVS
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.720 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:26:98 10.100.0.19'], port_security=['fa:16:3e:7c:26:98 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74cd8327-fb9a-441d-85b6-cebb9054477c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.721 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 in datapath 03ab5118-02e3-4fcc-b38c-4eebc4305f96 unbound from our chassis
Oct 07 14:37:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.722 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03ab5118-02e3-4fcc-b38c-4eebc4305f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:37:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[530517da-55cc-4793-af44-7b175a46ef99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.728 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 namespace which is not needed anymore
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.730 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759847823.72822, 88e378df-94f3-4a3e-89e1-62f6de052e9d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.731 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Start waiting for the detach event from libvirt for device tap444fd1ba-91 with device alias net1 for instance 88e378df-94f3-4a3e-89e1-62f6de052e9d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.731 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.740 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <name>instance-00000072</name>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='serial'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='uuid'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk' index='2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config' index='1'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:aa:2b:a2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target dev='tap68807b3e-50'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       </target>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </console>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </input>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c220,c590</label>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c220,c590</imagelabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:37:03 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.740 2 INFO nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the live domain config.
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.virt.libvirt.vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.742 2 DEBUG os_vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap444fd1ba-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.753 2 INFO os_vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91')
Oct 07 14:37:03 compute-0 nova_compute[259550]: 2025-10-07 14:37:03.753 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:37:03</nova:creationTime>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct 07 14:37:03 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:37:03 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:37:03 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:37:03 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:37:03 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:37:03 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : haproxy version is 2.8.14-c23fe91
Oct 07 14:37:03 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : path to executable is /usr/sbin/haproxy
Oct 07 14:37:03 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [WARNING]  (383920) : Exiting Master process...
Oct 07 14:37:03 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [ALERT]    (383920) : Current worker (383922) exited with code 143 (Terminated)
Oct 07 14:37:03 compute-0 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [WARNING]  (383920) : All workers exited. Exiting... (0)
Oct 07 14:37:03 compute-0 systemd[1]: libpod-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope: Deactivated successfully.
Oct 07 14:37:03 compute-0 podman[384060]: 2025-10-07 14:37:03.913419637 +0000 UTC m=+0.094072425 container died 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]: {
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_id": 2,
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "type": "bluestore"
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     },
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_id": 1,
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "type": "bluestore"
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     },
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_id": 0,
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:         "type": "bluestore"
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]:     }
Oct 07 14:37:04 compute-0 hardcore_proskuriakova[384024]: }
Oct 07 14:37:04 compute-0 systemd[1]: libpod-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Deactivated successfully.
Oct 07 14:37:04 compute-0 systemd[1]: libpod-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Consumed 1.002s CPU time.
Oct 07 14:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064-userdata-shm.mount: Deactivated successfully.
Oct 07 14:37:04 compute-0 ceph-mon[74295]: pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 07 14:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2c041b6e969b41f63435c0c8bb7d38859ca24d5708706e9cc97d1edb64a1a9-merged.mount: Deactivated successfully.
Oct 07 14:37:04 compute-0 podman[384008]: 2025-10-07 14:37:04.231223529 +0000 UTC m=+1.724843809 container died 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 07 14:37:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 07 14:37:04 compute-0 podman[384060]: 2025-10-07 14:37:04.455677486 +0000 UTC m=+0.636330274 container cleanup 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23-merged.mount: Deactivated successfully.
Oct 07 14:37:04 compute-0 podman[384108]: 2025-10-07 14:37:04.54901104 +0000 UTC m=+0.483930822 container remove 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 07 14:37:04 compute-0 systemd[1]: libpod-conmon-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Deactivated successfully.
Oct 07 14:37:04 compute-0 sudo[383860]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:04 compute-0 podman[384124]: 2025-10-07 14:37:04.584016536 +0000 UTC m=+0.100579469 container remove 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:37:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:37:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:37:04 compute-0 systemd[1]: libpod-conmon-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope: Deactivated successfully.
Oct 07 14:37:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79cd9b62-7da0-4eac-940d-dd90a86027d4]: (4, ('Tue Oct  7 02:37:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 (859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064)\n859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064\nTue Oct  7 02:37:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 (859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064)\n859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12a94bdb-246f-4956-8101-c681c3567787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.599 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03ab5118-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:04 compute-0 nova_compute[259550]: 2025-10-07 14:37:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:37:04 compute-0 kernel: tap03ab5118-00: left promiscuous mode
Oct 07 14:37:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 60a689a7-0fcf-408b-b45c-9149a3c42c42 does not exist
Oct 07 14:37:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3cdfc09d-b100-4cb4-b766-923c33efc095 does not exist
Oct 07 14:37:04 compute-0 nova_compute[259550]: 2025-10-07 14:37:04.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[190c7520-d1d3-4d43-9bed-6474852b790f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0d3340-cf65-4892-a3ae-a5622833ddde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[55bb0203-d826-44bb-ae54-f6597aebfaac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 sudo[384137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d162cd-0705-430b-9b1c-9caa5367ff9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840358, 'reachable_time': 26257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384162, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 sudo[384137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d03ab5118\x2d02e3\x2d4fcc\x2db38c\x2d4eebc4305f96.mount: Deactivated successfully.
Oct 07 14:37:04 compute-0 sudo[384137]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.679 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:37:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.679 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5bb309-8432-4c3b-87e2-b759ee163c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:04 compute-0 sudo[384165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:37:04 compute-0 sudo[384165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:37:04 compute-0 sudo[384165]: pam_unix(sudo:session): session closed for user root
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.199 2 DEBUG nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.200 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.200 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.201 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.201 2 DEBUG nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.201 2 WARNING nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.452 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.453 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:05 compute-0 nova_compute[259550]: 2025-10-07 14:37:05.454 2 DEBUG nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:37:05 compute-0 ceph-mon[74295]: pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 07 14:37:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:37:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:37:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 07 14:37:06 compute-0 nova_compute[259550]: 2025-10-07 14:37:06.642 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:06 compute-0 nova_compute[259550]: 2025-10-07 14:37:06.682 2 INFO nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:37:06 compute-0 nova_compute[259550]: 2025-10-07 14:37:06.683 2 DEBUG nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:06 compute-0 nova_compute[259550]: 2025-10-07 14:37:06.705 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:06 compute-0 nova_compute[259550]: 2025-10-07 14:37:06.737 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.292 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.292 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.293 2 WARNING nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.294 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-deleted-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:07 compute-0 ovn_controller[151684]: 2025-10-07T14:37:07Z|01206|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct 07 14:37:07 compute-0 nova_compute[259550]: 2025-10-07 14:37:07.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:07 compute-0 ceph-mon[74295]: pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 07 14:37:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.152 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.153 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.153 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.154 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.154 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.155 2 INFO nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Terminating instance
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.156 2 DEBUG nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 07 14:37:08 compute-0 kernel: tap68807b3e-50 (unregistering): left promiscuous mode
Oct 07 14:37:08 compute-0 NetworkManager[44949]: <info>  [1759847828.3882] device (tap68807b3e-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 ovn_controller[151684]: 2025-10-07T14:37:08Z|01207|binding|INFO|Releasing lport 68807b3e-505c-41ea-96e4-f03b329e4c69 from this chassis (sb_readonly=0)
Oct 07 14:37:08 compute-0 ovn_controller[151684]: 2025-10-07T14:37:08Z|01208|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 down in Southbound
Oct 07 14:37:08 compute-0 ovn_controller[151684]: 2025-10-07T14:37:08Z|01209|binding|INFO|Removing iface tap68807b3e-50 ovn-installed in OVS
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.417 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:2b:a2 10.100.0.14'], port_security=['fa:16:3e:aa:2b:a2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49bffbb9-d898-4935-821f-8756d7f43377', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3993e5e4-e68a-46c5-a740-21f6d2b71498, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68807b3e-505c-41ea-96e4-f03b329e4c69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.419 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68807b3e-505c-41ea-96e4-f03b329e4c69 in datapath 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 unbound from our chassis
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.420 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.422 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be6da688-ccfc-4bef-af0c-3609466696f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.423 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 namespace which is not needed anymore
Oct 07 14:37:08 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 07 14:37:08 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 13.973s CPU time.
Oct 07 14:37:08 compute-0 systemd-machined[214580]: Machine qemu-143-instance-00000072 terminated.
Oct 07 14:37:08 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : haproxy version is 2.8.14-c23fe91
Oct 07 14:37:08 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : path to executable is /usr/sbin/haproxy
Oct 07 14:37:08 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [WARNING]  (382852) : Exiting Master process...
Oct 07 14:37:08 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [ALERT]    (382852) : Current worker (382854) exited with code 143 (Terminated)
Oct 07 14:37:08 compute-0 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [WARNING]  (382852) : All workers exited. Exiting... (0)
Oct 07 14:37:08 compute-0 systemd[1]: libpod-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope: Deactivated successfully.
Oct 07 14:37:08 compute-0 conmon[382848]: conmon c5e808b5b8a3e3272d23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope/container/memory.events
Oct 07 14:37:08 compute-0 podman[384214]: 2025-10-07 14:37:08.584235033 +0000 UTC m=+0.056494791 container died c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.594 2 INFO nova.virt.libvirt.driver [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance destroyed successfully.
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.601 2 DEBUG nova.objects.instance [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.616 2 DEBUG nova.virt.libvirt.vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.617 2 DEBUG nova.network.os_vif_util [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.617 2 DEBUG nova.network.os_vif_util [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.618 2 DEBUG os_vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68807b3e-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.626 2 INFO os_vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50')
Oct 07 14:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecad819b79e80b6f43b313371589ac92f170aa571899affb8f76f01510b64cf5-merged.mount: Deactivated successfully.
Oct 07 14:37:08 compute-0 podman[384214]: 2025-10-07 14:37:08.6354093 +0000 UTC m=+0.107669058 container cleanup c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:37:08 compute-0 systemd[1]: libpod-conmon-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope: Deactivated successfully.
Oct 07 14:37:08 compute-0 podman[384270]: 2025-10-07 14:37:08.705249037 +0000 UTC m=+0.045757744 container remove c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.711 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[510ea67f-0f46-4644-b129-e774e881e958]: (4, ('Tue Oct  7 02:37:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 (c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b)\nc5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b\nTue Oct  7 02:37:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 (c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b)\nc5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a81322-b8bf-4afe-a209-76ab6bf89ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6cb0ef-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 kernel: tap4e6cb0ef-50: left promiscuous mode
Oct 07 14:37:08 compute-0 nova_compute[259550]: 2025-10-07 14:37:08.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c70bd45e-1c4b-420f-aa89-0c54be5b79fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fdad02-a6ca-4fda-ba63-e5e95005a580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1b3b81-05ab-4372-a24c-1c0687b12e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.778 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[832b3bdf-a3f0-47a7-907b-9427cc03ebd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837327, 'reachable_time': 35570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384289, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d4e6cb0ef\x2d5d1e\x2d4cf2\x2da50c\x2d8f2d313d6a38.mount: Deactivated successfully.
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.781 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:37:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.781 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8df40e-b48a-46a1-82f4-d9068db78d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.050 2 INFO nova.virt.libvirt.driver [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deleting instance files /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d_del
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.051 2 INFO nova.virt.libvirt.driver [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deletion of /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d_del complete
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.105 2 INFO nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.106 2 DEBUG oslo.service.loopingcall [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.107 2 DEBUG nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.107 2 DEBUG nova.network.neutron [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.368 2 DEBUG nova.compute.manager [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.369 2 DEBUG nova.compute.manager [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.369 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.370 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:09 compute-0 nova_compute[259550]: 2025-10-07 14:37:09.370 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:09 compute-0 ceph-mon[74295]: pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 07 14:37:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 87 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 10 KiB/s wr, 6 op/s
Oct 07 14:37:10 compute-0 nova_compute[259550]: 2025-10-07 14:37:10.394 2 DEBUG nova.network.neutron [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:10 compute-0 nova_compute[259550]: 2025-10-07 14:37:10.496 2 INFO nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 1.39 seconds to deallocate network for instance.
Oct 07 14:37:10 compute-0 nova_compute[259550]: 2025-10-07 14:37:10.643 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:10 compute-0 nova_compute[259550]: 2025-10-07 14:37:10.644 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:10 compute-0 nova_compute[259550]: 2025-10-07 14:37:10.707 2 DEBUG oslo_concurrency.processutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:37:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908663976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.207 2 DEBUG oslo_concurrency.processutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.213 2 DEBUG nova.compute.provider_tree [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.250 2 DEBUG nova.scheduler.client.report [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.295 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.322 2 INFO nova.scheduler.client.report [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 88e378df-94f3-4a3e-89e1-62f6de052e9d
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.410 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.411 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.514 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.516 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.612 2 DEBUG nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-deleted-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.612 2 INFO nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Neutron deleted interface 68807b3e-505c-41ea-96e4-f03b329e4c69; detaching it from the instance and deleting it from the info cache
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.613 2 DEBUG nova.network.neutron [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.615 2 DEBUG nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Detach interface failed, port_id=68807b3e-505c-41ea-96e4-f03b329e4c69, reason: Instance 88e378df-94f3-4a3e-89e1-62f6de052e9d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:37:11 compute-0 nova_compute[259550]: 2025-10-07 14:37:11.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:11 compute-0 ceph-mon[74295]: pgmap v2242: 305 pgs: 305 active+clean; 87 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 10 KiB/s wr, 6 op/s
Oct 07 14:37:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1908663976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 82 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 15 op/s
Oct 07 14:37:12 compute-0 nova_compute[259550]: 2025-10-07 14:37:12.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:13 compute-0 podman[384313]: 2025-10-07 14:37:13.067405867 +0000 UTC m=+0.055241848 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:37:13 compute-0 podman[384314]: 2025-10-07 14:37:13.105201997 +0000 UTC m=+0.089480782 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:37:13 compute-0 nova_compute[259550]: 2025-10-07 14:37:13.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:13 compute-0 ceph-mon[74295]: pgmap v2243: 305 pgs: 305 active+clean; 82 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 15 op/s
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.713293) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833713382, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2065, "num_deletes": 251, "total_data_size": 3350444, "memory_usage": 3398096, "flush_reason": "Manual Compaction"}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833732613, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3272563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45115, "largest_seqno": 47179, "table_properties": {"data_size": 3263250, "index_size": 5807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19190, "raw_average_key_size": 20, "raw_value_size": 3244683, "raw_average_value_size": 3415, "num_data_blocks": 258, "num_entries": 950, "num_filter_entries": 950, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847618, "oldest_key_time": 1759847618, "file_creation_time": 1759847833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 19368 microseconds, and 7265 cpu microseconds.
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.732670) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3272563 bytes OK
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.732690) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734030) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734046) EVENT_LOG_v1 {"time_micros": 1759847833734040, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3341762, prev total WAL file size 3341762, number of live WAL files 2.
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.735171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3195KB)], [104(8520KB)]
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833735207, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11997194, "oldest_snapshot_seqno": -1}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6999 keys, 10308845 bytes, temperature: kUnknown
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833803382, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10308845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10260891, "index_size": 29346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 180440, "raw_average_key_size": 25, "raw_value_size": 10134444, "raw_average_value_size": 1447, "num_data_blocks": 1155, "num_entries": 6999, "num_filter_entries": 6999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.803685) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10308845 bytes
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.805072) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.7 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 7513, records dropped: 514 output_compression: NoCompression
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.805107) EVENT_LOG_v1 {"time_micros": 1759847833805083, "job": 62, "event": "compaction_finished", "compaction_time_micros": 68278, "compaction_time_cpu_micros": 27658, "output_level": 6, "num_output_files": 1, "total_output_size": 10308845, "num_input_records": 7513, "num_output_records": 6999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833805891, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833807634, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.735095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:13 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 28 op/s
Oct 07 14:37:15 compute-0 ceph-mon[74295]: pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 28 op/s
Oct 07 14:37:15 compute-0 nova_compute[259550]: 2025-10-07 14:37:15.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.842 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:a2:29 2001:db8:0:1:f816:3eff:fe7c:a229 2001:db8::f816:3eff:fe7c:a229'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:a229/64 2001:db8::f816:3eff:fe7c:a229/64', 'neutron:device_id': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ec328f15-1843-4594-8d39-b0d2d9796360) old=Port_Binding(mac=['fa:16:3e:7c:a2:29 2001:db8::f816:3eff:fe7c:a229'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7c:a229/64', 'neutron:device_id': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.844 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ec328f15-1843-4594-8d39-b0d2d9796360 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 updated
Oct 07 14:37:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.845 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c956141-6a21-499d-99b1-885d1a2972f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:37:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f808a6b-d63e-48f8-8b9e-34394e36eaec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:17 compute-0 nova_compute[259550]: 2025-10-07 14:37:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:17 compute-0 ceph-mon[74295]: pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:18 compute-0 nova_compute[259550]: 2025-10-07 14:37:18.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:19 compute-0 nova_compute[259550]: 2025-10-07 14:37:19.792 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:19 compute-0 nova_compute[259550]: 2025-10-07 14:37:19.793 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:19 compute-0 ceph-mon[74295]: pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:19 compute-0 nova_compute[259550]: 2025-10-07 14:37:19.935 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.485 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.486 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.497 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:37:20 compute-0 nova_compute[259550]: 2025-10-07 14:37:20.497 2 INFO nova.compute.claims [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.074 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:37:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945680067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.537 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.543 2 DEBUG nova.compute.provider_tree [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.637 2 DEBUG nova.scheduler.client.report [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.897 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:21 compute-0 nova_compute[259550]: 2025-10-07 14:37:21.898 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:37:22 compute-0 ceph-mon[74295]: pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct 07 14:37:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2945680067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.119 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.120 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.221 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.296 2 DEBUG nova.policy [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 597 B/s wr, 22 op/s
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.438 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:37:22
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms']
Oct 07 14:37:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.759 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.761 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.761 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating image(s)
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.783 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.804 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.825 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.829 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.906 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.907 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.908 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.909 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.933 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:22 compute-0 nova_compute[259550]: 2025-10-07 14:37:22.939 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e64a021-390b-4a0c-bb4c-75a19f274777_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:37:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.423 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Successfully created port: 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.523 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e64a021-390b-4a0c-bb4c-75a19f274777_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.588 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.621 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847828.593345, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.621 2 INFO nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Stopped (Lifecycle Event)
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.664 2 DEBUG nova.compute.manager [None req-4b741e53-f5a4-4c44-981e-2dad65ab448f - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.761 2 DEBUG nova.objects.instance [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.818 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.818 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Ensure instance console log exists: /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:23 compute-0 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:24 compute-0 ceph-mon[74295]: pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 597 B/s wr, 22 op/s
Oct 07 14:37:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 66 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 793 KiB/s wr, 36 op/s
Oct 07 14:37:26 compute-0 ceph-mon[74295]: pgmap v2249: 305 pgs: 305 active+clean; 66 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 793 KiB/s wr, 36 op/s
Oct 07 14:37:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.725 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Successfully updated port: 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.727 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.727 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.844 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.845 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.845 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:37:27 compute-0 nova_compute[259550]: 2025-10-07 14:37:27.896 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:37:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:28 compute-0 ceph-mon[74295]: pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.113 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.114 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.119 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.119 2 INFO nova.compute.claims [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.151 2 DEBUG nova.compute.manager [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.152 2 DEBUG nova.compute.manager [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.152 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.647 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:28 compute-0 nova_compute[259550]: 2025-10-07 14:37:28.701 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:37:29 compute-0 podman[384566]: 2025-10-07 14:37:29.067902246 +0000 UTC m=+0.054692733 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:37:29 compute-0 podman[384565]: 2025-10-07 14:37:29.06805627 +0000 UTC m=+0.058171745 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:37:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:37:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2404864662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.142 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.147 2 DEBUG nova.compute.provider_tree [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.217 2 DEBUG nova.scheduler.client.report [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.294 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.295 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.530 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.530 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.696 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.743 2 DEBUG nova.policy [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.747 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.880 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.972 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.972 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance network_info: |[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.973 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.974 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.979 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start _get_guest_xml network_info=[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.989 2 WARNING nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.995 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.996 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:37:29 compute-0 nova_compute[259550]: 2025-10-07 14:37:29.999 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.000 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.000 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.001 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.001 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.004 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.004 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.005 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.009 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:30 compute-0 ceph-mon[74295]: pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2404864662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.203 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.205 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.205 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating image(s)
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.228 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.249 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.273 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.276 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.349 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.372 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.376 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492944341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.478 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.506 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.510 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.701 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.764 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.865 2 DEBUG nova.objects.instance [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.923 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.923 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Ensure instance console log exists: /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830871132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.997 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.999 2 DEBUG nova.virt.libvirt.vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:22Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:37:30 compute-0 nova_compute[259550]: 2025-10-07 14:37:30.999 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.000 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.001 2 DEBUG nova.objects.instance [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/492944341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1830871132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.123 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <uuid>4e64a021-390b-4a0c-bb4c-75a19f274777</uuid>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <name>instance-00000073</name>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1824501664</nova:name>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:37:29</nova:creationTime>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <nova:port uuid="6e86ce79-9f1b-4e53-8ae5-918e8402b8c6">
Oct 07 14:37:31 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <system>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="serial">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="uuid">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </system>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <os>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </os>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <features>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </features>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk">
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config">
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6d:38:87"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <target dev="tap6e86ce79-9f"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log" append="off"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <video>
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </video>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:37:31 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:37:31 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:37:31 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:37:31 compute-0 nova_compute[259550]: </domain>
Oct 07 14:37:31 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.123 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Preparing to wait for external event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.125 2 DEBUG nova.virt.libvirt.vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:22Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.125 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.126 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.126 2 DEBUG os_vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e86ce79-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e86ce79-9f, col_values=(('external_ids', {'iface-id': '6e86ce79-9f1b-4e53-8ae5-918e8402b8c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:38:87', 'vm-uuid': '4e64a021-390b-4a0c-bb4c-75a19f274777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:31 compute-0 NetworkManager[44949]: <info>  [1759847851.1370] manager: (tap6e86ce79-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.141 2 INFO os_vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.241 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully created port: 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.284 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.284 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.285 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:6d:38:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.285 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Using config drive
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.307 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.772 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.773 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.839 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.919 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating config drive at /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config
Oct 07 14:37:31 compute-0 nova_compute[259550]: 2025-10-07 14:37:31.928 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst8jjpv0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.071 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst8jjpv0" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.097 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.100 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:32 compute-0 ceph-mon[74295]: pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.331 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.333 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deleting local config drive /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config because it was imported into RBD.
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 102 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.4 MiB/s wr, 28 op/s
Oct 07 14:37:32 compute-0 kernel: tap6e86ce79-9f: entered promiscuous mode
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.3821] manager: (tap6e86ce79-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Oct 07 14:37:32 compute-0 ovn_controller[151684]: 2025-10-07T14:37:32Z|01210|binding|INFO|Claiming lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for this chassis.
Oct 07 14:37:32 compute-0 ovn_controller[151684]: 2025-10-07T14:37:32Z|01211|binding|INFO|6e86ce79-9f1b-4e53-8ae5-918e8402b8c6: Claiming fa:16:3e:6d:38:87 10.100.0.6
Oct 07 14:37:32 compute-0 systemd-udevd[384903]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.4364] device (tap6e86ce79-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.4375] device (tap6e86ce79-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:37:32 compute-0 systemd-machined[214580]: New machine qemu-144-instance-00000073.
Oct 07 14:37:32 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Oct 07 14:37:32 compute-0 ovn_controller[151684]: 2025-10-07T14:37:32Z|01212|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 ovn-installed in OVS
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004631311250569773 of space, bias 1.0, pg target 0.1389393375170932 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:37:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:37:32 compute-0 ovn_controller[151684]: 2025-10-07T14:37:32Z|01213|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 up in Southbound
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.650 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 bound to our chassis
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.654 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f56df2f-3e63-4bbc-b884-9c21e2949c21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.670 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28efb734-01 in ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.672 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28efb734-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.672 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9900ef-1da8-4689-9656-c2d350c3fa36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6af87485-a98e-407d-9dac-68edb8da40ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.695 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a91ca161-be92-4436-9376-a1c50f4f41ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:37:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:37:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:37:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9313090-55fe-4608-9a6a-3bf8184c38ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.761 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b781cfb3-bd0d-4ead-8c9b-d296e568c3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.767 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2368fe16-eb3c-40fb-ac66-b1946e96bb4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.7678] manager: (tap28efb734-00): new Veth device (/org/freedesktop/NetworkManager/Devices/487)
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.802 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef85c2df-b0fb-48db-a64a-331b8353a234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.805 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc021f3-03fa-4083-b61c-c60912d8c2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.8307] device (tap28efb734-00): carrier: link connected
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.839 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c895d9e9-2b87-4d15-ad11-abd90dfb8158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.856 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba897c5-7a0b-46fe-ba3e-346ef0c3659c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843639, 'reachable_time': 32519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384980, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d022c278-bcac-4d18-a3ee-ccd2e3a2b71f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b7c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 843639, 'tstamp': 843639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384981, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.893 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b54082a1-1d7b-495f-bdd5-93d2d44cde70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843639, 'reachable_time': 32519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384983, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.923 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ebea8767-071e-4d75-8c1b-eb081db872c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.982 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[039a3a74-2f64-4b91-85f7-ad00f8407512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.983 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28efb734-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 NetworkManager[44949]: <info>  [1759847852.9864] manager: (tap28efb734-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Oct 07 14:37:32 compute-0 kernel: tap28efb734-00: entered promiscuous mode
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28efb734-00, col_values=(('external_ids', {'iface-id': '54dfc7fb-c548-460a-8b73-708819b15ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:32 compute-0 nova_compute[259550]: 2025-10-07 14:37:32.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:32 compute-0 ovn_controller[151684]: 2025-10-07T14:37:32Z|01214|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct 07 14:37:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.005 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.006 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57384803-7219-4929-997d-e9dd0ec0a4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.006 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:37:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.007 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'env', 'PROCESS_TAG=haproxy-28efb734-0152-4914-9f31-b818d894be70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28efb734-0152-4914-9f31-b818d894be70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:37:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:37:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.284 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.2841039, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Started (Lifecycle Event)
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.314 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully created port: 1a175a4f-4ef8-469e-b213-5f8d404858c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.358 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.362 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.2843251, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.362 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Paused (Lifecycle Event)
Oct 07 14:37:33 compute-0 podman[385014]: 2025-10-07 14:37:33.374403928 +0000 UTC m=+0.049051722 container create ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:37:33 compute-0 systemd[1]: Started libpod-conmon-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope.
Oct 07 14:37:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a358810f9f8251b9e86fcc8b171db7901750b59cf1dbd689c2d91dd47284bb72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.432 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.436 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:33 compute-0 podman[385014]: 2025-10-07 14:37:33.346544204 +0000 UTC m=+0.021191978 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:37:33 compute-0 podman[385014]: 2025-10-07 14:37:33.456572084 +0000 UTC m=+0.131219858 container init ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:37:33 compute-0 podman[385014]: 2025-10-07 14:37:33.461890116 +0000 UTC m=+0.136537870 container start ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 14:37:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : New worker (385035) forked
Oct 07 14:37:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : Loading success.
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.505 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.650 2 DEBUG nova.compute.manager [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.651 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.651 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.652 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.652 2 DEBUG nova.compute.manager [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Processing event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.653 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.6574464, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Resumed (Lifecycle Event)
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.661 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.667 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance spawned successfully.
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.668 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.722 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.775 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.776 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.776 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.777 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.777 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.778 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:33 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:37:33 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.794 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.992 2 INFO nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 11.23 seconds to spawn the instance on the hypervisor.
Oct 07 14:37:33 compute-0 nova_compute[259550]: 2025-10-07 14:37:33.993 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:34 compute-0 nova_compute[259550]: 2025-10-07 14:37:34.102 2 INFO nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 13.69 seconds to build instance.
Oct 07 14:37:34 compute-0 ceph-mon[74295]: pgmap v2253: 305 pgs: 305 active+clean; 102 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.4 MiB/s wr, 28 op/s
Oct 07 14:37:34 compute-0 nova_compute[259550]: 2025-10-07 14:37:34.128 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Oct 07 14:37:34 compute-0 nova_compute[259550]: 2025-10-07 14:37:34.847 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully updated port: 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.773 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.774 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.774 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.775 2 WARNING nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state None.
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.777 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:35 compute-0 nova_compute[259550]: 2025-10-07 14:37:35.952 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.132 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully updated port: 1a175a4f-4ef8-469e-b213-5f8d404858c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.168 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:36 compute-0 ceph-mon[74295]: pgmap v2254: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.343 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.403 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.404 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.404 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.630 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:37:36 compute-0 nova_compute[259550]: 2025-10-07 14:37:36.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:36.941 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:36.943 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:37:37 compute-0 NetworkManager[44949]: <info>  [1759847857.3690] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Oct 07 14:37:37 compute-0 NetworkManager[44949]: <info>  [1759847857.3699] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Oct 07 14:37:37 compute-0 nova_compute[259550]: 2025-10-07 14:37:37.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:37 compute-0 nova_compute[259550]: 2025-10-07 14:37:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:37 compute-0 ovn_controller[151684]: 2025-10-07T14:37:37Z|01215|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct 07 14:37:37 compute-0 ovn_controller[151684]: 2025-10-07T14:37:37Z|01216|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct 07 14:37:37 compute-0 nova_compute[259550]: 2025-10-07 14:37:37.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:38 compute-0 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG nova.compute.manager [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:38 compute-0 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG nova.compute.manager [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-1a175a4f-4ef8-469e-b213-5f8d404858c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:38 compute-0 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:38 compute-0 ceph-mon[74295]: pgmap v2255: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 07 14:37:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.203 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.237 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.238 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance network_info: |[{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 
6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.239 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.240 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 1a175a4f-4ef8-469e-b213-5f8d404858c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.248 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start _get_guest_xml network_info=[{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.253 2 WARNING nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.261 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.262 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.265 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.265 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.266 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.266 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.269 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.272 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486510496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.766 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.787 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:39 compute-0 nova_compute[259550]: 2025-10-07 14:37:39.792 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:40 compute-0 ceph-mon[74295]: pgmap v2256: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 07 14:37:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3486510496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144698385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.254 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.256 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.256 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.257 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.258 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.258 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.259 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.260 2 DEBUG nova.objects.instance [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG nova.compute.manager [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG nova.compute.manager [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.303 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.303 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.418 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <uuid>23c0ce36-9e34-4a73-9f99-3b79f8623238</uuid>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <name>instance-00000074</name>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-78341121</nova:name>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:37:39</nova:creationTime>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:port uuid="72a26c5e-6ceb-4ee6-b79b-d105eac8054b">
Oct 07 14:37:40 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <nova:port uuid="1a175a4f-4ef8-469e-b213-5f8d404858c8">
Oct 07 14:37:40 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe72:c44a" ipVersion="6"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe72:c44a" ipVersion="6"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <system>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="serial">23c0ce36-9e34-4a73-9f99-3b79f8623238</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="uuid">23c0ce36-9e34-4a73-9f99-3b79f8623238</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </system>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <os>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </os>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <features>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </features>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/23c0ce36-9e34-4a73-9f99-3b79f8623238_disk">
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config">
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:40 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:dc:bc:87"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <target dev="tap72a26c5e-6c"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:72:c4:4a"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <target dev="tap1a175a4f-4e"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/console.log" append="off"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <video>
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </video>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:37:40 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:37:40 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:37:40 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:37:40 compute-0 nova_compute[259550]: </domain>
Oct 07 14:37:40 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.419 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Preparing to wait for external event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Preparing to wait for external event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.422 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.422 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.423 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.423 2 DEBUG os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72a26c5e-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72a26c5e-6c, col_values=(('external_ids', {'iface-id': '72a26c5e-6ceb-4ee6-b79b-d105eac8054b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:bc:87', 'vm-uuid': '23c0ce36-9e34-4a73-9f99-3b79f8623238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 NetworkManager[44949]: <info>  [1759847860.4306] manager: (tap72a26c5e-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.438 2 INFO os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c')
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.439 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.439 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.440 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.440 2 DEBUG os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a175a4f-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a175a4f-4e, col_values=(('external_ids', {'iface-id': '1a175a4f-4ef8-469e-b213-5f8d404858c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:c4:4a', 'vm-uuid': '23c0ce36-9e34-4a73-9f99-3b79f8623238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 NetworkManager[44949]: <info>  [1759847860.4463] manager: (tap1a175a4f-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.452 2 INFO os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e')
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.718 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.718 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.719 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:dc:bc:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.719 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:72:c4:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.719 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Using config drive
Oct 07 14:37:40 compute-0 nova_compute[259550]: 2025-10-07 14:37:40.737 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3144698385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:41 compute-0 nova_compute[259550]: 2025-10-07 14:37:41.741 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating config drive at /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config
Oct 07 14:37:41 compute-0 nova_compute[259550]: 2025-10-07 14:37:41.745 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyj0dzqo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:41 compute-0 nova_compute[259550]: 2025-10-07 14:37:41.886 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyj0dzqo" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:41 compute-0 nova_compute[259550]: 2025-10-07 14:37:41.907 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:41 compute-0 nova_compute[259550]: 2025-10-07 14:37:41.916 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.000 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 1a175a4f-4ef8-469e-b213-5f8d404858c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.001 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.162 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:42 compute-0 ceph-mon[74295]: pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:37:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.594 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.595 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.750 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.819 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.820 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deleting local config drive /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config because it was imported into RBD.
Oct 07 14:37:42 compute-0 kernel: tap72a26c5e-6c: entered promiscuous mode
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.8873] manager: (tap72a26c5e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01217|binding|INFO|Claiming lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b for this chassis.
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01218|binding|INFO|72a26c5e-6ceb-4ee6-b79b-d105eac8054b: Claiming fa:16:3e:dc:bc:87 10.100.0.12
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.9032] manager: (tap1a175a4f-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Oct 07 14:37:42 compute-0 systemd-udevd[385185]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:42 compute-0 kernel: tap1a175a4f-4e: entered promiscuous mode
Oct 07 14:37:42 compute-0 systemd-udevd[385186]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01219|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b ovn-installed in OVS
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01220|if_status|INFO|Dropped 7 log messages in last 122 seconds (most recently, 122 seconds ago) due to excessive rate
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01221|if_status|INFO|Not updating pb chassis for 1a175a4f-4ef8-469e-b213-5f8d404858c8 now as sb is readonly
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.9349] device (tap72a26c5e-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.9368] device (tap72a26c5e-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.9386] device (tap1a175a4f-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:37:42 compute-0 NetworkManager[44949]: <info>  [1759847862.9399] device (tap1a175a4f-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01222|binding|INFO|Claiming lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 for this chassis.
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01223|binding|INFO|1a175a4f-4ef8-469e-b213-5f8d404858c8: Claiming fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01224|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b up in Southbound
Oct 07 14:37:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.945 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.948 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:bc:87 10.100.0.12'], port_security=['fa:16:3e:dc:bc:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72a26c5e-6ceb-4ee6-b79b-d105eac8054b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.949 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b bound to our chassis
Oct 07 14:37:42 compute-0 ovn_controller[151684]: 2025-10-07T14:37:42Z|01225|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 ovn-installed in OVS
Oct 07 14:37:42 compute-0 nova_compute[259550]: 2025-10-07 14:37:42.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:42 compute-0 systemd-machined[214580]: New machine qemu-145-instance-00000074.
Oct 07 14:37:42 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Oct 07 14:37:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.002 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab071713-26a3-47f5-a354-1508d7b6595f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.027 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5dfb73c9-a1 in ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.029 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5dfb73c9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27be6f-c426-4cad-8027-2c6d9a056a9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[808ea68f-8025-4922-8951-8364259d61c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_controller[151684]: 2025-10-07T14:37:43Z|01226|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 up in Southbound
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.032 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], port_security=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe72:c44a/64 2001:db8::f816:3eff:fe72:c44a/64', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a175a4f-4ef8-469e-b213-5f8d404858c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.049 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e654d743-3b2b-45cf-aadc-28a246f2615e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.074 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2466e32f-9f0a-4920-9c6c-c73c58ff6141]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.109 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0004c75d-d8e8-46f7-8037-ea11f3e5d29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 NetworkManager[44949]: <info>  [1759847863.1184] manager: (tap5dfb73c9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/495)
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.119 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a60130f-9c2c-4d69-897c-61a1ff60ddb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.160 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ba2a46-108e-4831-8df5-57e6c82e8779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.164 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb4c6f-985a-4930-9176-db28c7127e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 NetworkManager[44949]: <info>  [1759847863.1920] device (tap5dfb73c9-a0): carrier: link connected
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.200 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fa99dd88-cfd9-4f30-b8fe-e4215a9301f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 podman[385211]: 2025-10-07 14:37:43.219269568 +0000 UTC m=+0.060680342 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.221 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdf6152-6003-40f7-bff3-6ca69c58b16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 21117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 172, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 172, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385253, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3ebbcd-3af1-4c84-8fe3-7c77eb2d4144]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:4411'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844675, 'tstamp': 844675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385260, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7024e23f-23c1-40cb-9734-3973353bb3b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 21117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 172, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 172, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385265, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 podman[385217]: 2025-10-07 14:37:43.277053822 +0000 UTC m=+0.118500927 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef5421-a272-4ff9-918f-52f7cab64802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.294 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.294 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b48da5d8-c84d-4502-895a-020c47cadc61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.340 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.340 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.341 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:43 compute-0 NetworkManager[44949]: <info>  [1759847863.3431] manager: (tap5dfb73c9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Oct 07 14:37:43 compute-0 kernel: tap5dfb73c9-a0: entered promiscuous mode
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:43 compute-0 ovn_controller[151684]: 2025-10-07T14:37:43Z|01227|binding|INFO|Releasing lport 30dd4552-fdd6-4d17-af87-77adcec53278 from this chassis (sb_readonly=0)
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.360 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.361 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d909bccd-1a4f-477e-9f47-025bc0bef358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.361 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-5dfb73c9-a89b-4659-8761-7d887493b39b
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 5dfb73c9-a89b-4659-8761-7d887493b39b
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:37:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.362 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'env', 'PROCESS_TAG=haproxy-5dfb73c9-a89b-4659-8761-7d887493b39b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5dfb73c9-a89b-4659-8761-7d887493b39b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.390 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.512 2 DEBUG nova.compute.manager [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.513 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.513 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.514 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.514 2 DEBUG nova.compute.manager [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Processing event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.540 2 DEBUG nova.compute.manager [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.542 2 DEBUG nova.compute.manager [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Processing event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.546 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.547 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.556 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.556 2 INFO nova.compute.claims [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:37:43 compute-0 nova_compute[259550]: 2025-10-07 14:37:43.740 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:43 compute-0 podman[385340]: 2025-10-07 14:37:43.729742228 +0000 UTC m=+0.022341648 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:37:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:37:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2630454737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:44 compute-0 podman[385340]: 2025-10-07 14:37:44.182846956 +0000 UTC m=+0.475446366 container create c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.206 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.213 2 DEBUG nova.compute.provider_tree [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.262 2 DEBUG nova.scheduler.client.report [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.281 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.281388, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.282 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Started (Lifecycle Event)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.284 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.285 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.286 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.290 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.294 2 INFO nova.virt.libvirt.driver [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance spawned successfully.
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.294 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.299 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.303 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:44 compute-0 systemd[1]: Started libpod-conmon-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope.
Oct 07 14:37:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c393a5ba60ac8c97a1f1c72230373f0ef69c5c9c49d0c8f32905833660b064/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct 07 14:37:44 compute-0 ceph-mon[74295]: pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:37:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2630454737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:37:44 compute-0 podman[385340]: 2025-10-07 14:37:44.38625769 +0000 UTC m=+0.678857110 container init c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:37:44 compute-0 podman[385340]: 2025-10-07 14:37:44.392400504 +0000 UTC m=+0.684999914 container start c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:37:44 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : New worker (385385) forked
Oct 07 14:37:44 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : Loading success.
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.452 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a175a4f-4ef8-469e-b213-5f8d404858c8 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.457 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6edcebf0-2045-4d17-a0db-6590c2ecd13a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.471 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c956141-61 in ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.471 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.2819824, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.472 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Paused (Lifecycle Event)
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.472 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c956141-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c75fde23-0aa9-4920-b8a2-4cde4eb970e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e090b79f-0d5b-4535-a6ad-0e7476b620cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.477 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.477 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.478 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.478 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.479 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.479 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.484 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.484 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.487 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[14befe37-04c4-4abf-93b6-4fdf4bf5b0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[263f76b1-53a5-43e0-bf67-b1e98fb6bb5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.529 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a5cfad-e652-4594-bc45-add80244923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.535 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[762fd037-3a32-48e7-a089-0b493f4e706d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 NetworkManager[44949]: <info>  [1759847864.5364] manager: (tap4c956141-60): new Veth device (/org/freedesktop/NetworkManager/Devices/497)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.551 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.555 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.2872374, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.556 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Resumed (Lifecycle Event)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.567 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.578 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8fdaea-5b58-4219-aaeb-179b520d80fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.582 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[294f7f2d-952b-486f-a442-74011726b887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.589 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.592 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:44 compute-0 NetworkManager[44949]: <info>  [1759847864.6053] device (tap4c956141-60): carrier: link connected
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.611 2 INFO nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 14.41 seconds to spawn the instance on the hypervisor.
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.612 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.611 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5872c887-c037-4b65-ab7d-0fb80ea12b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.627 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[485d541b-1980-44d0-92b2-5f6cd203d709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 27638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385404, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.650 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e650f099-de47-4862-befa-7213f7dc18b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:a229'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844816, 'tstamp': 844816}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385405, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.666 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.667 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49f0f459-2618-4544-a385-bab667464f0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 27638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385406, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdd7179-77bc-43f7-a8a2-ee630f2a8d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.718 2 INFO nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 16.63 seconds to build instance.
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.734 2 DEBUG nova.policy [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.752 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea6c730-5def-4d6a-a45c-fa094f9c3b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:44 compute-0 NetworkManager[44949]: <info>  [1759847864.7723] manager: (tap4c956141-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct 07 14:37:44 compute-0 kernel: tap4c956141-60: entered promiscuous mode
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:44 compute-0 ovn_controller[151684]: 2025-10-07T14:37:44Z|01228|binding|INFO|Releasing lport ec328f15-1843-4594-8d39-b0d2d9796360 from this chassis (sb_readonly=0)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.779 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae399c8a-7d8f-45e9-b673-5a9ced2bfd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.781 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4c956141-6a21-499d-99b1-885d1a2972f7
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4c956141-6a21-499d-99b1-885d1a2972f7
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:37:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.782 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'env', 'PROCESS_TAG=haproxy-4c956141-6a21-499d-99b1-885d1a2972f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c956141-6a21-499d-99b1-885d1a2972f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.918 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.919 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.919 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating image(s)
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.939 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.964 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.988 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:44 compute-0 nova_compute[259550]: 2025-10-07 14:37:44.992 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.031 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.087 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.089 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.089 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.090 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.121 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.128 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c0eb8730-2b26-4cc0-8a9c-019688db568f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:45 compute-0 podman[385507]: 2025-10-07 14:37:45.185394244 +0000 UTC m=+0.057593481 container create 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:37:45 compute-0 systemd[1]: Started libpod-conmon-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope.
Oct 07 14:37:45 compute-0 podman[385507]: 2025-10-07 14:37:45.15494105 +0000 UTC m=+0.027140317 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a337889f89fdae52e14dc13504ac936b7577e6b7adfb2aa670a8f40235199adf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:45 compute-0 podman[385507]: 2025-10-07 14:37:45.293590735 +0000 UTC m=+0.165790002 container init 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 07 14:37:45 compute-0 podman[385507]: 2025-10-07 14:37:45.300106889 +0000 UTC m=+0.172306126 container start 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 14:37:45 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : New worker (385550) forked
Oct 07 14:37:45 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : Loading success.
Oct 07 14:37:45 compute-0 ceph-mon[74295]: pgmap v2259: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.452 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c0eb8730-2b26-4cc0-8a9c-019688db568f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.519 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.604 2 DEBUG nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.606 2 DEBUG nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.606 2 WARNING nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with vm_state active and task_state None.
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.611 2 DEBUG nova.objects.instance [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.628 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.628 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Ensure instance console log exists: /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.629 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.629 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.630 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.674 2 DEBUG nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.676 2 WARNING nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with vm_state active and task_state None.
Oct 07 14:37:45 compute-0 nova_compute[259550]: 2025-10-07 14:37:45.779 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Successfully created port: 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:37:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Oct 07 14:37:46 compute-0 nova_compute[259550]: 2025-10-07 14:37:46.969 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Successfully updated port: 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:37:47 compute-0 ovn_controller[151684]: 2025-10-07T14:37:47Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:38:87 10.100.0.6
Oct 07 14:37:47 compute-0 ovn_controller[151684]: 2025-10-07T14:37:47Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:38:87 10.100.0.6
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.025 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.026 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.026 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.236 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:37:47 compute-0 ceph-mon[74295]: pgmap v2260: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.779 2 DEBUG nova.compute.manager [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.780 2 DEBUG nova.compute.manager [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.780 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:47 compute-0 nova_compute[259550]: 2025-10-07 14:37:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.329 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.438 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.439 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance network_info: |[{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.440 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.440 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.442 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start _get_guest_xml network_info=[{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.448 2 WARNING nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.455 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.456 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.461 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.463 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.463 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.469 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.807 2 DEBUG nova.compute.manager [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.808 2 DEBUG nova.compute.manager [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.809 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.810 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.810 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831155830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.912 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.931 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:48 compute-0 nova_compute[259550]: 2025-10-07 14:37:48.935 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:37:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552038153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.367 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.371 2 DEBUG nova.virt.libvirt.vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.372 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.373 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.375 2 DEBUG nova.objects.instance [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:49 compute-0 ceph-mon[74295]: pgmap v2261: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Oct 07 14:37:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2831155830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2552038153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.565 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <uuid>c0eb8730-2b26-4cc0-8a9c-019688db568f</uuid>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <name>instance-00000075</name>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-528486164</nova:name>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:37:48</nova:creationTime>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <nova:port uuid="1c96bebd-0f68-48d9-9bab-486d6e56cb4e">
Oct 07 14:37:49 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <system>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="serial">c0eb8730-2b26-4cc0-8a9c-019688db568f</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="uuid">c0eb8730-2b26-4cc0-8a9c-019688db568f</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </system>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <os>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </os>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <features>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </features>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c0eb8730-2b26-4cc0-8a9c-019688db568f_disk">
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config">
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:37:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:46:76:3f"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <target dev="tap1c96bebd-0f"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/console.log" append="off"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <video>
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </video>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:37:49 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:37:49 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:37:49 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:37:49 compute-0 nova_compute[259550]: </domain>
Oct 07 14:37:49 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.571 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Preparing to wait for external event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.571 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.572 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.572 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.573 2 DEBUG nova.virt.libvirt.vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.573 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.575 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.575 2 DEBUG os_vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c96bebd-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c96bebd-0f, col_values=(('external_ids', {'iface-id': '1c96bebd-0f68-48d9-9bab-486d6e56cb4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:76:3f', 'vm-uuid': 'c0eb8730-2b26-4cc0-8a9c-019688db568f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:49 compute-0 NetworkManager[44949]: <info>  [1759847869.5824] manager: (tap1c96bebd-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.589 2 INFO os_vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f')
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.720 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.721 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.721 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:46:76:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.721 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Using config drive
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.739 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:49 compute-0 nova_compute[259550]: 2025-10-07 14:37:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.085 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.086 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.143 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.160 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating config drive at /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.164 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro2in8z9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.304 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro2in8z9" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.328 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.331 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:37:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 210 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.405 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.406 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.612 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.755 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.756 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deleting local config drive /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config because it was imported into RBD.
Oct 07 14:37:50 compute-0 NetworkManager[44949]: <info>  [1759847870.8165] manager: (tap1c96bebd-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Oct 07 14:37:50 compute-0 kernel: tap1c96bebd-0f: entered promiscuous mode
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:50 compute-0 ovn_controller[151684]: 2025-10-07T14:37:50Z|01229|binding|INFO|Claiming lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e for this chassis.
Oct 07 14:37:50 compute-0 ovn_controller[151684]: 2025-10-07T14:37:50Z|01230|binding|INFO|1c96bebd-0f68-48d9-9bab-486d6e56cb4e: Claiming fa:16:3e:46:76:3f 10.100.0.13
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.854 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:76:3f 10.100.0.13'], port_security=['fa:16:3e:46:76:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0eb8730-2b26-4cc0-8a9c-019688db568f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ddf5ea1d-9677-4e1c-8a3a-0f9543177677', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1c96bebd-0f68-48d9-9bab-486d6e56cb4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.855 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a bound to our chassis
Oct 07 14:37:50 compute-0 ovn_controller[151684]: 2025-10-07T14:37:50Z|01231|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e ovn-installed in OVS
Oct 07 14:37:50 compute-0 ovn_controller[151684]: 2025-10-07T14:37:50Z|01232|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e up in Southbound
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:50 compute-0 systemd-machined[214580]: New machine qemu-146-instance-00000075.
Oct 07 14:37:50 compute-0 systemd-udevd[385767]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8be62b-5375-4086-b8ab-6c5905b9467a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.872 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4bd15a72-c1 in ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:37:50 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000075.
Oct 07 14:37:50 compute-0 NetworkManager[44949]: <info>  [1759847870.8759] device (tap1c96bebd-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.874 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4bd15a72-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d686052-eee5-486b-834e-182c62b8a465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 NetworkManager[44949]: <info>  [1759847870.8777] device (tap1c96bebd-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8b66e0-be24-4587-ba80-89260ae816b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.894 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3b043d58-f6d3-4a4e-b4a5-a1084d981f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.911 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99f7dd8d-126e-42eb-9677-32c089502826]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.939 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31c45dc9-b8b0-4135-bf15-83a454b90bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[812f676e-c81f-4770-ade3-75b45f084046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 systemd-udevd[385770]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:37:50 compute-0 NetworkManager[44949]: <info>  [1759847870.9500] manager: (tap4bd15a72-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/501)
Oct 07 14:37:50 compute-0 nova_compute[259550]: 2025-10-07 14:37:50.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.989 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf9de0b-c3f1-4002-9c31-4f64f82b96d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.993 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd33759-e5a4-4ce7-a59f-f76dc9de16d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 NetworkManager[44949]: <info>  [1759847871.0401] device (tap4bd15a72-c0): carrier: link connected
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.042 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5723b261-1955-4a5d-9f51-3f6394ce9d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.063 2 DEBUG nova.compute.manager [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.063 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[158d9750-10a7-40d2-a3dc-85afa0910dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 38757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385800, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.065 2 DEBUG nova.compute.manager [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Processing event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.084 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[19d4b464-7856-46a2-8b9a-74a26d2d2fe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:db12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845459, 'tstamp': 845459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385801, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ea7c8-a6f9-4768-a252-d7faa7b32a52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 38757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385802, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bd86e8-c556-46e2-a373-df35ed7281c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4ce2e8-41df-42c4-bc86-d79e0af96c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.219 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:51 compute-0 NetworkManager[44949]: <info>  [1759847871.2213] manager: (tap4bd15a72-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Oct 07 14:37:51 compute-0 kernel: tap4bd15a72-c0: entered promiscuous mode
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.232 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:51 compute-0 ovn_controller[151684]: 2025-10-07T14:37:51Z|01233|binding|INFO|Releasing lport 818ca059-c8de-4f85-a524-8f47c8fbf780 from this chassis (sb_readonly=0)
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.235 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07b39efb-b1a3-4f7e-8916-dadab92f3db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.249 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:37:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.250 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'env', 'PROCESS_TAG=haproxy-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4bd15a72-ce65-4737-b705-4b2b86d3a32a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:37:51 compute-0 ceph-mon[74295]: pgmap v2262: 305 pgs: 305 active+clean; 210 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Oct 07 14:37:51 compute-0 podman[385874]: 2025-10-07 14:37:51.615792616 +0000 UTC m=+0.046816031 container create 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:37:51 compute-0 systemd[1]: Started libpod-conmon-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope.
Oct 07 14:37:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6812d5d6ccd623c1c8db2e48873105455dd3e4c1a071313da6b9108e14c4dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:37:51 compute-0 podman[385874]: 2025-10-07 14:37:51.591107297 +0000 UTC m=+0.022130752 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:37:51 compute-0 podman[385874]: 2025-10-07 14:37:51.700552791 +0000 UTC m=+0.131576226 container init 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:37:51 compute-0 podman[385874]: 2025-10-07 14:37:51.707109466 +0000 UTC m=+0.138132881 container start 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:37:51 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : New worker (385895) forked
Oct 07 14:37:51 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : Loading success.
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.910 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.911 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9107678, c0eb8730-2b26-4cc0-8a9c-019688db568f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.912 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Started (Lifecycle Event)
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.935 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.939 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.941 2 INFO nova.virt.libvirt.driver [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance spawned successfully.
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.942 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.956 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9108791, c0eb8730-2b26-4cc0-8a9c-019688db568f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Paused (Lifecycle Event)
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.963 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.964 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.964 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.965 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.965 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.966 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.971 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.974 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9153705, c0eb8730-2b26-4cc0-8a9c-019688db568f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:37:51 compute-0 nova_compute[259550]: 2025-10-07 14:37:51.974 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Resumed (Lifecycle Event)
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.011 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.014 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.029 2 INFO nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 7.11 seconds to spawn the instance on the hypervisor.
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.030 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.039 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.088 2 INFO nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 8.57 seconds to build instance.
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.105 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 213 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:37:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:37:52 compute-0 nova_compute[259550]: 2025-10-07 14:37:52.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.002264) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873002336, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 564, "num_deletes": 255, "total_data_size": 585179, "memory_usage": 597096, "flush_reason": "Manual Compaction"}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873007340, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 579927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47180, "largest_seqno": 47743, "table_properties": {"data_size": 576855, "index_size": 1044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6984, "raw_average_key_size": 18, "raw_value_size": 570728, "raw_average_value_size": 1505, "num_data_blocks": 47, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847834, "oldest_key_time": 1759847834, "file_creation_time": 1759847873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5125 microseconds, and 2862 cpu microseconds.
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.007395) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 579927 bytes OK
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.007420) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009329) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009353) EVENT_LOG_v1 {"time_micros": 1759847873009346, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 582008, prev total WAL file size 582008, number of live WAL files 2.
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373631' seq:72057594037927935, type:22 .. '6C6F676D0032303132' seq:0, type:0; will stop at (end)
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(566KB)], [107(10067KB)]
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873010015, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10888772, "oldest_snapshot_seqno": -1}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6860 keys, 10765465 bytes, temperature: kUnknown
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873083487, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10765465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10717401, "index_size": 29840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178498, "raw_average_key_size": 26, "raw_value_size": 10592311, "raw_average_value_size": 1544, "num_data_blocks": 1173, "num_entries": 6860, "num_filter_entries": 6860, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.083882) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10765465 bytes
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.085680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.7 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(37.3) write-amplify(18.6) OK, records in: 7378, records dropped: 518 output_compression: NoCompression
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.085700) EVENT_LOG_v1 {"time_micros": 1759847873085692, "job": 64, "event": "compaction_finished", "compaction_time_micros": 73699, "compaction_time_cpu_micros": 28345, "output_level": 6, "num_output_files": 1, "total_output_size": 10765465, "num_input_records": 7378, "num_output_records": 6860, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873086474, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873089047, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.131 2 DEBUG nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.132 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.132 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.133 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.133 2 DEBUG nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.133 2 WARNING nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received unexpected event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with vm_state active and task_state None.
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.937 2 INFO nova.compute.manager [None req-f5b37fd7-aad1-440f-be72-695b56954266 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.942 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:37:53 compute-0 nova_compute[259550]: 2025-10-07 14:37:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:54 compute-0 ceph-mon[74295]: pgmap v2263: 305 pgs: 305 active+clean; 213 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.183 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.184 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.184 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.189 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.190 2 DEBUG nova.objects.instance [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.218 2 DEBUG nova.virt.libvirt.driver [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:37:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Oct 07 14:37:54 compute-0 nova_compute[259550]: 2025-10-07 14:37:54.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:56 compute-0 ceph-mon[74295]: pgmap v2264: 305 pgs: 305 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Oct 07 14:37:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct 07 14:37:56 compute-0 kernel: tap6e86ce79-9f (unregistering): left promiscuous mode
Oct 07 14:37:56 compute-0 NetworkManager[44949]: <info>  [1759847876.5507] device (tap6e86ce79-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:56 compute-0 ovn_controller[151684]: 2025-10-07T14:37:56Z|01234|binding|INFO|Releasing lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 from this chassis (sb_readonly=0)
Oct 07 14:37:56 compute-0 ovn_controller[151684]: 2025-10-07T14:37:56Z|01235|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 down in Southbound
Oct 07 14:37:56 compute-0 ovn_controller[151684]: 2025-10-07T14:37:56Z|01236|binding|INFO|Removing iface tap6e86ce79-9f ovn-installed in OVS
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.586 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.587 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 unbound from our chassis
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.589 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28efb734-0152-4914-9f31-b818d894be70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.590 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c61eb6-b60a-4f15-b75b-8b824676b230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace which is not needed anymore
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:56 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 07 14:37:56 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 13.834s CPU time.
Oct 07 14:37:56 compute-0 systemd-machined[214580]: Machine qemu-144-instance-00000073 terminated.
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : haproxy version is 2.8.14-c23fe91
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : path to executable is /usr/sbin/haproxy
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : Exiting Master process...
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : Exiting Master process...
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [ALERT]    (385033) : Current worker (385035) exited with code 143 (Terminated)
Oct 07 14:37:56 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : All workers exited. Exiting... (0)
Oct 07 14:37:56 compute-0 systemd[1]: libpod-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope: Deactivated successfully.
Oct 07 14:37:56 compute-0 podman[385926]: 2025-10-07 14:37:56.774861269 +0000 UTC m=+0.083640117 container died ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef-userdata-shm.mount: Deactivated successfully.
Oct 07 14:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a358810f9f8251b9e86fcc8b171db7901750b59cf1dbd689c2d91dd47284bb72-merged.mount: Deactivated successfully.
Oct 07 14:37:56 compute-0 podman[385926]: 2025-10-07 14:37:56.870006641 +0000 UTC m=+0.178785489 container cleanup ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:37:56 compute-0 systemd[1]: libpod-conmon-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope: Deactivated successfully.
Oct 07 14:37:56 compute-0 podman[385963]: 2025-10-07 14:37:56.989452972 +0000 UTC m=+0.093679434 container remove ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.989 2 DEBUG nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:37:56 compute-0 nova_compute[259550]: 2025-10-07 14:37:56.992 2 WARNING nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state powering-off.
Oct 07 14:37:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.998 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e15ae09b-5b4d-49bc-97c9-2f6dfeacd899]: (4, ('Tue Oct  7 02:37:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef)\nff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef\nTue Oct  7 02:37:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef)\nff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb03fc6-0ca7-4896-a0fd-4479cadcba8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.002 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:57 compute-0 kernel: tap28efb734-00: left promiscuous mode
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.035 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd79608f-063b-43a5-bd79-b0ec937cc722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.070 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99644421-7d72-4695-8977-760d4afb416e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.072 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13aa0d89-e436-4889-a241-9f2f32b54ba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.098 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37b37bab-efee-4563-886e-832d0432c288]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843632, 'reachable_time': 34076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385984, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d28efb734\x2d0152\x2d4914\x2d9f31\x2db818d894be70.mount: Deactivated successfully.
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.102 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28efb734-0152-4914-9f31-b818d894be70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:37:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.103 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a0795a74-28db-49ae-82e8-4e5600fdec08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.237 2 INFO nova.virt.libvirt.driver [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance shutdown successfully after 3 seconds.
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.242 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.243 2 DEBUG nova.objects.instance [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.264 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.331 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:37:57 compute-0 nova_compute[259550]: 2025-10-07 14:37:57.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:57 compute-0 ovn_controller[151684]: 2025-10-07T14:37:57Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:bc:87 10.100.0.12
Oct 07 14:37:57 compute-0 ovn_controller[151684]: 2025-10-07T14:37:57Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:bc:87 10.100.0.12
Oct 07 14:37:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:37:58 compute-0 ceph-mon[74295]: pgmap v2265: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct 07 14:37:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.164 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.166 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.318 2 INFO nova.compute.manager [None req-29e9f3a0-2a3b-49d3-849b-5f74be629d4f 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.693 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.733 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG nova.network.neutron [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'info_cache' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:37:59 compute-0 nova_compute[259550]: 2025-10-07 14:37:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:38:00 compute-0 ceph-mon[74295]: pgmap v2266: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Oct 07 14:38:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:00 compute-0 podman[385985]: 2025-10-07 14:38:00.080019254 +0000 UTC m=+0.068492991 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:38:00 compute-0 podman[385986]: 2025-10-07 14:38:00.099475233 +0000 UTC m=+0.086770599 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 07 14:38:00 compute-0 nova_compute[259550]: 2025-10-07 14:38:00.150 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 235 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 250 op/s
Oct 07 14:38:00 compute-0 nova_compute[259550]: 2025-10-07 14:38:00.868 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:00 compute-0 nova_compute[259550]: 2025-10-07 14:38:00.869 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.192 2 DEBUG nova.network.neutron [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.277 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.279 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.279 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.279 2 WARNING nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state stopped and task_state None.
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.362 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.446 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.447 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.691 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.820 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.821 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.822 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.822 2 DEBUG os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e86ce79-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.829 2 INFO os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.835 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start _get_guest_xml network_info=[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.839 2 WARNING nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.846 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.847 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.850 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.851 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.852 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.852 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.856 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:38:01 compute-0 nova_compute[259550]: 2025-10-07 14:38:01.856 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:02 compute-0 ceph-mon[74295]: pgmap v2267: 305 pgs: 305 active+clean; 235 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 250 op/s
Oct 07 14:38:02 compute-0 nova_compute[259550]: 2025-10-07 14:38:02.121 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct 07 14:38:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481707439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:02 compute-0 nova_compute[259550]: 2025-10-07 14:38:02.598 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:02 compute-0 nova_compute[259550]: 2025-10-07 14:38:02.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:02 compute-0 nova_compute[259550]: 2025-10-07 14:38:02.634 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690858199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1481707439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.109 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.111 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.112 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.112 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.114 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.130 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <uuid>4e64a021-390b-4a0c-bb4c-75a19f274777</uuid>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <name>instance-00000073</name>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1824501664</nova:name>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:38:01</nova:creationTime>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <nova:port uuid="6e86ce79-9f1b-4e53-8ae5-918e8402b8c6">
Oct 07 14:38:03 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="serial">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="uuid">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk">
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config">
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6d:38:87"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <target dev="tap6e86ce79-9f"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log" append="off"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:38:03 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:38:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:38:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:38:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:38:03 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.132 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.132 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.133 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.134 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.134 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.135 2 DEBUG os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e86ce79-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e86ce79-9f, col_values=(('external_ids', {'iface-id': '6e86ce79-9f1b-4e53-8ae5-918e8402b8c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:38:87', 'vm-uuid': '4e64a021-390b-4a0c-bb4c-75a19f274777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.2268] manager: (tap6e86ce79-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.234 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.237 2 INFO os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.251 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.252 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:03 compute-0 kernel: tap6e86ce79-9f: entered promiscuous mode
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.3207] manager: (tap6e86ce79-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct 07 14:38:03 compute-0 ovn_controller[151684]: 2025-10-07T14:38:03Z|01237|binding|INFO|Claiming lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for this chassis.
Oct 07 14:38:03 compute-0 ovn_controller[151684]: 2025-10-07T14:38:03Z|01238|binding|INFO|6e86ce79-9f1b-4e53-8ae5-918e8402b8c6: Claiming fa:16:3e:6d:38:87 10.100.0.6
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.330 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.331 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 bound to our chassis
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.332 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:38:03 compute-0 ovn_controller[151684]: 2025-10-07T14:38:03Z|01239|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 ovn-installed in OVS
Oct 07 14:38:03 compute-0 ovn_controller[151684]: 2025-10-07T14:38:03Z|01240|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 up in Southbound
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.346 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c092479-0f87-4e3c-88e2-9beb76ece58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.347 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28efb734-01 in ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:38:03 compute-0 systemd-udevd[386102]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.349 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28efb734-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.350 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[101265fd-f328-48ae-b298-2b115f28b66c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.353 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a479bd2a-6855-4a0e-8ca7-03d98c4c09f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 systemd-machined[214580]: New machine qemu-147-instance-00000073.
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.3633] device (tap6e86ce79-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.3643] device (tap6e86ce79-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.366 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[51d3288d-642c-4227-a5c4-1a3813ec9135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000073.
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbe98a1-e611-48af-bc22-0f3197741aae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.537 2 DEBUG nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.538 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.538 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.540 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.541 2 DEBUG nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.542 2 WARNING nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state stopped and task_state powering-on.
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.668 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4b048e41-7be6-44e8-b3fe-ad20371219e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.6759] manager: (tap28efb734-00): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1b2dce-7441-42e5-aa9a-b4b299769037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.727 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6f17e931-5dae-4347-9b26-b4bd941c2b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.731 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c1842400-40f3-4611-b3db-644741d992d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.7571] device (tap28efb734-00): carrier: link connected
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02b56e85-4433-44a5-b582-0f0f848f4cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:38:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191226254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.792 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3462ffbc-353d-4ace-9c64-244c3ebc9182]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846732, 'reachable_time': 20712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386155, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.809 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.809 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de8c342b-9e50-4d1e-81cd-1c2991367513]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b7c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 846732, 'tstamp': 846732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386157, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e64a243a-b47a-412d-9ff3-40392fa4acef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846732, 'reachable_time': 20712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386158, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1777d21d-2e5e-4cff-9e5b-5c5d08b3c612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe6ef2e-57f3-449d-8f5a-8a0c37ee32ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.937 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.937 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.938 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28efb734-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 kernel: tap28efb734-00: entered promiscuous mode
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 NetworkManager[44949]: <info>  [1759847883.9410] manager: (tap28efb734-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.943 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28efb734-00, col_values=(('external_ids', {'iface-id': '54dfc7fb-c548-460a-8b73-708819b15ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 ovn_controller[151684]: 2025-10-07T14:38:03Z|01241|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct 07 14:38:03 compute-0 nova_compute[259550]: 2025-10-07 14:38:03.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.966 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.967 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23fd636c-ce59-451f-b69a-1dd7496a83aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.969 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 28efb734-0152-4914-9f31-b818d894be70
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:38:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.970 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'env', 'PROCESS_TAG=haproxy-28efb734-0152-4914-9f31-b818d894be70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28efb734-0152-4914-9f31-b818d894be70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:38:04 compute-0 ceph-mon[74295]: pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct 07 14:38:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/690858199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4191226254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.234 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.234 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.239 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.240 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.245 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.246 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:38:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 252 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 159 op/s
Oct 07 14:38:04 compute-0 podman[386190]: 2025-10-07 14:38:04.432909704 +0000 UTC m=+0.064169955 container create 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:38:04 compute-0 systemd[1]: Started libpod-conmon-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope.
Oct 07 14:38:04 compute-0 podman[386190]: 2025-10-07 14:38:04.400828348 +0000 UTC m=+0.032088629 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:38:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdfd20c34e8e463ac3bd8f75949ae3c20d5cfcc6b96b2e99a9b1bbbdec631c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.513 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3335MB free_disk=59.876399993896484GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:04 compute-0 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:04 compute-0 podman[386190]: 2025-10-07 14:38:04.528728045 +0000 UTC m=+0.159988316 container init 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:38:04 compute-0 podman[386190]: 2025-10-07 14:38:04.535309431 +0000 UTC m=+0.166569682 container start 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 14:38:04 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : New worker (386211) forked
Oct 07 14:38:04 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : Loading success.
Oct 07 14:38:04 compute-0 ovn_controller[151684]: 2025-10-07T14:38:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:76:3f 10.100.0.13
Oct 07 14:38:04 compute-0 ovn_controller[151684]: 2025-10-07T14:38:04Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:76:3f 10.100.0.13
Oct 07 14:38:04 compute-0 sudo[386220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:04 compute-0 sudo[386220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:04 compute-0 sudo[386220]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:04 compute-0 sudo[386281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:38:04 compute-0 sudo[386281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:04 compute-0 sudo[386281]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:04 compute-0 sudo[386311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:04 compute-0 sudo[386311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:04 compute-0 sudo[386311]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:05 compute-0 sudo[386337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:38:05 compute-0 sudo[386337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.378 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4e64a021-390b-4a0c-bb4c-75a19f274777 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.379 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847885.3781834, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Resumed (Lifecycle Event)
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.381 2 DEBUG nova.compute.manager [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.384 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance rebooted successfully.
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.384 2 DEBUG nova.compute.manager [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.536 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:05 compute-0 sudo[386337]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0f6b1e7a-4755-4e93-b739-78f2e2d967b6 does not exist
Oct 07 14:38:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e7596c5-248f-4641-bf37-05e5942e1788 does not exist
Oct 07 14:38:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 95565ae4-8791-427e-885a-8a0608957652 does not exist
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.684 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847885.3808265, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Started (Lifecycle Event)
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:38:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:38:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:38:05 compute-0 sudo[386394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:05 compute-0 sudo[386394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:05 compute-0 sudo[386394]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e64a021-390b-4a0c-bb4c-75a19f274777 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c0eb8730-2b26-4cc0-8a9c-019688db568f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:38:05 compute-0 sudo[386419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:38:05 compute-0 sudo[386419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:05 compute-0 sudo[386419]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.839 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:05 compute-0 sudo[386444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:05 compute-0 sudo[386444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:05 compute-0 sudo[386444]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.894 2 DEBUG nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.895 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.897 2 WARNING nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state None.
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:05 compute-0 nova_compute[259550]: 2025-10-07 14:38:05.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:05 compute-0 sudo[386470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:38:05 compute-0 sudo[386470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:06 compute-0 ceph-mon[74295]: pgmap v2269: 305 pgs: 305 active+clean; 252 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 159 op/s
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:38:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:38:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:38:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843257544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.347919324 +0000 UTC m=+0.092606595 container create 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:38:06 compute-0 nova_compute[259550]: 2025-10-07 14:38:06.364 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Oct 07 14:38:06 compute-0 nova_compute[259550]: 2025-10-07 14:38:06.372 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.278661904 +0000 UTC m=+0.023349195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:06 compute-0 nova_compute[259550]: 2025-10-07 14:38:06.391 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:38:06 compute-0 systemd[1]: Started libpod-conmon-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope.
Oct 07 14:38:06 compute-0 nova_compute[259550]: 2025-10-07 14:38:06.422 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:38:06 compute-0 nova_compute[259550]: 2025-10-07 14:38:06.422 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.464660774 +0000 UTC m=+0.209348075 container init 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.471453175 +0000 UTC m=+0.216140446 container start 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:38:06 compute-0 nervous_chatelet[386575]: 167 167
Oct 07 14:38:06 compute-0 systemd[1]: libpod-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope: Deactivated successfully.
Oct 07 14:38:06 compute-0 conmon[386575]: conmon 5ac6d37017f3527d7211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope/container/memory.events
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.499373151 +0000 UTC m=+0.244060432 container attach 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.49968706 +0000 UTC m=+0.244374331 container died 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-df55622f615ab0c411ea86f6cb8876ea52ecb4e6afe83845133ae6bdef8c975e-merged.mount: Deactivated successfully.
Oct 07 14:38:06 compute-0 podman[386557]: 2025-10-07 14:38:06.608673552 +0000 UTC m=+0.353360823 container remove 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:38:06 compute-0 systemd[1]: libpod-conmon-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope: Deactivated successfully.
Oct 07 14:38:06 compute-0 podman[386599]: 2025-10-07 14:38:06.818810366 +0000 UTC m=+0.046230636 container create b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:38:06 compute-0 systemd[1]: Started libpod-conmon-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope.
Oct 07 14:38:06 compute-0 podman[386599]: 2025-10-07 14:38:06.800636651 +0000 UTC m=+0.028056951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:06 compute-0 podman[386599]: 2025-10-07 14:38:06.930311466 +0000 UTC m=+0.157731756 container init b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 07 14:38:06 compute-0 podman[386599]: 2025-10-07 14:38:06.938225478 +0000 UTC m=+0.165645768 container start b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:38:06 compute-0 podman[386599]: 2025-10-07 14:38:06.941830214 +0000 UTC m=+0.169250514 container attach b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:38:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2843257544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:07 compute-0 nova_compute[259550]: 2025-10-07 14:38:07.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:08 compute-0 brave_jemison[386615]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:38:08 compute-0 brave_jemison[386615]: --> relative data size: 1.0
Oct 07 14:38:08 compute-0 brave_jemison[386615]: --> All data devices are unavailable
Oct 07 14:38:08 compute-0 systemd[1]: libpod-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Deactivated successfully.
Oct 07 14:38:08 compute-0 systemd[1]: libpod-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Consumed 1.140s CPU time.
Oct 07 14:38:08 compute-0 podman[386599]: 2025-10-07 14:38:08.147467669 +0000 UTC m=+1.374887939 container died b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:38:08 compute-0 nova_compute[259550]: 2025-10-07 14:38:08.153 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:08 compute-0 nova_compute[259550]: 2025-10-07 14:38:08.178 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:08 compute-0 nova_compute[259550]: 2025-10-07 14:38:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:08 compute-0 ceph-mon[74295]: pgmap v2270: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Oct 07 14:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38-merged.mount: Deactivated successfully.
Oct 07 14:38:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 4.0 MiB/s wr, 99 op/s
Oct 07 14:38:08 compute-0 podman[386599]: 2025-10-07 14:38:08.483408585 +0000 UTC m=+1.710828845 container remove b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:38:08 compute-0 systemd[1]: libpod-conmon-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Deactivated successfully.
Oct 07 14:38:08 compute-0 sudo[386470]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:08 compute-0 sudo[386657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:08 compute-0 sudo[386657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:08 compute-0 sudo[386657]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:08 compute-0 sudo[386682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:38:08 compute-0 sudo[386682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:08 compute-0 sudo[386682]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:08 compute-0 sudo[386707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:08 compute-0 sudo[386707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:08 compute-0 sudo[386707]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:08 compute-0 sudo[386732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:38:08 compute-0 sudo[386732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.097 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.098 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.131 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.211869072 +0000 UTC m=+0.042170918 container create d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.240 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.240 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.248 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.248 2 INFO nova.compute.claims [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:38:09 compute-0 systemd[1]: Started libpod-conmon-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope.
Oct 07 14:38:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.195168096 +0000 UTC m=+0.025469962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.310197159 +0000 UTC m=+0.140499025 container init d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.317075383 +0000 UTC m=+0.147377229 container start d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.320813322 +0000 UTC m=+0.151115178 container attach d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:38:09 compute-0 gracious_carver[386815]: 167 167
Oct 07 14:38:09 compute-0 systemd[1]: libpod-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope: Deactivated successfully.
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.323489244 +0000 UTC m=+0.153791090 container died d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb89a884e12e2cb6ee72a606a5beb4c0b3d7a71f1f32a4a85d99f4a47e3576d0-merged.mount: Deactivated successfully.
Oct 07 14:38:09 compute-0 podman[386798]: 2025-10-07 14:38:09.383847637 +0000 UTC m=+0.214149483 container remove d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:38:09 compute-0 systemd[1]: libpod-conmon-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope: Deactivated successfully.
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.493 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:09 compute-0 podman[386840]: 2025-10-07 14:38:09.603134937 +0000 UTC m=+0.052851444 container create a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:38:09 compute-0 systemd[1]: Started libpod-conmon-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope.
Oct 07 14:38:09 compute-0 podman[386840]: 2025-10-07 14:38:09.577682167 +0000 UTC m=+0.027398704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:09 compute-0 podman[386840]: 2025-10-07 14:38:09.70204138 +0000 UTC m=+0.151757917 container init a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:38:09 compute-0 podman[386840]: 2025-10-07 14:38:09.710165776 +0000 UTC m=+0.159882283 container start a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:38:09 compute-0 podman[386840]: 2025-10-07 14:38:09.714491443 +0000 UTC m=+0.164207980 container attach a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.772 2 INFO nova.compute.manager [None req-8b0ee77c-b5c0-413c-bbf2-c43ba7718c3b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Get console output
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.780 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:38:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:38:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163642183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.969 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:09 compute-0 nova_compute[259550]: 2025-10-07 14:38:09.978 2 DEBUG nova.compute.provider_tree [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.016 2 DEBUG nova.scheduler.client.report [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.054 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.055 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.103 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.104 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.131 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.153 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.242 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.244 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.244 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating image(s)
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.272 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.310 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:10 compute-0 ceph-mon[74295]: pgmap v2271: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 4.0 MiB/s wr, 99 op/s
Oct 07 14:38:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3163642183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.355 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.359 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 279 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 165 op/s
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.475 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.476 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.477 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.477 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.504 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.512 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:10 compute-0 trusting_banzai[386876]: {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     "0": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "devices": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "/dev/loop3"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             ],
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_name": "ceph_lv0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_size": "21470642176",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "name": "ceph_lv0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "tags": {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_name": "ceph",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.crush_device_class": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.encrypted": "0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_id": "0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.vdo": "0"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             },
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "vg_name": "ceph_vg0"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         }
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     ],
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     "1": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "devices": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "/dev/loop4"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             ],
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_name": "ceph_lv1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_size": "21470642176",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "name": "ceph_lv1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "tags": {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_name": "ceph",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.crush_device_class": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.encrypted": "0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_id": "1",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.vdo": "0"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             },
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "vg_name": "ceph_vg1"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         }
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     ],
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     "2": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "devices": [
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "/dev/loop5"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             ],
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_name": "ceph_lv2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_size": "21470642176",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "name": "ceph_lv2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "tags": {
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.cluster_name": "ceph",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.crush_device_class": "",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.encrypted": "0",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osd_id": "2",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:                 "ceph.vdo": "0"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             },
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "type": "block",
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:             "vg_name": "ceph_vg2"
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:         }
Oct 07 14:38:10 compute-0 trusting_banzai[386876]:     ]
Oct 07 14:38:10 compute-0 trusting_banzai[386876]: }
Oct 07 14:38:10 compute-0 systemd[1]: libpod-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope: Deactivated successfully.
Oct 07 14:38:10 compute-0 podman[386840]: 2025-10-07 14:38:10.659222576 +0000 UTC m=+1.108939093 container died a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6-merged.mount: Deactivated successfully.
Oct 07 14:38:10 compute-0 podman[386840]: 2025-10-07 14:38:10.731540149 +0000 UTC m=+1.181256656 container remove a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.739 2 DEBUG nova.policy [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:38:10 compute-0 systemd[1]: libpod-conmon-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope: Deactivated successfully.
Oct 07 14:38:10 compute-0 sudo[386732]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:10 compute-0 sudo[386994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:10 compute-0 sudo[386994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:10 compute-0 sudo[386994]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:10 compute-0 sudo[387019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:38:10 compute-0 sudo[387019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:10 compute-0 sudo[387019]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:10 compute-0 nova_compute[259550]: 2025-10-07 14:38:10.945 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:10 compute-0 sudo[387044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:10 compute-0 sudo[387044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:10 compute-0 sudo[387044]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.017 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:38:11 compute-0 sudo[387101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:38:11 compute-0 sudo[387101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.114 2 DEBUG nova.objects.instance [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.158 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.159 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Ensure instance console log exists: /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.159 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.160 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:11 compute-0 nova_compute[259550]: 2025-10-07 14:38:11.160 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.384098985 +0000 UTC m=+0.040215135 container create a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:38:11 compute-0 systemd[1]: Started libpod-conmon-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope.
Oct 07 14:38:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.365962961 +0000 UTC m=+0.022079141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.461730529 +0000 UTC m=+0.117846709 container init a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.469868237 +0000 UTC m=+0.125984387 container start a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:38:11 compute-0 nice_herschel[387221]: 167 167
Oct 07 14:38:11 compute-0 systemd[1]: libpod-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope: Deactivated successfully.
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.476623038 +0000 UTC m=+0.132739208 container attach a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.477224714 +0000 UTC m=+0.133340864 container died a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5abfc1ab273f3a11bd4dbbcd78e5cd262ce9e46d1441f07dd377d82b193ed81-merged.mount: Deactivated successfully.
Oct 07 14:38:11 compute-0 podman[387205]: 2025-10-07 14:38:11.512825175 +0000 UTC m=+0.168941325 container remove a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:38:11 compute-0 systemd[1]: libpod-conmon-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope: Deactivated successfully.
Oct 07 14:38:11 compute-0 podman[387247]: 2025-10-07 14:38:11.7367965 +0000 UTC m=+0.043703159 container create d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:38:11 compute-0 systemd[1]: Started libpod-conmon-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope.
Oct 07 14:38:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:38:11 compute-0 podman[387247]: 2025-10-07 14:38:11.718317806 +0000 UTC m=+0.025224445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:38:11 compute-0 podman[387247]: 2025-10-07 14:38:11.827047721 +0000 UTC m=+0.133954390 container init d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:38:11 compute-0 podman[387247]: 2025-10-07 14:38:11.835297261 +0000 UTC m=+0.142203900 container start d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:38:11 compute-0 podman[387247]: 2025-10-07 14:38:11.839008141 +0000 UTC m=+0.145914790 container attach d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:38:12 compute-0 ceph-mon[74295]: pgmap v2272: 305 pgs: 305 active+clean; 279 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 165 op/s
Oct 07 14:38:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.598 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully created port: ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:12 compute-0 strange_turing[387264]: {
Oct 07 14:38:12 compute-0 strange_turing[387264]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_id": 2,
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "type": "bluestore"
Oct 07 14:38:12 compute-0 strange_turing[387264]:     },
Oct 07 14:38:12 compute-0 strange_turing[387264]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_id": 1,
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "type": "bluestore"
Oct 07 14:38:12 compute-0 strange_turing[387264]:     },
Oct 07 14:38:12 compute-0 strange_turing[387264]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_id": 0,
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:38:12 compute-0 strange_turing[387264]:         "type": "bluestore"
Oct 07 14:38:12 compute-0 strange_turing[387264]:     }
Oct 07 14:38:12 compute-0 strange_turing[387264]: }
Oct 07 14:38:12 compute-0 systemd[1]: libpod-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Deactivated successfully.
Oct 07 14:38:12 compute-0 systemd[1]: libpod-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Consumed 1.004s CPU time.
Oct 07 14:38:12 compute-0 podman[387247]: 2025-10-07 14:38:12.846293615 +0000 UTC m=+1.153200254 container died d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.910 2 DEBUG nova.compute.manager [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.912 2 DEBUG nova.compute.manager [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468-merged.mount: Deactivated successfully.
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.912 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.913 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:12 compute-0 nova_compute[259550]: 2025-10-07 14:38:12.913 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:13 compute-0 podman[387247]: 2025-10-07 14:38:13.038038849 +0000 UTC m=+1.344945488 container remove d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:38:13 compute-0 systemd[1]: libpod-conmon-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Deactivated successfully.
Oct 07 14:38:13 compute-0 sudo[387101]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:38:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:38:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 19b7a85f-fa64-4dc5-b708-924760ea4793 does not exist
Oct 07 14:38:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cf005306-4ca7-4f9c-ae49-64ea8ef05575 does not exist
Oct 07 14:38:13 compute-0 sudo[387311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:38:13 compute-0 sudo[387311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:13 compute-0 sudo[387311]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:13 compute-0 nova_compute[259550]: 2025-10-07 14:38:13.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:13 compute-0 sudo[387336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:38:13 compute-0 sudo[387336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:38:13 compute-0 sudo[387336]: pam_unix(sudo:session): session closed for user root
Oct 07 14:38:13 compute-0 podman[387360]: 2025-10-07 14:38:13.392076609 +0000 UTC m=+0.083385488 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 07 14:38:13 compute-0 podman[387361]: 2025-10-07 14:38:13.40595602 +0000 UTC m=+0.094076934 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:38:13 compute-0 nova_compute[259550]: 2025-10-07 14:38:13.674 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully created port: 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:38:14 compute-0 ceph-mon[74295]: pgmap v2273: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Oct 07 14:38:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:38:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 310 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 161 op/s
Oct 07 14:38:15 compute-0 nova_compute[259550]: 2025-10-07 14:38:15.885 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:15 compute-0 nova_compute[259550]: 2025-10-07 14:38:15.887 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:15 compute-0 nova_compute[259550]: 2025-10-07 14:38:15.964 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.345 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully updated port: ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:38:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.498 2 DEBUG nova.compute.manager [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.499 2 DEBUG nova.compute.manager [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.500 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.500 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.501 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:16 compute-0 ceph-mon[74295]: pgmap v2274: 305 pgs: 305 active+clean; 310 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 161 op/s
Oct 07 14:38:16 compute-0 nova_compute[259550]: 2025-10-07 14:38:16.790 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.657 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.759 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.811 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully updated port: 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.876 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.877 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:17 compute-0 nova_compute[259550]: 2025-10-07 14:38:17.878 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:38:17 compute-0 ceph-mon[74295]: pgmap v2275: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Oct 07 14:38:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:18 compute-0 nova_compute[259550]: 2025-10-07 14:38:18.074 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:38:18 compute-0 nova_compute[259550]: 2025-10-07 14:38:18.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 07 14:38:18 compute-0 nova_compute[259550]: 2025-10-07 14:38:18.601 2 DEBUG nova.compute.manager [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:18 compute-0 nova_compute[259550]: 2025-10-07 14:38:18.602 2 DEBUG nova.compute.manager [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-1ee1b68d-6081-4e61-a797-c2e41ac53a29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:18 compute-0 nova_compute[259550]: 2025-10-07 14:38:18.602 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:20 compute-0 ceph-mon[74295]: pgmap v2276: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 07 14:38:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.074 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.352 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.352 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance network_info: |[{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 
6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.353 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.353 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.357 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start _get_guest_xml network_info=[{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.362 2 WARNING nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.370 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.371 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.374 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.374 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.380 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:21 compute-0 ceph-mon[74295]: pgmap v2277: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 07 14:38:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407483761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.899 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.937 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:21 compute-0 nova_compute[259550]: 2025-10-07 14:38:21.944 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 07 14:38:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451410457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.418 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.419 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.420 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.420 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.421 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.421 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.422 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.423 2 DEBUG nova.objects.instance [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.475 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.475 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.556 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <uuid>4c61749b-b18d-4fbe-b99c-90e15ced9469</uuid>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <name>instance-00000076</name>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1890244122</nova:name>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:38:21</nova:creationTime>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:port uuid="ad5643dc-1b43-4ffa-b380-8ee6fbac98fe">
Oct 07 14:38:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <nova:port uuid="1ee1b68d-6081-4e61-a797-c2e41ac53a29">
Oct 07 14:38:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe32:23b" ipVersion="6"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe32:23b" ipVersion="6"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <system>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="serial">4c61749b-b18d-4fbe-b99c-90e15ced9469</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="uuid">4c61749b-b18d-4fbe-b99c-90e15ced9469</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </system>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <os>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </os>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <features>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </features>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4c61749b-b18d-4fbe-b99c-90e15ced9469_disk">
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config">
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:f4:e8:e8"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <target dev="tapad5643dc-1b"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:32:02:3b"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <target dev="tap1ee1b68d-60"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/console.log" append="off"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <video>
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </video>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:38:22 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:38:22 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:38:22 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:38:22 compute-0 nova_compute[259550]: </domain>
Oct 07 14:38:22 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Preparing to wait for external event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Preparing to wait for external event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.561 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.561 2 DEBUG os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad5643dc-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad5643dc-1b, col_values=(('external_ids', {'iface-id': 'ad5643dc-1b43-4ffa-b380-8ee6fbac98fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e8:e8', 'vm-uuid': '4c61749b-b18d-4fbe-b99c-90e15ced9469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 NetworkManager[44949]: <info>  [1759847902.5688] manager: (tapad5643dc-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.577 2 INFO os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b')
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.578 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.579 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.579 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ee1b68d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ee1b68d-60, col_values=(('external_ids', {'iface-id': '1ee1b68d-6081-4e61-a797-c2e41ac53a29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:02:3b', 'vm-uuid': '4c61749b-b18d-4fbe-b99c-90e15ced9469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 NetworkManager[44949]: <info>  [1759847902.5854] manager: (tap1ee1b68d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.596 2 INFO os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60')
Oct 07 14:38:22 compute-0 nova_compute[259550]: 2025-10-07 14:38:22.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:38:22
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Oct 07 14:38:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:38:22 compute-0 ovn_controller[151684]: 2025-10-07T14:38:22Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:38:87 10.100.0.6
Oct 07 14:38:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/407483761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1451410457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:38:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.079 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.314 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.314 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.315 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:f4:e8:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.315 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:32:02:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.316 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Using config drive
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.336 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.725 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.725 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.732 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:38:23 compute-0 nova_compute[259550]: 2025-10-07 14:38:23.732 2 INFO nova.compute.claims [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:38:24 compute-0 ceph-mon[74295]: pgmap v2278: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 07 14:38:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 07 14:38:24 compute-0 nova_compute[259550]: 2025-10-07 14:38:24.742 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:38:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304747458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.190 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.195 2 DEBUG nova.compute.provider_tree [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.279 2 DEBUG nova.scheduler.client.report [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.410 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.411 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.619 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.620 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.764 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port 1ee1b68d-6081-4e61-a797-c2e41ac53a29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.764 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:25 compute-0 nova_compute[259550]: 2025-10-07 14:38:25.768 2 DEBUG nova.policy [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.018 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.023 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.113 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:38:26 compute-0 ceph-mon[74295]: pgmap v2279: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 07 14:38:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1304747458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.329 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating config drive at /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.334 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9487p5s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 679 KiB/s wr, 44 op/s
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.514 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9487p5s3" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.540 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.544 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.579 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.581 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.581 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating image(s)
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.607 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.636 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.660 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.665 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.737 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.738 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.739 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.739 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.761 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:26 compute-0 nova_compute[259550]: 2025-10-07 14:38:26.764 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:27 compute-0 nova_compute[259550]: 2025-10-07 14:38:27.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:27 compute-0 nova_compute[259550]: 2025-10-07 14:38:27.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:27 compute-0 ceph-mon[74295]: pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 679 KiB/s wr, 44 op/s
Oct 07 14:38:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 16 KiB/s wr, 42 op/s
Oct 07 14:38:28 compute-0 nova_compute[259550]: 2025-10-07 14:38:28.718 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Successfully created port: 134818f1-9848-45e7-ac27-c5290f58f87f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:38:28 compute-0 nova_compute[259550]: 2025-10-07 14:38:28.788 2 INFO nova.compute.manager [None req-96acce94-cb44-4df9-a5c9-a72d4d0867de 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output
Oct 07 14:38:28 compute-0 nova_compute[259550]: 2025-10-07 14:38:28.796 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.345 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.345 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deleting local config drive /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config because it was imported into RBD.
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.3952] manager: (tapad5643dc-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Oct 07 14:38:29 compute-0 kernel: tapad5643dc-1b: entered promiscuous mode
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01242|binding|INFO|Claiming lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for this chassis.
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01243|binding|INFO|ad5643dc-1b43-4ffa-b380-8ee6fbac98fe: Claiming fa:16:3e:f4:e8:e8 10.100.0.14
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.4184] manager: (tap1ee1b68d-60): new Tun device (/org/freedesktop/NetworkManager/Devices/510)
Oct 07 14:38:29 compute-0 kernel: tap1ee1b68d-60: entered promiscuous mode
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01244|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe ovn-installed in OVS
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01245|if_status|INFO|Dropped 5 log messages in last 47 seconds (most recently, 47 seconds ago) due to excessive rate
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01246|if_status|INFO|Not updating pb chassis for 1ee1b68d-6081-4e61-a797-c2e41ac53a29 now as sb is readonly
Oct 07 14:38:29 compute-0 systemd-udevd[387660]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:38:29 compute-0 systemd-udevd[387659]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.4573] device (tapad5643dc-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.4580] device (tap1ee1b68d-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:38:29 compute-0 systemd-machined[214580]: New machine qemu-148-instance-00000076.
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.4586] device (tapad5643dc-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:38:29 compute-0 NetworkManager[44949]: <info>  [1759847909.4590] device (tap1ee1b68d-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:38:29 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01247|binding|INFO|Claiming lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 for this chassis.
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01248|binding|INFO|1ee1b68d-6081-4e61-a797-c2e41ac53a29: Claiming fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01249|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe up in Southbound
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01250|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 ovn-installed in OVS
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e8:e8 10.100.0.14'], port_security=['fa:16:3e:f4:e8:e8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b bound to our chassis
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.647 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4996dd5-7867-4aa4-918c-f721005635df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.691 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae6de16-8a32-4a37-ad34-d07537643677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.694 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf7e92d-b805-4137-ba04-28db463dfacc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0023808-9a1d-46da-aa40-77c0880d12e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.742 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec0337-7d8e-4d86-8b75-9995a1a91805]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 5, 'rx_bytes': 986, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 5, 'rx_bytes': 986, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 28533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387683, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.751 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], port_security=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe32:23b/64 2001:db8::f816:3eff:fe32:23b/64', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1ee1b68d-6081-4e61-a797-c2e41ac53a29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:29 compute-0 ovn_controller[151684]: 2025-10-07T14:38:29Z|01251|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 up in Southbound
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c41e03e-09e5-4d22-91e3-db8da9a8ce93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844686, 'tstamp': 844686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387693, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844689, 'tstamp': 844689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387693, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.785 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.815 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 bound to our chassis
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.816 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4a8770-2476-40a9-87db-ff3033a4f2e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.867 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7397394a-edf9-413e-8f4c-ba235748524a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[047e9ab7-f4bf-4196-b07c-78d7dcc9a6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 nova_compute[259550]: 2025-10-07 14:38:29.903 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.907 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec4dd48-3d59-4efb-a8f9-507c25217ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.929 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[500bf550-9925-4c64-87e4-e617c877b27b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 18374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387764, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f770757c-d0e8-4ee9-8823-7865bdcf92ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4c956141-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844830, 'tstamp': 844830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387774, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.946 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:30 compute-0 ceph-mon[74295]: pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 16 KiB/s wr, 42 op/s
Oct 07 14:38:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 337 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 623 KiB/s wr, 59 op/s
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.576 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847910.5753996, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Started (Lifecycle Event)
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.730 2 DEBUG nova.objects.instance [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.754 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847910.5755756, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.754 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Paused (Lifecycle Event)
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.979 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.980 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Ensure instance console log exists: /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.980 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.981 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.981 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:30 compute-0 nova_compute[259550]: 2025-10-07 14:38:30.989 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:31 compute-0 podman[387801]: 2025-10-07 14:38:31.089725319 +0000 UTC m=+0.064499684 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:38:31 compute-0 podman[387800]: 2025-10-07 14:38:31.0897494 +0000 UTC m=+0.064922486 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG nova.compute.manager [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.548 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.548 2 DEBUG nova.compute.manager [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Processing event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.668 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:38:31 compute-0 nova_compute[259550]: 2025-10-07 14:38:31.684 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Successfully updated port: 134818f1-9848-45e7-ac27-c5290f58f87f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.039 2 DEBUG nova.compute.manager [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG nova.compute.manager [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Processing event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.041 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.050 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.051 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847912.0503786, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.051 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Resumed (Lifecycle Event)
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.056 2 INFO nova.virt.libvirt.driver [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance spawned successfully.
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.057 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:38:32 compute-0 ceph-mon[74295]: pgmap v2282: 305 pgs: 305 active+clean; 337 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 623 KiB/s wr, 59 op/s
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 355 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.4 MiB/s wr, 67 op/s
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.559 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.565 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.570 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.571 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.572 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.572 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.573 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.573 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002973883041935274 of space, bias 1.0, pg target 0.8921649125805822 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:38:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:38:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:38:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:38:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.764 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.765 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.765 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.919 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.969 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.969 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.972 2 INFO nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Terminating instance
Oct 07 14:38:32 compute-0 nova_compute[259550]: 2025-10-07 14:38:32.973 2 DEBUG nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:38:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.061 2 INFO nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 22.82 seconds to spawn the instance on the hypervisor.
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.062 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.114 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:38:33 compute-0 kernel: tap6e86ce79-9f (unregistering): left promiscuous mode
Oct 07 14:38:33 compute-0 NetworkManager[44949]: <info>  [1759847913.1660] device (tap6e86ce79-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 ovn_controller[151684]: 2025-10-07T14:38:33Z|01252|binding|INFO|Releasing lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 from this chassis (sb_readonly=0)
Oct 07 14:38:33 compute-0 ovn_controller[151684]: 2025-10-07T14:38:33Z|01253|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 down in Southbound
Oct 07 14:38:33 compute-0 ovn_controller[151684]: 2025-10-07T14:38:33Z|01254|binding|INFO|Removing iface tap6e86ce79-9f ovn-installed in OVS
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 07 14:38:33 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Consumed 19.088s CPU time.
Oct 07 14:38:33 compute-0 systemd-machined[214580]: Machine qemu-147-instance-00000073 terminated.
Oct 07 14:38:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:38:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:38:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.350 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.351 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 unbound from our chassis
Oct 07 14:38:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.353 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28efb734-0152-4914-9f31-b818d894be70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:38:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.354 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40f86114-61ba-4678-a1de-d57a1a3bee1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.354 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace which is not needed anymore
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.405 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.405 2 DEBUG nova.objects.instance [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : haproxy version is 2.8.14-c23fe91
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : path to executable is /usr/sbin/haproxy
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : Exiting Master process...
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : Exiting Master process...
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [ALERT]    (386209) : Current worker (386211) exited with code 143 (Terminated)
Oct 07 14:38:33 compute-0 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : All workers exited. Exiting... (0)
Oct 07 14:38:33 compute-0 systemd[1]: libpod-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope: Deactivated successfully.
Oct 07 14:38:33 compute-0 podman[387868]: 2025-10-07 14:38:33.533734345 +0000 UTC m=+0.082837695 container died 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240-userdata-shm.mount: Deactivated successfully.
Oct 07 14:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bdfd20c34e8e463ac3bd8f75949ae3c20d5cfcc6b96b2e99a9b1bbbdec631c1-merged.mount: Deactivated successfully.
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.699 2 INFO nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 24.48 seconds to build instance.
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.702 2 DEBUG nova.virt.libvirt.vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:05Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.702 2 DEBUG nova.network.os_vif_util [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.703 2 DEBUG nova.network.os_vif_util [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.704 2 DEBUG os_vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e86ce79-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.713 2 INFO os_vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')
Oct 07 14:38:33 compute-0 podman[387868]: 2025-10-07 14:38:33.887196499 +0000 UTC m=+0.436299839 container cleanup 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:38:33 compute-0 systemd[1]: libpod-conmon-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope: Deactivated successfully.
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.962 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.966 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:33 compute-0 nova_compute[259550]: 2025-10-07 14:38:33.968 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:34 compute-0 podman[387917]: 2025-10-07 14:38:34.281126455 +0000 UTC m=+0.359982020 container remove 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f00cc3dd-e221-4e40-8980-7ca3d4272997]: (4, ('Tue Oct  7 02:38:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240)\n0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240\nTue Oct  7 02:38:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240)\n0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 nova_compute[259550]: 2025-10-07 14:38:34.291 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c118bf27-fc96-4250-8130-c75138316fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:34 compute-0 kernel: tap28efb734-00: left promiscuous mode
Oct 07 14:38:34 compute-0 nova_compute[259550]: 2025-10-07 14:38:34.294 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing instance network info cache due to event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:34 compute-0 nova_compute[259550]: 2025-10-07 14:38:34.300 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:34 compute-0 nova_compute[259550]: 2025-10-07 14:38:34.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:34 compute-0 nova_compute[259550]: 2025-10-07 14:38:34.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e82d568f-88a3-4382-a5d3-d7898179c2e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 ceph-mon[74295]: pgmap v2283: 305 pgs: 305 active+clean; 355 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.4 MiB/s wr, 67 op/s
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.348 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f181f26-31c3-4b86-84d5-17daeee0c616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.349 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a036f543-1a3a-4339-9952-fd3c9912c4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.369 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e17c6e04-83e0-4426-bf0e-0730ab920d68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846722, 'reachable_time': 37603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387931, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d28efb734\x2d0152\x2d4914\x2d9f31\x2db818d894be70.mount: Deactivated successfully.
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.373 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28efb734-0152-4914-9f31-b818d894be70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:38:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.374 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[06c56437-afde-4141-8c43-e6d18ca086f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 374 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 07 14:38:35 compute-0 ceph-mon[74295]: pgmap v2284: 305 pgs: 305 active+clean; 374 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.733 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.833 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.833 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance network_info: |[{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.834 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.834 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.837 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start _get_guest_xml network_info=[{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.840 2 WARNING nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.845 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.846 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.850 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.850 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.851 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.851 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.852 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.852 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.853 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.853 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.855 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.855 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:38:35 compute-0 nova_compute[259550]: 2025-10-07 14:38:35.858 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.060 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.064 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.196 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.197 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.197 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.198 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.198 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.199 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.199 2 WARNING nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for instance with vm_state active and task_state None.
Oct 07 14:38:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203997159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.313 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.337 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.341 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Oct 07 14:38:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2203997159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:38:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764461806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.816 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.819 2 DEBUG nova.virt.libvirt.vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.819 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.820 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.822 2 DEBUG nova.objects.instance [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.843 2 DEBUG nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.844 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.844 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.845 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.846 2 DEBUG nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.846 2 WARNING nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state deleting.
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.891 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <uuid>716d82da-745f-43ca-a7fa-38f02d3e5dc3</uuid>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <name>instance-00000077</name>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-533021491</nova:name>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:38:35</nova:creationTime>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <nova:port uuid="134818f1-9848-45e7-ac27-c5290f58f87f">
Oct 07 14:38:36 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <system>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="serial">716d82da-745f-43ca-a7fa-38f02d3e5dc3</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="uuid">716d82da-745f-43ca-a7fa-38f02d3e5dc3</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </system>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <os>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </os>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <features>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </features>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk">
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config">
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </source>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:38:36 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:f6:cb:9f"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <target dev="tap134818f1-98"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/console.log" append="off"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <video>
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </video>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:38:36 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:38:36 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:38:36 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:38:36 compute-0 nova_compute[259550]: </domain>
Oct 07 14:38:36 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Preparing to wait for external event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.898 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.899 2 DEBUG nova.virt.libvirt.vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.899 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.900 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.900 2 DEBUG os_vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.904 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap134818f1-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap134818f1-98, col_values=(('external_ids', {'iface-id': '134818f1-9848-45e7-ac27-c5290f58f87f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:cb:9f', 'vm-uuid': '716d82da-745f-43ca-a7fa-38f02d3e5dc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:36 compute-0 NetworkManager[44949]: <info>  [1759847916.9107] manager: (tap134818f1-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:36 compute-0 nova_compute[259550]: 2025-10-07 14:38:36.916 2 INFO os_vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98')
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.040 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.041 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.041 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:f6:cb:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.042 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Using config drive
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.062 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.141 2 INFO nova.virt.libvirt.driver [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deleting instance files /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777_del
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.142 2 INFO nova.virt.libvirt.driver [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deletion of /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777_del complete
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.447 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating config drive at /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.452 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzz7sht2v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.489 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updated VIF entry in instance network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.490 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.497 2 INFO nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 4.52 seconds to destroy the instance on the hypervisor.
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.498 2 DEBUG oslo.service.loopingcall [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.498 2 DEBUG nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.499 2 DEBUG nova.network.neutron [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.591 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.593 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.593 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.593 2 WARNING nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with vm_state active and task_state None.
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.596 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzz7sht2v" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.620 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.625 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.691 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.692 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:38:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.692 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:37 compute-0 nova_compute[259550]: 2025-10-07 14:38:37.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:37 compute-0 ceph-mon[74295]: pgmap v2285: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Oct 07 14:38:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2764461806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:38:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.389 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.391 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deleting local config drive /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config because it was imported into RBD.
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.418 2 DEBUG nova.network.neutron [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.443 2 INFO nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 0.94 seconds to deallocate network for instance.
Oct 07 14:38:38 compute-0 kernel: tap134818f1-98: entered promiscuous mode
Oct 07 14:38:38 compute-0 NetworkManager[44949]: <info>  [1759847918.4513] manager: (tap134818f1-98): new Tun device (/org/freedesktop/NetworkManager/Devices/512)
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:38 compute-0 ovn_controller[151684]: 2025-10-07T14:38:38Z|01255|binding|INFO|Claiming lport 134818f1-9848-45e7-ac27-c5290f58f87f for this chassis.
Oct 07 14:38:38 compute-0 ovn_controller[151684]: 2025-10-07T14:38:38Z|01256|binding|INFO|134818f1-9848-45e7-ac27-c5290f58f87f: Claiming fa:16:3e:f6:cb:9f 10.100.0.11
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.467 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:cb:9f 10.100.0.11'], port_security=['fa:16:3e:f6:cb:9f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '716d82da-745f-43ca-a7fa-38f02d3e5dc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd82525cf-6a8e-4dd1-ac1e-a6b578fadd19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=134818f1-9848-45e7-ac27-c5290f58f87f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.469 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 134818f1-9848-45e7-ac27-c5290f58f87f in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a bound to our chassis
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.470 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct 07 14:38:38 compute-0 ovn_controller[151684]: 2025-10-07T14:38:38Z|01257|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f ovn-installed in OVS
Oct 07 14:38:38 compute-0 ovn_controller[151684]: 2025-10-07T14:38:38Z|01258|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f up in Southbound
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:38 compute-0 systemd-machined[214580]: New machine qemu-149-instance-00000077.
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.489 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a0721-f8ab-4640-b33b-d1184bb321b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.502 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.503 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:38 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000077.
Oct 07 14:38:38 compute-0 systemd-udevd[388072]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.519 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad208eb-48f0-450b-adb2-ca2b1035de42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.526 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f7d97c-fc01-40be-b366-6cc0e408368b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 NetworkManager[44949]: <info>  [1759847918.5311] device (tap134818f1-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:38:38 compute-0 NetworkManager[44949]: <info>  [1759847918.5317] device (tap134818f1-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd0d6ba-c87a-4d50-8c2e-4965e111b5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.580 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[801722f5-757c-47a5-be24-9f96d7787f76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 31245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388079, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.606 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b69f9e1-9b6b-47e2-94a1-64369220fdc0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845473, 'tstamp': 845473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388083, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845476, 'tstamp': 845476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388083, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.608 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.611 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.612 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.612 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.613 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.625 2 DEBUG oslo_concurrency.processutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:38:38 compute-0 nova_compute[259550]: 2025-10-07 14:38:38.942 2 DEBUG nova.compute.manager [req-343dea61-9918-475e-b8bd-40923cd72d77 req-6bd5f156-d2a4-42cb-92f5-9f7366047037 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-deleted-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.025 2 DEBUG nova.compute.manager [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG nova.compute.manager [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Processing event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:38:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:38:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840752674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.120 2 DEBUG oslo_concurrency.processutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.126 2 DEBUG nova.compute.provider_tree [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.146 2 DEBUG nova.scheduler.client.report [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.167 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.190 2 INFO nova.scheduler.client.report [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 4e64a021-390b-4a0c-bb4c-75a19f274777
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.255 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.436 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4362261, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.437 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Started (Lifecycle Event)
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.443 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.446 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.449 2 INFO nova.virt.libvirt.driver [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance spawned successfully.
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.452 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.456 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.459 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.480 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.480 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4374018, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.480 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Paused (Lifecycle Event)
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.485 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.486 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.486 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.487 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.487 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.488 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.498 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.502 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4459107, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.503 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Resumed (Lifecycle Event)
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.524 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.529 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.555 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.574 2 INFO nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 12.99 seconds to spawn the instance on the hypervisor.
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.574 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.640 2 INFO nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 16.27 seconds to build instance.
Oct 07 14:38:39 compute-0 nova_compute[259550]: 2025-10-07 14:38:39.681 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:39 compute-0 ceph-mon[74295]: pgmap v2286: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 07 14:38:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2840752674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:38:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.033 2 DEBUG nova.compute.manager [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.033 2 DEBUG nova.compute.manager [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.034 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.034 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.035 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.116 2 DEBUG nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.117 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.117 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.118 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.119 2 DEBUG nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.121 2 WARNING nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received unexpected event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with vm_state active and task_state None.
Oct 07 14:38:41 compute-0 ceph-mon[74295]: pgmap v2287: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 07 14:38:41 compute-0 nova_compute[259550]: 2025-10-07 14:38:41.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 138 op/s
Oct 07 14:38:42 compute-0 nova_compute[259550]: 2025-10-07 14:38:42.404 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:42 compute-0 nova_compute[259550]: 2025-10-07 14:38:42.404 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:42 compute-0 nova_compute[259550]: 2025-10-07 14:38:42.566 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:42 compute-0 nova_compute[259550]: 2025-10-07 14:38:42.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:43 compute-0 ovn_controller[151684]: 2025-10-07T14:38:43Z|01259|binding|INFO|Releasing lport 30dd4552-fdd6-4d17-af87-77adcec53278 from this chassis (sb_readonly=0)
Oct 07 14:38:43 compute-0 ovn_controller[151684]: 2025-10-07T14:38:43Z|01260|binding|INFO|Releasing lport 818ca059-c8de-4f85-a524-8f47c8fbf780 from this chassis (sb_readonly=0)
Oct 07 14:38:43 compute-0 ovn_controller[151684]: 2025-10-07T14:38:43Z|01261|binding|INFO|Releasing lport ec328f15-1843-4594-8d39-b0d2d9796360 from this chassis (sb_readonly=0)
Oct 07 14:38:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.352 2 DEBUG nova.compute.manager [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG nova.compute.manager [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing instance network info cache due to event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:43 compute-0 nova_compute[259550]: 2025-10-07 14:38:43.354 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:43 compute-0 ceph-mon[74295]: pgmap v2288: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 138 op/s
Oct 07 14:38:44 compute-0 podman[388149]: 2025-10-07 14:38:44.073447549 +0000 UTC m=+0.057027035 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 07 14:38:44 compute-0 podman[388150]: 2025-10-07 14:38:44.106970805 +0000 UTC m=+0.090561721 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:38:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 394 KiB/s wr, 167 op/s
Oct 07 14:38:44 compute-0 nova_compute[259550]: 2025-10-07 14:38:44.865 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updated VIF entry in instance network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:38:44 compute-0 nova_compute[259550]: 2025-10-07 14:38:44.866 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:38:44 compute-0 nova_compute[259550]: 2025-10-07 14:38:44.886 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:38:45 compute-0 ceph-mon[74295]: pgmap v2289: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 394 KiB/s wr, 167 op/s
Oct 07 14:38:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 608 KiB/s wr, 127 op/s
Oct 07 14:38:46 compute-0 ovn_controller[151684]: 2025-10-07T14:38:46Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e8:e8 10.100.0.14
Oct 07 14:38:46 compute-0 ovn_controller[151684]: 2025-10-07T14:38:46Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e8:e8 10.100.0.14
Oct 07 14:38:46 compute-0 nova_compute[259550]: 2025-10-07 14:38:46.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:47 compute-0 nova_compute[259550]: 2025-10-07 14:38:47.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:47 compute-0 ceph-mon[74295]: pgmap v2290: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 608 KiB/s wr, 127 op/s
Oct 07 14:38:47 compute-0 nova_compute[259550]: 2025-10-07 14:38:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.052977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928053042, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 712, "num_deletes": 250, "total_data_size": 838832, "memory_usage": 851520, "flush_reason": "Manual Compaction"}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928057165, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 554084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47744, "largest_seqno": 48455, "table_properties": {"data_size": 550893, "index_size": 1035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8623, "raw_average_key_size": 20, "raw_value_size": 544206, "raw_average_value_size": 1308, "num_data_blocks": 46, "num_entries": 416, "num_filter_entries": 416, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847874, "oldest_key_time": 1759847874, "file_creation_time": 1759847928, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 4245 microseconds, and 2423 cpu microseconds.
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.057230) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 554084 bytes OK
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.057247) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059050) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059065) EVENT_LOG_v1 {"time_micros": 1759847928059061, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 835134, prev total WAL file size 835134, number of live WAL files 2.
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059830) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(541KB)], [110(10MB)]
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928059961, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11319549, "oldest_snapshot_seqno": -1}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6786 keys, 8288376 bytes, temperature: kUnknown
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928106430, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8288376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8244933, "index_size": 25347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 177172, "raw_average_key_size": 26, "raw_value_size": 8125236, "raw_average_value_size": 1197, "num_data_blocks": 987, "num_entries": 6786, "num_filter_entries": 6786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847928, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.106660) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8288376 bytes
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.108831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.3 rd, 178.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(35.4) write-amplify(15.0) OK, records in: 7276, records dropped: 490 output_compression: NoCompression
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.108849) EVENT_LOG_v1 {"time_micros": 1759847928108841, "job": 66, "event": "compaction_finished", "compaction_time_micros": 46528, "compaction_time_cpu_micros": 20787, "output_level": 6, "num_output_files": 1, "total_output_size": 8288376, "num_input_records": 7276, "num_output_records": 6786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928109051, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928110755, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:38:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 608 KiB/s wr, 103 op/s
Oct 07 14:38:48 compute-0 nova_compute[259550]: 2025-10-07 14:38:48.402 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847913.4021986, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:38:48 compute-0 nova_compute[259550]: 2025-10-07 14:38:48.403 2 INFO nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Stopped (Lifecycle Event)
Oct 07 14:38:48 compute-0 nova_compute[259550]: 2025-10-07 14:38:48.425 2 DEBUG nova.compute.manager [None req-3a2b95e7-11a4-4f43-be0e-e1a893d322d6 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:38:49 compute-0 nova_compute[259550]: 2025-10-07 14:38:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:49 compute-0 nova_compute[259550]: 2025-10-07 14:38:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:38:50 compute-0 nova_compute[259550]: 2025-10-07 14:38:50.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:50 compute-0 ceph-mon[74295]: pgmap v2291: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 608 KiB/s wr, 103 op/s
Oct 07 14:38:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 324 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 07 14:38:51 compute-0 nova_compute[259550]: 2025-10-07 14:38:51.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:51 compute-0 nova_compute[259550]: 2025-10-07 14:38:51.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:51 compute-0 nova_compute[259550]: 2025-10-07 14:38:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:52 compute-0 ceph-mon[74295]: pgmap v2292: 305 pgs: 305 active+clean; 324 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:38:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:38:52 compute-0 nova_compute[259550]: 2025-10-07 14:38:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:53 compute-0 nova_compute[259550]: 2025-10-07 14:38:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:54 compute-0 ceph-mon[74295]: pgmap v2293: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 07 14:38:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 328 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Oct 07 14:38:54 compute-0 nova_compute[259550]: 2025-10-07 14:38:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:55 compute-0 ovn_controller[151684]: 2025-10-07T14:38:55Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:cb:9f 10.100.0.11
Oct 07 14:38:55 compute-0 ovn_controller[151684]: 2025-10-07T14:38:55Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:cb:9f 10.100.0.11
Oct 07 14:38:56 compute-0 nova_compute[259550]: 2025-10-07 14:38:56.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:56 compute-0 ceph-mon[74295]: pgmap v2294: 305 pgs: 305 active+clean; 328 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Oct 07 14:38:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Oct 07 14:38:56 compute-0 nova_compute[259550]: 2025-10-07 14:38:56.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:57 compute-0 nova_compute[259550]: 2025-10-07 14:38:57.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:38:58 compute-0 ceph-mon[74295]: pgmap v2295: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Oct 07 14:38:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.479 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.479 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.482 2 INFO nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Terminating instance
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.483 2 DEBUG nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:38:59 compute-0 kernel: tapad5643dc-1b (unregistering): left promiscuous mode
Oct 07 14:38:59 compute-0 NetworkManager[44949]: <info>  [1759847939.5489] device (tapad5643dc-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.551 2 DEBUG nova.compute.manager [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG nova.compute.manager [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.553 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01262|binding|INFO|Releasing lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe from this chassis (sb_readonly=0)
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01263|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe down in Southbound
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01264|binding|INFO|Removing iface tapad5643dc-1b ovn-installed in OVS
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.563 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e8:e8 10.100.0.14'], port_security=['fa:16:3e:f4:e8:e8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.564 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b unbound from our chassis
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.566 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b
Oct 07 14:38:59 compute-0 kernel: tap1ee1b68d-60 (unregistering): left promiscuous mode
Oct 07 14:38:59 compute-0 NetworkManager[44949]: <info>  [1759847939.5794] device (tap1ee1b68d-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aea3755c-ca36-449f-8b35-31c39f125872]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01265|binding|INFO|Releasing lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 from this chassis (sb_readonly=0)
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01266|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 down in Southbound
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_controller[151684]: 2025-10-07T14:38:59Z|01267|binding|INFO|Removing iface tap1ee1b68d-60 ovn-installed in OVS
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.596 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], port_security=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe32:23b/64 2001:db8::f816:3eff:fe32:23b/64', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1ee1b68d-6081-4e61-a797-c2e41ac53a29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9f710601-8b8e-4402-8df7-b6b3da296755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c88f373f-7005-4629-93aa-8870ede9da47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 07 14:38:59 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 16.182s CPU time.
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e8047b-82fd-4d4a-87ea-f5d2e480adf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 systemd-machined[214580]: Machine qemu-148-instance-00000076 terminated.
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.676 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7b62aaaf-ed90-4b60-a2b4-b17743d2258e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1070, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1070, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 28533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388206, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[192ce812-2c15-4323-8e5c-f19d6419ac71]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844686, 'tstamp': 844686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388207, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844689, 'tstamp': 844689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388207, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.699 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 systemd-udevd[388194]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:38:59 compute-0 NetworkManager[44949]: <info>  [1759847939.7048] manager: (tapad5643dc-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.711 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.713 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.728 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[403c237e-295d-47d8-9bd6-6c90c146633d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.740 2 INFO nova.virt.libvirt.driver [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance destroyed successfully.
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.741 2 DEBUG nova.objects.instance [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.761 2 DEBUG nova.virt.libvirt.vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.761 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.762 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.762 2 DEBUG os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.761 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d85100a1-286f-47d7-b01b-2da0d139254c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad5643dc-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9062e0a-d319-42c9-9b81-766746073aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.779 2 INFO os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b')
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.780 2 DEBUG nova.virt.libvirt.vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.780 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.781 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.781 2 DEBUG os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ee1b68d-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.787 2 INFO os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60')
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.796 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddd5dd-d2b5-4dcf-83c6-fba1863c0edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.817 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c37fa6c-5f81-4674-8448-7577373a6ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 18374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388242, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.839 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[26a7a116-475c-48e4-bc04-ebfcbe7ead03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4c956141-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844830, 'tstamp': 844830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388251, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.841 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:38:59 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.846 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:38:59 compute-0 nova_compute[259550]: 2025-10-07 14:38:59.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:39:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.229 2 INFO nova.virt.libvirt.driver [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deleting instance files /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469_del
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.231 2 INFO nova.virt.libvirt.driver [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deletion of /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469_del complete
Oct 07 14:39:00 compute-0 ceph-mon[74295]: pgmap v2296: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.361 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.363 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.363 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:39:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 359 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.418 2 INFO nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.419 2 DEBUG oslo.service.loopingcall [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.419 2 DEBUG nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.420 2 DEBUG nova.network.neutron [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.531 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.532 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.559 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.572 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.573 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.573 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.640 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.641 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.659 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.660 2 INFO nova.compute.claims [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.848 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.866 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.867 2 DEBUG nova.compute.provider_tree [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.879 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:39:00 compute-0 nova_compute[259550]: 2025-10-07 14:39:00.899 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.031 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.375 2 INFO nova.compute.manager [None req-5d792702-e5e6-4f04-8201-e88990916fbb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Get console output
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.389 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:39:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067318881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.504 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.510 2 DEBUG nova.compute.provider_tree [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.524 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.549 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.550 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.605 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.605 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.618 2 DEBUG nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-deleted-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.619 2 INFO nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Neutron deleted interface 1ee1b68d-6081-4e61-a797-c2e41ac53a29; detaching it from the instance and deleting it from the info cache
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.619 2 DEBUG nova.network.neutron [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.631 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.652 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.655 2 DEBUG nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Detach interface failed, port_id=1ee1b68d-6081-4e61-a797-c2e41ac53a29, reason: Instance 4c61749b-b18d-4fbe-b99c-90e15ced9469 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.716 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.716 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.718 2 INFO nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Terminating instance
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.719 2 DEBUG nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.722 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.722 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:01 compute-0 kernel: tap134818f1-98 (unregistering): left promiscuous mode
Oct 07 14:39:01 compute-0 NetworkManager[44949]: <info>  [1759847941.7813] device (tap134818f1-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:01 compute-0 ovn_controller[151684]: 2025-10-07T14:39:01Z|01268|binding|INFO|Releasing lport 134818f1-9848-45e7-ac27-c5290f58f87f from this chassis (sb_readonly=0)
Oct 07 14:39:01 compute-0 ovn_controller[151684]: 2025-10-07T14:39:01Z|01269|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f down in Southbound
Oct 07 14:39:01 compute-0 ovn_controller[151684]: 2025-10-07T14:39:01Z|01270|binding|INFO|Removing iface tap134818f1-98 ovn-installed in OVS
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.791 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.793 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.793 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating image(s)
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.815 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:cb:9f 10.100.0.11'], port_security=['fa:16:3e:f6:cb:9f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '716d82da-745f-43ca-a7fa-38f02d3e5dc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd82525cf-6a8e-4dd1-ac1e-a6b578fadd19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=134818f1-9848-45e7-ac27-c5290f58f87f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.824 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 134818f1-9848-45e7-ac27-c5290f58f87f in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a unbound from our chassis
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8901374c-c5d6-4f29-bebe-03e9ba562b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.865 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:01 compute-0 podman[388279]: 2025-10-07 14:39:01.882420873 +0000 UTC m=+0.060912909 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.885 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3f905200-0dc7-4bd8-aab4-c59b53723882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 podman[388287]: 2025-10-07 14:39:01.889705927 +0000 UTC m=+0.067291218 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.888 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[12f9b98f-5645-4c88-aa91-94cac70ff620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 07 14:39:01 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Consumed 16.868s CPU time.
Oct 07 14:39:01 compute-0 systemd-machined[214580]: Machine qemu-149-instance-00000077 terminated.
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.910 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.922 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[996cbed4-195c-4d89-87a7-d335d6bc9560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.939 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.945 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db5da720-6c2d-4846-90e0-7d506ffe244d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 31245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388372, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.946 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08e46c0f-04c2-438b-81e2-964103c303ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845473, 'tstamp': 845473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388386, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845476, 'tstamp': 845476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388386, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.967 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.973 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.988 2 DEBUG nova.policy [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:39:01 compute-0 nova_compute[259550]: 2025-10-07 14:39:01.997 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.003 2 INFO nova.virt.libvirt.driver [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance destroyed successfully.
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.003 2 DEBUG nova.objects.instance [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.025 2 DEBUG nova.virt.libvirt.vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.026 2 DEBUG nova.network.os_vif_util [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.027 2 DEBUG nova.network.os_vif_util [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.027 2 DEBUG os_vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap134818f1-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.030 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.031 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.032 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.033 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.054 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.057 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.097 2 INFO os_vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98')
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.183 2 DEBUG nova.network.neutron [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.201 2 INFO nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 1.78 seconds to deallocate network for instance.
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.259 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.260 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:02 compute-0 ceph-mon[74295]: pgmap v2297: 305 pgs: 305 active+clean; 359 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct 07 14:39:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3067318881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 334 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.415 2 DEBUG oslo_concurrency.processutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.452 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.517 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.646 2 DEBUG nova.objects.instance [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.659 2 DEBUG nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.661 2 DEBUG nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.661 2 WARNING nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with vm_state deleted and task_state None.
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.671 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.672 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Ensure instance console log exists: /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.672 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.673 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.673 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.716 2 INFO nova.virt.libvirt.driver [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deleting instance files /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3_del
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.717 2 INFO nova.virt.libvirt.driver [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deletion of /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3_del complete
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.743 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Successfully created port: 660a78e9-3d3f-4949-88f4-3cad47b74229 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.774 2 INFO nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 1.05 seconds to destroy the instance on the hypervisor.
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG oslo.service.loopingcall [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG nova.network.neutron [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:39:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559127699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.917 2 DEBUG oslo_concurrency.processutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.923 2 DEBUG nova.compute.provider_tree [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:02 compute-0 nova_compute[259550]: 2025-10-07 14:39:02.997 2 DEBUG nova.scheduler.client.report [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.057 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.089 2 INFO nova.scheduler.client.report [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 4c61749b-b18d-4fbe-b99c-90e15ced9469
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.155 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2559127699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.611 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Successfully updated port: 660a78e9-3d3f-4949-88f4-3cad47b74229 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 WARNING nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for instance with vm_state deleted and task_state None.
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-deleted-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.738 2 WARNING nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received unexpected event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with vm_state active and task_state deleting.
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.739 2 DEBUG nova.network.neutron [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.756 2 INFO nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 0.98 seconds to deallocate network for instance.
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.812 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.812 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.878 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.909 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.910 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.910 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.922 2 DEBUG oslo_concurrency.processutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.958 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:39:03 compute-0 nova_compute[259550]: 2025-10-07 14:39:03.962 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:04 compute-0 ceph-mon[74295]: pgmap v2298: 305 pgs: 305 active+clean; 334 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 07 14:39:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155700411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 277 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.399 2 DEBUG oslo_concurrency.processutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.408 2 DEBUG nova.compute.provider_tree [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.425 2 DEBUG nova.scheduler.client.report [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.503 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.747 2 INFO nova.scheduler.client.report [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 716d82da-745f-43ca-a7fa-38f02d3e5dc3
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.754 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.764 2 INFO nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Terminating instance
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.764 2 DEBUG nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.858 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.858 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance network_info: |[{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG nova.compute.manager [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG nova.compute.manager [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.862 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.862 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.867 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start _get_guest_xml network_info=[{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.873 2 WARNING nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.879 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.880 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.883 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.883 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.887 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:39:04 compute-0 kernel: tap72a26c5e-6c (unregistering): left promiscuous mode
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.891 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:04 compute-0 NetworkManager[44949]: <info>  [1759847944.9042] device (tap72a26c5e-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01271|binding|INFO|Releasing lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b from this chassis (sb_readonly=0)
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01272|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b down in Southbound
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01273|binding|INFO|Removing iface tap72a26c5e-6c ovn-installed in OVS
Oct 07 14:39:04 compute-0 kernel: tap1a175a4f-4e (unregistering): left promiscuous mode
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:04 compute-0 NetworkManager[44949]: <info>  [1759847944.9347] device (tap1a175a4f-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.944 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01274|binding|INFO|Releasing lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 from this chassis (sb_readonly=1)
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01275|binding|INFO|Removing iface tap1a175a4f-4e ovn-installed in OVS
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01276|if_status|INFO|Dropped 4 log messages in last 175 seconds (most recently, 175 seconds ago) due to excessive rate
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01277|if_status|INFO|Not setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 down as sb is readonly
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:04 compute-0 ovn_controller[151684]: 2025-10-07T14:39:04Z|01278|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 down in Southbound
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.951 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:bc:87 10.100.0.12'], port_security=['fa:16:3e:dc:bc:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72a26c5e-6ceb-4ee6-b79b-d105eac8054b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.952 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b unbound from our chassis
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.953 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5dfb73c9-a89b-4659-8761-7d887493b39b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.954 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd78ae3-b9d2-4420-9fab-f1521bc661f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174935593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.954 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b namespace which is not needed anymore
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.969 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], port_security=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe72:c44a/64 2001:db8::f816:3eff:fe72:c44a/64', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a175a4f-4ef8-469e-b213-5f8d404858c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:04 compute-0 nova_compute[259550]: 2025-10-07 14:39:04.992 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:05 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 07 14:39:05 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 17.604s CPU time.
Oct 07 14:39:05 compute-0 systemd-machined[214580]: Machine qemu-145-instance-00000074 terminated.
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : haproxy version is 2.8.14-c23fe91
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : path to executable is /usr/sbin/haproxy
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [WARNING]  (385383) : Exiting Master process...
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [ALERT]    (385383) : Current worker (385385) exited with code 143 (Terminated)
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [WARNING]  (385383) : All workers exited. Exiting... (0)
Oct 07 14:39:05 compute-0 systemd[1]: libpod-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope: Deactivated successfully.
Oct 07 14:39:05 compute-0 podman[388617]: 2025-10-07 14:39:05.094429479 +0000 UTC m=+0.049032261 container died c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69-userdata-shm.mount: Deactivated successfully.
Oct 07 14:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6c393a5ba60ac8c97a1f1c72230373f0ef69c5c9c49d0c8f32905833660b064-merged.mount: Deactivated successfully.
Oct 07 14:39:05 compute-0 podman[388617]: 2025-10-07 14:39:05.14387896 +0000 UTC m=+0.098481732 container cleanup c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:39:05 compute-0 systemd[1]: libpod-conmon-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope: Deactivated successfully.
Oct 07 14:39:05 compute-0 NetworkManager[44949]: <info>  [1759847945.1853] manager: (tap72a26c5e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Oct 07 14:39:05 compute-0 systemd-udevd[388588]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:39:05 compute-0 NetworkManager[44949]: <info>  [1759847945.2101] manager: (tap1a175a4f-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.231 2 INFO nova.virt.libvirt.driver [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance destroyed successfully.
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.232 2 DEBUG nova.objects.instance [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:05 compute-0 podman[388664]: 2025-10-07 14:39:05.237794789 +0000 UTC m=+0.070990818 container remove c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7b7b3a-079e-42f8-87fc-39958f621d57]: (4, ('Tue Oct  7 02:39:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b (c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69)\nc1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69\nTue Oct  7 02:39:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b (c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69)\nc1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1092d0-5d20-46a8-9e76-5daee8cd9f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.254 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.256 2 DEBUG nova.virt.libvirt.vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.257 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:05 compute-0 kernel: tap5dfb73c9-a0: left promiscuous mode
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.258 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.258 2 DEBUG os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72a26c5e-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74d90dd7-b96f-49f1-91ad-571e2d7b95fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.290 2 INFO os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c')
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.291 2 DEBUG nova.virt.libvirt.vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.291 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.292 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.292 2 DEBUG os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a175a4f-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.300 2 INFO os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e')
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[692e3887-a170-440a-9ee2-6b746a551ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.310 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ab747b-1148-4022-b47b-3ef783af7e5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afa5bcf4-2faa-43b8-b330-c80e7b89252d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844666, 'reachable_time': 38561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388705, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d5dfb73c9\x2da89b\x2d4659\x2d8761\x2d7d887493b39b.mount: Deactivated successfully.
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.339 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.339 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0d9b63-87ee-4339-9950-48d5b3c576e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.340 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a175a4f-4ef8-469e-b213-5f8d404858c8 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.342 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c956141-6a21-499d-99b1-885d1a2972f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.343 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c57ef16e-ea64-47a9-9adb-c7c04276c60b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.343 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 namespace which is not needed anymore
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.368 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.369 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:39:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1155700411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:05 compute-0 ceph-mon[74295]: pgmap v2299: 305 pgs: 305 active+clean; 277 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Oct 07 14:39:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2174935593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.373 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.373 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:39:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:39:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618825100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.486 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : haproxy version is 2.8.14-c23fe91
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : path to executable is /usr/sbin/haproxy
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [ALERT]    (385548) : Current worker (385550) exited with code 143 (Terminated)
Oct 07 14:39:05 compute-0 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [WARNING]  (385548) : All workers exited. Exiting... (0)
Oct 07 14:39:05 compute-0 systemd[1]: libpod-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope: Deactivated successfully.
Oct 07 14:39:05 compute-0 podman[388726]: 2025-10-07 14:39:05.513719692 +0000 UTC m=+0.064259688 container died 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.515 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.525 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44-userdata-shm.mount: Deactivated successfully.
Oct 07 14:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a337889f89fdae52e14dc13504ac936b7577e6b7adfb2aa670a8f40235199adf-merged.mount: Deactivated successfully.
Oct 07 14:39:05 compute-0 podman[388726]: 2025-10-07 14:39:05.568602938 +0000 UTC m=+0.119142914 container cleanup 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:39:05 compute-0 systemd[1]: libpod-conmon-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope: Deactivated successfully.
Oct 07 14:39:05 compute-0 podman[388775]: 2025-10-07 14:39:05.635840555 +0000 UTC m=+0.045606329 container remove 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.642 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1280b056-f388-4c36-a2b4-9c90d5393c53]: (4, ('Tue Oct  7 02:39:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 (9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44)\n9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44\nTue Oct  7 02:39:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 (9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44)\n9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2606eb8-344d-4f06-b389-8ecd12402eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.645 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:05 compute-0 kernel: tap4c956141-60: left promiscuous mode
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73276e92-e1d3-4bf4-b0c9-1697b7830209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.756 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e348980f-3fbe-45e3-b378-865bbdd0f5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4628ed-effe-4bdf-ab89-b2b8721a5a61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49813131-7cfe-4db4-a340-8b62489c314f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844808, 'reachable_time': 17355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388809, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.778 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:39:05 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.778 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4993563d-3cb7-4fad-9ac1-d682db8c48b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.783 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.785 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.820438385009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.785 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.786 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-deleted-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.889 2 INFO nova.virt.libvirt.driver [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deleting instance files /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238_del
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.890 2 INFO nova.virt.libvirt.driver [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deletion of /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238_del complete
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c0eb8730-2b26-4cc0-8a9c-019688db568f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:39:05 compute-0 nova_compute[259550]: 2025-10-07 14:39:05.975 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:39:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958821824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.027 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.029 2 DEBUG nova.virt.libvirt.vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:01Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.030 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.031 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.032 2 DEBUG nova.objects.instance [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.078 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <uuid>9996c6b8-7d50-42b8-9617-2a1ae7d36d30</uuid>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <name>instance-00000078</name>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-335561064</nova:name>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:39:04</nova:creationTime>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <nova:port uuid="660a78e9-3d3f-4949-88f4-3cad47b74229">
Oct 07 14:39:06 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="serial">9996c6b8-7d50-42b8-9617-2a1ae7d36d30</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="uuid">9996c6b8-7d50-42b8-9617-2a1ae7d36d30</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk">
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config">
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:39:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b2:2c:15"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <target dev="tap660a78e9-3d"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/console.log" append="off"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:39:06 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:39:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:39:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:39:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:39:06 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Preparing to wait for external event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.081 2 DEBUG nova.virt.libvirt.vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:01Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.081 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.082 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.082 2 DEBUG os_vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660a78e9-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660a78e9-3d, col_values=(('external_ids', {'iface-id': '660a78e9-3d3f-4949-88f4-3cad47b74229', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:2c:15', 'vm-uuid': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:06 compute-0 NetworkManager[44949]: <info>  [1759847946.0896] manager: (tap660a78e9-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.090 2 INFO nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG oslo.service.loopingcall [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.092 2 DEBUG nova.network.neutron [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.095 2 INFO os_vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')
Oct 07 14:39:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d4c956141\x2d6a21\x2d499d\x2d99b1\x2d885d1a2972f7.mount: Deactivated successfully.
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.207 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.207 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.324 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:b2:2c:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.329 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Using config drive
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.346 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3618825100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2958821824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 141 op/s
Oct 07 14:39:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690316980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.420 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.425 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.446 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.475 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.476 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.995 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:06 compute-0 nova_compute[259550]: 2025-10-07 14:39:06.998 2 WARNING nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with vm_state active and task_state deleting.
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.027 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating config drive at /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.032 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57sdmp_3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.179 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57sdmp_3" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.202 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.206 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.369 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.370 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deleting local config drive /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config because it was imported into RBD.
Oct 07 14:39:07 compute-0 ceph-mon[74295]: pgmap v2300: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 141 op/s
Oct 07 14:39:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2690316980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:07 compute-0 kernel: tap660a78e9-3d: entered promiscuous mode
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.4173] manager: (tap660a78e9-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01279|binding|INFO|Claiming lport 660a78e9-3d3f-4949-88f4-3cad47b74229 for this chassis.
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01280|binding|INFO|660a78e9-3d3f-4949-88f4-3cad47b74229: Claiming fa:16:3e:b2:2c:15 10.100.0.7
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.425 2 DEBUG nova.network.neutron [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.4312] device (tap660a78e9-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.4320] device (tap660a78e9-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.430 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.431 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 bound to our chassis
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.432 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01281|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 ovn-installed in OVS
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01282|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 up in Southbound
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.444 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20c38a93-300d-413f-a9a0-99639fdd09c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.446 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd58f5a01-a1 in ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.448 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd58f5a01-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2315f0a-4cf5-4ed7-9f52-8ff85969f951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.449 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[da1c7e4d-e93f-4db5-9542-46f49c700fac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.455 2 INFO nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 1.36 seconds to deallocate network for instance.
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.463 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e5305-4951-4281-89a9-5e23a2096c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 systemd-machined[214580]: New machine qemu-150-instance-00000078.
Oct 07 14:39:07 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000078.
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.485 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd25ea9f-3b9a-4bc2-8cf4-fe01cd1bfe21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.510 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.510 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.515 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb7aba-805c-48af-a2a6-1df8b6d82f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.520 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[328acd5a-b7a1-4a35-8da8-924bafb34f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.5216] manager: (tapd58f5a01-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.544 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.544 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.545 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.545 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.546 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.547 2 INFO nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Terminating instance
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.547 2 DEBUG nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.550 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.564 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae067a-0d4f-444f-9bd6-a6234cb7b7b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.568 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec4ba2a-070d-4cfd-8ee9-26b5e57c3ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.5943] device (tapd58f5a01-a0): carrier: link connected
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.600 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c551d91-f5eb-4ac0-9d12-15fbc922bf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.614 2 DEBUG oslo_concurrency.processutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:07 compute-0 kernel: tap1c96bebd-0f (unregistering): left promiscuous mode
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68f988fb-4a21-4f96-8f77-f73548dd9b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853115, 'reachable_time': 35738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388937, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.6289] device (tap1c96bebd-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01283|binding|INFO|Releasing lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e from this chassis (sb_readonly=0)
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01284|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e down in Southbound
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01285|binding|INFO|Removing iface tap1c96bebd-0f ovn-installed in OVS
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43f7033a-987a-409f-b93d-7cba014852be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:dbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 853115, 'tstamp': 853115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388939, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:76:3f 10.100.0.13'], port_security=['fa:16:3e:46:76:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0eb8730-2b26-4cc0-8a9c-019688db568f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ddf5ea1d-9677-4e1c-8a3a-0f9543177677', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1c96bebd-0f68-48d9-9bab-486d6e56cb4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.653 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.654 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.661 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae5a27-10c2-4423-8e41-65071c94caa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853115, 'reachable_time': 35738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388943, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.684 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.685 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.685 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.687 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.691 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3774608-39c5-4db9-960c-cc5ac9b6f89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 07 14:39:07 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Consumed 16.025s CPU time.
Oct 07 14:39:07 compute-0 systemd-machined[214580]: Machine qemu-146-instance-00000075 terminated.
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab083be-ad9b-44e5-911d-27931fd62d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.760 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd58f5a01-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 NetworkManager[44949]: <info>  [1759847947.7623] manager: (tapd58f5a01-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct 07 14:39:07 compute-0 kernel: tapd58f5a01-a0: entered promiscuous mode
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.770 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd58f5a01-a0, col_values=(('external_ids', {'iface-id': 'd4b37819-62c7-42bc-af8c-24aff0a13de9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_controller[151684]: 2025-10-07T14:39:07Z|01286|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.783 2 INFO nova.virt.libvirt.driver [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance destroyed successfully.
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.784 2 DEBUG nova.objects.instance [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.795 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.796 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee80a20f-3c41-499e-9a63-0c03266b0df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.797 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:39:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.800 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'env', 'PROCESS_TAG=haproxy-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d58f5a01-ad0a-4168-95c9-fc8189cce054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.813 2 DEBUG nova.virt.libvirt.vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:52Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.813 2 DEBUG nova.network.os_vif_util [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.814 2 DEBUG nova.network.os_vif_util [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.814 2 DEBUG os_vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c96bebd-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.820 2 INFO os_vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f')
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.954 2 DEBUG nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.954 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:07 compute-0 nova_compute[259550]: 2025-10-07 14:39:07.956 2 WARNING nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with vm_state deleted and task_state None.
Oct 07 14:39:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901233399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.092 2 DEBUG oslo_concurrency.processutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.101 2 DEBUG nova.compute.provider_tree [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.119 2 DEBUG nova.scheduler.client.report [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.143 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.170 2 INFO nova.scheduler.client.report [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 23c0ce36-9e34-4a73-9f99-3b79f8623238
Oct 07 14:39:08 compute-0 podman[389025]: 2025-10-07 14:39:08.188228236 +0000 UTC m=+0.058401491 container create 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:39:08 compute-0 systemd[1]: Started libpod-conmon-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope.
Oct 07 14:39:08 compute-0 podman[389025]: 2025-10-07 14:39:08.158339398 +0000 UTC m=+0.028512683 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.260 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b59d4dc6edabb37a3c0b5853d080b83aedff571bda483335c1c14eb6b156ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.293 2 INFO nova.virt.libvirt.driver [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deleting instance files /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f_del
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.294 2 INFO nova.virt.libvirt.driver [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deletion of /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f_del complete
Oct 07 14:39:08 compute-0 podman[389025]: 2025-10-07 14:39:08.299813678 +0000 UTC m=+0.169986953 container init 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:08 compute-0 podman[389025]: 2025-10-07 14:39:08.307476352 +0000 UTC m=+0.177649607 container start 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : New worker (389088) forked
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : Loading success.
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.364 2 INFO nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.364 2 DEBUG oslo.service.loopingcall [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.365 2 DEBUG nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.365 2 DEBUG nova.network.neutron [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.367 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a unbound from our chassis
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.368 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.369 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd84780d-4767-4377-bdf6-c008843f9818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a namespace which is not needed anymore
Oct 07 14:39:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 2.7 MiB/s wr, 115 op/s
Oct 07 14:39:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1901233399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : haproxy version is 2.8.14-c23fe91
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : path to executable is /usr/sbin/haproxy
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : Exiting Master process...
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : Exiting Master process...
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [ALERT]    (385893) : Current worker (385895) exited with code 143 (Terminated)
Oct 07 14:39:08 compute-0 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : All workers exited. Exiting... (0)
Oct 07 14:39:08 compute-0 systemd[1]: libpod-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope: Deactivated successfully.
Oct 07 14:39:08 compute-0 podman[389114]: 2025-10-07 14:39:08.498313292 +0000 UTC m=+0.044634224 container died 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c-userdata-shm.mount: Deactivated successfully.
Oct 07 14:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad6812d5d6ccd623c1c8db2e48873105455dd3e4c1a071313da6b9108e14c4dd-merged.mount: Deactivated successfully.
Oct 07 14:39:08 compute-0 podman[389114]: 2025-10-07 14:39:08.538919637 +0000 UTC m=+0.085240579 container cleanup 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:08 compute-0 systemd[1]: libpod-conmon-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope: Deactivated successfully.
Oct 07 14:39:08 compute-0 podman[389140]: 2025-10-07 14:39:08.594465411 +0000 UTC m=+0.037703699 container remove 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c624ba16-3e24-41eb-a5ba-a024ddbcfb64]: (4, ('Tue Oct  7 02:39:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a (70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c)\n70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c\nTue Oct  7 02:39:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a (70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c)\n70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92a09337-3818-400d-a3db-d28e26202b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.604 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:08 compute-0 kernel: tap4bd15a72-c0: left promiscuous mode
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62858df8-3f23-402c-8d4c-2bb9e8ffa113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c967b2-b8cf-4877-b2bb-d80ce6b04219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7ee560-862b-4dce-91d9-1196eda918d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d64f3c82-443b-4641-b8a7-1259ba9be6f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845449, 'reachable_time': 36288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389156, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d4bd15a72\x2dce65\x2d4737\x2db705\x2d4b2b86d3a32a.mount: Deactivated successfully.
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.675 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:39:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.675 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3405ef-f000-4259-a944-3e71d380e63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.722 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847948.7225485, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.723 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Started (Lifecycle Event)
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.751 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.756 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847948.7226646, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.756 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Paused (Lifecycle Event)
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.781 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:08 compute-0 nova_compute[259550]: 2025-10-07 14:39:08.813 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.104 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-deleted-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.104 2 INFO nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Neutron deleted interface 72a26c5e-6ceb-4ee6-b79b-d105eac8054b; detaching it from the instance and deleting it from the info cache
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.105 2 DEBUG nova.network.neutron [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Detach interface failed, port_id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b, reason: Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-deleted-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.108 2 INFO nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Neutron deleted interface 1a175a4f-4ef8-469e-b213-5f8d404858c8; detaching it from the instance and deleting it from the info cache
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.network.neutron [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.110 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Detach interface failed, port_id=1a175a4f-4ef8-469e-b213-5f8d404858c8, reason: Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Processing event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.113 2 WARNING nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state building and task_state spawning.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.116 2 WARNING nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received unexpected event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with vm_state active and task_state deleting.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.116 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.121 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847949.1215708, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.122 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Resumed (Lifecycle Event)
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.124 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.129 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance spawned successfully.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.129 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.143 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.147 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.156 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.157 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.157 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.158 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.158 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.159 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.174 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.225 2 INFO nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 7.43 seconds to spawn the instance on the hypervisor.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.225 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.260 2 DEBUG nova.network.neutron [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.293 2 INFO nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 0.93 seconds to deallocate network for instance.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.297 2 INFO nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 8.68 seconds to build instance.
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.325 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.346 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.346 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.408 2 DEBUG oslo_concurrency.processutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:09 compute-0 ceph-mon[74295]: pgmap v2301: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 2.7 MiB/s wr, 115 op/s
Oct 07 14:39:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712875872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.884 2 DEBUG oslo_concurrency.processutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.890 2 DEBUG nova.compute.provider_tree [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.909 2 DEBUG nova.scheduler.client.report [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.937 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:09 compute-0 nova_compute[259550]: 2025-10-07 14:39:09.965 2 INFO nova.scheduler.client.report [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance c0eb8730-2b26-4cc0-8a9c-019688db568f
Oct 07 14:39:10 compute-0 nova_compute[259550]: 2025-10-07 14:39:10.050 2 DEBUG nova.compute.manager [req-55dc7877-2261-4ea7-a597-abf7b7c40a52 req-96287f0f-7370-4ea9-a842-9a2297023a57 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-deleted-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:10 compute-0 nova_compute[259550]: 2025-10-07 14:39:10.054 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 120 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 2.7 MiB/s wr, 172 op/s
Oct 07 14:39:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/712875872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:11 compute-0 ceph-mon[74295]: pgmap v2302: 305 pgs: 305 active+clean; 120 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 2.7 MiB/s wr, 172 op/s
Oct 07 14:39:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 88 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 157 op/s
Oct 07 14:39:12 compute-0 nova_compute[259550]: 2025-10-07 14:39:12.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:12 compute-0 nova_compute[259550]: 2025-10-07 14:39:12.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:13 compute-0 sudo[389179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:13 compute-0 sudo[389179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:13 compute-0 sudo[389179]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:13 compute-0 sudo[389204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:39:13 compute-0 sudo[389204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:13 compute-0 sudo[389204]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:13 compute-0 sudo[389229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:13 compute-0 sudo[389229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:13 compute-0 sudo[389229]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:13 compute-0 sudo[389254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:39:13 compute-0 sudo[389254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:13 compute-0 ceph-mon[74295]: pgmap v2303: 305 pgs: 305 active+clean; 88 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 157 op/s
Oct 07 14:39:13 compute-0 nova_compute[259550]: 2025-10-07 14:39:13.979 2 DEBUG nova.compute.manager [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:13 compute-0 nova_compute[259550]: 2025-10-07 14:39:13.980 2 DEBUG nova.compute.manager [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:13 compute-0 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:13 compute-0 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:13 compute-0 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:14 compute-0 sudo[389254]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:14 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 301df201-07a7-4802-aa9b-dfe4c00d46e6 does not exist
Oct 07 14:39:14 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 486d70ff-2fe2-404d-b9f4-9b7a5ec177bd does not exist
Oct 07 14:39:14 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e0c25fa-72a6-4f2e-8929-91495baa104a does not exist
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:39:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:39:14 compute-0 sudo[389309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:14 compute-0 sudo[389309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:14 compute-0 sudo[389309]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:14 compute-0 sudo[389343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:39:14 compute-0 sudo[389343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:14 compute-0 sudo[389343]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:14 compute-0 podman[389333]: 2025-10-07 14:39:14.295397072 +0000 UTC m=+0.097822665 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 07 14:39:14 compute-0 podman[389334]: 2025-10-07 14:39:14.314738899 +0000 UTC m=+0.118270382 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:39:14 compute-0 sudo[389393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:14 compute-0 sudo[389393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:14 compute-0 sudo[389393]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:14 compute-0 sudo[389424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:39:14 compute-0 sudo[389424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 07 14:39:14 compute-0 ovn_controller[151684]: 2025-10-07T14:39:14Z|01287|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct 07 14:39:14 compute-0 nova_compute[259550]: 2025-10-07 14:39:14.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:14 compute-0 nova_compute[259550]: 2025-10-07 14:39:14.737 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847939.7338557, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:14 compute-0 nova_compute[259550]: 2025-10-07 14:39:14.737 2 INFO nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Stopped (Lifecycle Event)
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.745656944 +0000 UTC m=+0.055228348 container create dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:39:14 compute-0 nova_compute[259550]: 2025-10-07 14:39:14.759 2 DEBUG nova.compute.manager [None req-4445251c-ed84-4f0a-b298-0e5f5bb17d8f - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:14 compute-0 systemd[1]: Started libpod-conmon-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope.
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.713156245 +0000 UTC m=+0.022727669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.832802612 +0000 UTC m=+0.142374046 container init dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.840979741 +0000 UTC m=+0.150551145 container start dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:39:14 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.84505496 +0000 UTC m=+0.154626364 container attach dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:39:14 compute-0 systemd[1]: libpod-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope: Deactivated successfully.
Oct 07 14:39:14 compute-0 pedantic_albattani[389502]: 167 167
Oct 07 14:39:14 compute-0 conmon[389502]: conmon dd5d648c8679d7914df4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope/container/memory.events
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.852505648 +0000 UTC m=+0.162077062 container died dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f32ad818dbb666c16848bd7f5e709158b85f6e13fa80a067b6c6fa079e227a29-merged.mount: Deactivated successfully.
Oct 07 14:39:14 compute-0 podman[389486]: 2025-10-07 14:39:14.895858347 +0000 UTC m=+0.205429751 container remove dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:39:14 compute-0 systemd[1]: libpod-conmon-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope: Deactivated successfully.
Oct 07 14:39:15 compute-0 podman[389528]: 2025-10-07 14:39:15.06701245 +0000 UTC m=+0.042445675 container create 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:39:15 compute-0 systemd[1]: Started libpod-conmon-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope.
Oct 07 14:39:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:15 compute-0 podman[389528]: 2025-10-07 14:39:15.142781434 +0000 UTC m=+0.118214679 container init 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 07 14:39:15 compute-0 podman[389528]: 2025-10-07 14:39:15.051546527 +0000 UTC m=+0.026979782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:15 compute-0 podman[389528]: 2025-10-07 14:39:15.15382168 +0000 UTC m=+0.129254905 container start 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:39:15 compute-0 podman[389528]: 2025-10-07 14:39:15.157753305 +0000 UTC m=+0.133186530 container attach 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:39:15 compute-0 ovn_controller[151684]: 2025-10-07T14:39:15Z|01288|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct 07 14:39:15 compute-0 nova_compute[259550]: 2025-10-07 14:39:15.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:15 compute-0 nova_compute[259550]: 2025-10-07 14:39:15.451 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:15 compute-0 nova_compute[259550]: 2025-10-07 14:39:15.452 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:15 compute-0 nova_compute[259550]: 2025-10-07 14:39:15.568 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:16 compute-0 ceph-mon[74295]: pgmap v2304: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 07 14:39:16 compute-0 stupefied_jang[389545]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:39:16 compute-0 stupefied_jang[389545]: --> relative data size: 1.0
Oct 07 14:39:16 compute-0 stupefied_jang[389545]: --> All data devices are unavailable
Oct 07 14:39:16 compute-0 systemd[1]: libpod-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Deactivated successfully.
Oct 07 14:39:16 compute-0 podman[389528]: 2025-10-07 14:39:16.281589384 +0000 UTC m=+1.257022619 container died 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:16 compute-0 systemd[1]: libpod-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Consumed 1.071s CPU time.
Oct 07 14:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091-merged.mount: Deactivated successfully.
Oct 07 14:39:16 compute-0 podman[389528]: 2025-10-07 14:39:16.398459247 +0000 UTC m=+1.373892482 container remove 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:39:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 254 KiB/s wr, 141 op/s
Oct 07 14:39:16 compute-0 systemd[1]: libpod-conmon-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Deactivated successfully.
Oct 07 14:39:16 compute-0 sudo[389424]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:16 compute-0 sudo[389586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:16 compute-0 sudo[389586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:16 compute-0 sudo[389586]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:16 compute-0 sudo[389611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:39:16 compute-0 sudo[389611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:16 compute-0 sudo[389611]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:16 compute-0 sudo[389636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:16 compute-0 sudo[389636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:16 compute-0 sudo[389636]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:16 compute-0 sudo[389661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:39:16 compute-0 sudo[389661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:16 compute-0 nova_compute[259550]: 2025-10-07 14:39:16.991 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847941.9615107, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:16 compute-0 nova_compute[259550]: 2025-10-07 14:39:16.992 2 INFO nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Stopped (Lifecycle Event)
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.015897565 +0000 UTC m=+0.039659630 container create 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 14:39:17 compute-0 systemd[1]: Started libpod-conmon-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope.
Oct 07 14:39:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.079564977 +0000 UTC m=+0.103327062 container init 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:39:17 compute-0 nova_compute[259550]: 2025-10-07 14:39:17.087 2 DEBUG nova.compute.manager [None req-ea369d32-26b9-4218-8fec-3b6a3f087bb9 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.091384532 +0000 UTC m=+0.115146597 container start 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:39:17 compute-0 reverent_mestorf[389742]: 167 167
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:16.999392334 +0000 UTC m=+0.023154429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.095236795 +0000 UTC m=+0.118998880 container attach 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:17 compute-0 systemd[1]: libpod-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope: Deactivated successfully.
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.096806837 +0000 UTC m=+0.120568902 container died 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a9059c762456d477cdba9d0f8855084f60d6402cceff09c737cdae94a7f9c00-merged.mount: Deactivated successfully.
Oct 07 14:39:17 compute-0 podman[389726]: 2025-10-07 14:39:17.132604253 +0000 UTC m=+0.156366318 container remove 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:39:17 compute-0 systemd[1]: libpod-conmon-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope: Deactivated successfully.
Oct 07 14:39:17 compute-0 podman[389767]: 2025-10-07 14:39:17.308951065 +0000 UTC m=+0.039964018 container create 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:39:17 compute-0 systemd[1]: Started libpod-conmon-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope.
Oct 07 14:39:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:17 compute-0 podman[389767]: 2025-10-07 14:39:17.291825258 +0000 UTC m=+0.022838231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:17 compute-0 podman[389767]: 2025-10-07 14:39:17.3940953 +0000 UTC m=+0.125108273 container init 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:39:17 compute-0 podman[389767]: 2025-10-07 14:39:17.400249226 +0000 UTC m=+0.131262179 container start 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:39:17 compute-0 podman[389767]: 2025-10-07 14:39:17.405519316 +0000 UTC m=+0.136532289 container attach 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:39:17 compute-0 nova_compute[259550]: 2025-10-07 14:39:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:17 compute-0 nova_compute[259550]: 2025-10-07 14:39:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:18 compute-0 nova_compute[259550]: 2025-10-07 14:39:18.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:18 compute-0 ceph-mon[74295]: pgmap v2305: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 254 KiB/s wr, 141 op/s
Oct 07 14:39:18 compute-0 unruffled_benz[389783]: {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     "0": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "devices": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "/dev/loop3"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             ],
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_name": "ceph_lv0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_size": "21470642176",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "name": "ceph_lv0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "tags": {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.crush_device_class": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.encrypted": "0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_id": "0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.vdo": "0"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             },
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "vg_name": "ceph_vg0"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         }
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     ],
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     "1": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "devices": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "/dev/loop4"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             ],
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_name": "ceph_lv1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_size": "21470642176",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "name": "ceph_lv1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "tags": {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.crush_device_class": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.encrypted": "0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_id": "1",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.vdo": "0"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             },
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "vg_name": "ceph_vg1"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         }
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     ],
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     "2": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "devices": [
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "/dev/loop5"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             ],
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_name": "ceph_lv2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_size": "21470642176",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "name": "ceph_lv2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "tags": {
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.cluster_name": "ceph",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.crush_device_class": "",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.encrypted": "0",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osd_id": "2",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:                 "ceph.vdo": "0"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             },
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "type": "block",
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:             "vg_name": "ceph_vg2"
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:         }
Oct 07 14:39:18 compute-0 unruffled_benz[389783]:     ]
Oct 07 14:39:18 compute-0 unruffled_benz[389783]: }
Oct 07 14:39:18 compute-0 systemd[1]: libpod-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope: Deactivated successfully.
Oct 07 14:39:18 compute-0 podman[389792]: 2025-10-07 14:39:18.278999865 +0000 UTC m=+0.025485141 container died 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62-merged.mount: Deactivated successfully.
Oct 07 14:39:18 compute-0 podman[389792]: 2025-10-07 14:39:18.357219036 +0000 UTC m=+0.103704282 container remove 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:18 compute-0 systemd[1]: libpod-conmon-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope: Deactivated successfully.
Oct 07 14:39:18 compute-0 sudo[389661]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct 07 14:39:18 compute-0 sudo[389807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:18 compute-0 sudo[389807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:18 compute-0 sudo[389807]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:18 compute-0 sudo[389832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:39:18 compute-0 sudo[389832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:18 compute-0 sudo[389832]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:18 compute-0 sudo[389857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:18 compute-0 sudo[389857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:18 compute-0 sudo[389857]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:18 compute-0 sudo[389882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:39:18 compute-0 sudo[389882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:18 compute-0 podman[389949]: 2025-10-07 14:39:18.958022909 +0000 UTC m=+0.043426781 container create a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:39:19 compute-0 systemd[1]: Started libpod-conmon-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope.
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:18.937383668 +0000 UTC m=+0.022787580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:19.048835326 +0000 UTC m=+0.134239208 container init a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:19.057483917 +0000 UTC m=+0.142887799 container start a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:39:19 compute-0 xenodochial_colden[389965]: 167 167
Oct 07 14:39:19 compute-0 systemd[1]: libpod-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope: Deactivated successfully.
Oct 07 14:39:19 compute-0 conmon[389965]: conmon a3ceb52df5aee41106c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope/container/memory.events
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:19.063418225 +0000 UTC m=+0.148822137 container attach a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:19.063657582 +0000 UTC m=+0.149061464 container died a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e913db734ec3b265293b124f4e8efb6aede0a0b2feeb1c34a5f95323b6f02bc-merged.mount: Deactivated successfully.
Oct 07 14:39:19 compute-0 podman[389949]: 2025-10-07 14:39:19.101195275 +0000 UTC m=+0.186599157 container remove a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:39:19 compute-0 systemd[1]: libpod-conmon-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope: Deactivated successfully.
Oct 07 14:39:19 compute-0 podman[389988]: 2025-10-07 14:39:19.265734792 +0000 UTC m=+0.042435795 container create 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:39:19 compute-0 systemd[1]: Started libpod-conmon-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope.
Oct 07 14:39:19 compute-0 podman[389988]: 2025-10-07 14:39:19.249103617 +0000 UTC m=+0.025804640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:39:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:19 compute-0 podman[389988]: 2025-10-07 14:39:19.364072869 +0000 UTC m=+0.140773892 container init 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:39:19 compute-0 podman[389988]: 2025-10-07 14:39:19.372369401 +0000 UTC m=+0.149070404 container start 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:39:19 compute-0 podman[389988]: 2025-10-07 14:39:19.375642378 +0000 UTC m=+0.152343411 container attach 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:39:20 compute-0 ceph-mon[74295]: pgmap v2306: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct 07 14:39:20 compute-0 nova_compute[259550]: 2025-10-07 14:39:20.227 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847945.2260573, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:20 compute-0 nova_compute[259550]: 2025-10-07 14:39:20.229 2 INFO nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Stopped (Lifecycle Event)
Oct 07 14:39:20 compute-0 nova_compute[259550]: 2025-10-07 14:39:20.248 2 DEBUG nova.compute.manager [None req-28d03074-648a-422c-9e4f-659eaeb94ae7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:20 compute-0 gallant_galois[390005]: {
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_id": 2,
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "type": "bluestore"
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     },
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_id": 1,
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "type": "bluestore"
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     },
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_id": 0,
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:39:20 compute-0 gallant_galois[390005]:         "type": "bluestore"
Oct 07 14:39:20 compute-0 gallant_galois[390005]:     }
Oct 07 14:39:20 compute-0 gallant_galois[390005]: }
Oct 07 14:39:20 compute-0 systemd[1]: libpod-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Deactivated successfully.
Oct 07 14:39:20 compute-0 podman[389988]: 2025-10-07 14:39:20.394429851 +0000 UTC m=+1.171130854 container died 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:39:20 compute-0 systemd[1]: libpod-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Consumed 1.017s CPU time.
Oct 07 14:39:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct 07 14:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb-merged.mount: Deactivated successfully.
Oct 07 14:39:20 compute-0 podman[389988]: 2025-10-07 14:39:20.505119068 +0000 UTC m=+1.281820071 container remove 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:39:20 compute-0 systemd[1]: libpod-conmon-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Deactivated successfully.
Oct 07 14:39:20 compute-0 sudo[389882]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:39:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:39:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:20 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 182b7e8e-9c5a-42fa-afd3-253559fe7a9f does not exist
Oct 07 14:39:20 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ebcd22f1-cc67-4cd1-8748-edf02989e4bf does not exist
Oct 07 14:39:20 compute-0 sudo[390051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:39:20 compute-0 sudo[390051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:20 compute-0 sudo[390051]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:20 compute-0 sudo[390076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:39:20 compute-0 sudo[390076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:39:20 compute-0 sudo[390076]: pam_unix(sudo:session): session closed for user root
Oct 07 14:39:21 compute-0 ceph-mon[74295]: pgmap v2307: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct 07 14:39:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 94 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 635 KiB/s wr, 73 op/s
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:39:22
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 07 14:39:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:39:22 compute-0 nova_compute[259550]: 2025-10-07 14:39:22.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:22 compute-0 nova_compute[259550]: 2025-10-07 14:39:22.782 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847947.7813118, c0eb8730-2b26-4cc0-8a9c-019688db568f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:22 compute-0 nova_compute[259550]: 2025-10-07 14:39:22.783 2 INFO nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Stopped (Lifecycle Event)
Oct 07 14:39:22 compute-0 nova_compute[259550]: 2025-10-07 14:39:22.804 2 DEBUG nova.compute.manager [None req-af434e86-1269-440b-a259-1b8b941a780f - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:22 compute-0 nova_compute[259550]: 2025-10-07 14:39:22.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:22 compute-0 ovn_controller[151684]: 2025-10-07T14:39:22Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:2c:15 10.100.0.7
Oct 07 14:39:22 compute-0 ovn_controller[151684]: 2025-10-07T14:39:22Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:2c:15 10.100.0.7
Oct 07 14:39:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:39:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:39:23 compute-0 ceph-mon[74295]: pgmap v2308: 305 pgs: 305 active+clean; 94 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 635 KiB/s wr, 73 op/s
Oct 07 14:39:23 compute-0 nova_compute[259550]: 2025-10-07 14:39:23.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 119 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 07 14:39:25 compute-0 ceph-mon[74295]: pgmap v2309: 305 pgs: 305 active+clean; 119 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 07 14:39:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:39:27 compute-0 ceph-mon[74295]: pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:39:27 compute-0 nova_compute[259550]: 2025-10-07 14:39:27.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:27 compute-0 nova_compute[259550]: 2025-10-07 14:39:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:28 compute-0 nova_compute[259550]: 2025-10-07 14:39:28.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.372 2 INFO nova.compute.manager [None req-954a3246-54c1-4410-91ce-fddd675c832a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.377 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:39:29 compute-0 ceph-mon[74295]: pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.658 2 DEBUG nova.objects.instance [None req-462ecbaf-4ade-404b-8366-ff48d46e7ea0 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.696 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847969.6958537, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Paused (Lifecycle Event)
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.724 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:29 compute-0 nova_compute[259550]: 2025-10-07 14:39:29.754 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:39:30 compute-0 kernel: tap660a78e9-3d (unregistering): left promiscuous mode
Oct 07 14:39:30 compute-0 NetworkManager[44949]: <info>  [1759847970.4000] device (tap660a78e9-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:39:30 compute-0 ovn_controller[151684]: 2025-10-07T14:39:30Z|01289|binding|INFO|Releasing lport 660a78e9-3d3f-4949-88f4-3cad47b74229 from this chassis (sb_readonly=0)
Oct 07 14:39:30 compute-0 ovn_controller[151684]: 2025-10-07T14:39:30Z|01290|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 down in Southbound
Oct 07 14:39:30 compute-0 ovn_controller[151684]: 2025-10-07T14:39:30Z|01291|binding|INFO|Removing iface tap660a78e9-3d ovn-installed in OVS
Oct 07 14:39:30 compute-0 nova_compute[259550]: 2025-10-07 14:39:30.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.427 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.428 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 unbound from our chassis
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.429 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d58f5a01-ad0a-4168-95c9-fc8189cce054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.430 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[040411a7-484f-4d1f-8a9c-ea861b651ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.430 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace which is not needed anymore
Oct 07 14:39:30 compute-0 nova_compute[259550]: 2025-10-07 14:39:30.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:30 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 07 14:39:30 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Consumed 13.793s CPU time.
Oct 07 14:39:30 compute-0 systemd-machined[214580]: Machine qemu-150-instance-00000078 terminated.
Oct 07 14:39:30 compute-0 nova_compute[259550]: 2025-10-07 14:39:30.568 2 DEBUG nova.compute.manager [None req-462ecbaf-4ade-404b-8366-ff48d46e7ea0 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : haproxy version is 2.8.14-c23fe91
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : path to executable is /usr/sbin/haproxy
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : Exiting Master process...
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : Exiting Master process...
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [ALERT]    (389086) : Current worker (389088) exited with code 143 (Terminated)
Oct 07 14:39:30 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : All workers exited. Exiting... (0)
Oct 07 14:39:30 compute-0 systemd[1]: libpod-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope: Deactivated successfully.
Oct 07 14:39:30 compute-0 podman[390128]: 2025-10-07 14:39:30.600756638 +0000 UTC m=+0.056865561 container died 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f-userdata-shm.mount: Deactivated successfully.
Oct 07 14:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-01b59d4dc6edabb37a3c0b5853d080b83aedff571bda483335c1c14eb6b156ce-merged.mount: Deactivated successfully.
Oct 07 14:39:30 compute-0 podman[390128]: 2025-10-07 14:39:30.715146804 +0000 UTC m=+0.171255727 container cleanup 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:30 compute-0 systemd[1]: libpod-conmon-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope: Deactivated successfully.
Oct 07 14:39:30 compute-0 podman[390168]: 2025-10-07 14:39:30.780909861 +0000 UTC m=+0.043313139 container remove 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf04536-c899-4ea9-9e47-ef5a753517ec]: (4, ('Tue Oct  7 02:39:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f)\n4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f\nTue Oct  7 02:39:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f)\n4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f84dfae-9cc5-4e85-93ef-15cd21bc5c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:30 compute-0 nova_compute[259550]: 2025-10-07 14:39:30.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:30 compute-0 kernel: tapd58f5a01-a0: left promiscuous mode
Oct 07 14:39:30 compute-0 nova_compute[259550]: 2025-10-07 14:39:30.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc709d7c-8569-4354-8452-f59ad9b0e5a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cbf562-8a19-4cc3-a48b-eec6ae653311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dac76122-8d1d-405b-bd00-90b37c67bbe4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.876 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3621bd85-f3d3-4b34-9d1f-98412963574e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853107, 'reachable_time': 16460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390187, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dd58f5a01\x2dad0a\x2d4168\x2d95c9\x2dfc8189cce054.mount: Deactivated successfully.
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.879 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:39:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.879 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fad9e4-d758-44f2-bad4-f7f4eb3550b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.068 2 DEBUG nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.068 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:31 compute-0 nova_compute[259550]: 2025-10-07 14:39:31.069 2 WARNING nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state None.
Oct 07 14:39:31 compute-0 ceph-mon[74295]: pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:39:32 compute-0 podman[390188]: 2025-10-07 14:39:32.079029097 +0000 UTC m=+0.067197247 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:32 compute-0 podman[390189]: 2025-10-07 14:39:32.099061613 +0000 UTC m=+0.084377616 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:39:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:39:32 compute-0 nova_compute[259550]: 2025-10-07 14:39:32.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:39:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:39:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:39:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:39:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:39:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:39:32 compute-0 nova_compute[259550]: 2025-10-07 14:39:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:32 compute-0 nova_compute[259550]: 2025-10-07 14:39:32.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.002 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.002 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.021 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:39:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.115 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.116 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.124 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.124 2 INFO nova.compute.claims [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.140 2 DEBUG nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.140 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.141 2 WARNING nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state None.
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.249 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275762732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.728 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.736 2 DEBUG nova.compute.provider_tree [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.795 2 DEBUG nova.scheduler.client.report [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.830 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.830 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.898 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.899 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:39:33 compute-0 ceph-mon[74295]: pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:39:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1275762732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.916 2 INFO nova.compute.manager [None req-723895d9-aead-47cf-8e1b-c3dd486a5a47 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.944 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:39:33 compute-0 nova_compute[259550]: 2025-10-07 14:39:33.976 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.058 2 DEBUG nova.policy [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.072 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.074 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.075 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating image(s)
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.101 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.123 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.146 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.150 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.212 2 INFO nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Resuming
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.213 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.226 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.227 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.227 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.228 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.253 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.258 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d0b10640-5492-4d8f-8b94-a49a15b6e702_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.313 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.314 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.314 2 DEBUG nova.network.neutron [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:39:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Oct 07 14:39:34 compute-0 nova_compute[259550]: 2025-10-07 14:39:34.963 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully created port: b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:39:35 compute-0 nova_compute[259550]: 2025-10-07 14:39:35.949 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully updated port: b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:39:35 compute-0 nova_compute[259550]: 2025-10-07 14:39:35.968 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:35 compute-0 nova_compute[259550]: 2025-10-07 14:39:35.969 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:35 compute-0 nova_compute[259550]: 2025-10-07 14:39:35.969 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.011 2 DEBUG nova.network.neutron [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.033 2 DEBUG nova.compute.manager [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.034 2 DEBUG nova.compute.manager [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.034 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.043 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.049 2 DEBUG nova.virt.libvirt.vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:30Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.049 2 DEBUG nova.network.os_vif_util [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.050 2 DEBUG nova.network.os_vif_util [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.050 2 DEBUG os_vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660a78e9-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660a78e9-3d, col_values=(('external_ids', {'iface-id': '660a78e9-3d3f-4949-88f4-3cad47b74229', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:2c:15', 'vm-uuid': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.056 2 INFO os_vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.133 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.376 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 12 op/s
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.4606] manager: (tap660a78e9-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct 07 14:39:36 compute-0 kernel: tap660a78e9-3d: entered promiscuous mode
Oct 07 14:39:36 compute-0 ovn_controller[151684]: 2025-10-07T14:39:36Z|01292|binding|INFO|Claiming lport 660a78e9-3d3f-4949-88f4-3cad47b74229 for this chassis.
Oct 07 14:39:36 compute-0 ovn_controller[151684]: 2025-10-07T14:39:36Z|01293|binding|INFO|660a78e9-3d3f-4949-88f4-3cad47b74229: Claiming fa:16:3e:b2:2c:15 10.100.0.7
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.481 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 bound to our chassis
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:36 compute-0 systemd-udevd[390359]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:39:36 compute-0 ovn_controller[151684]: 2025-10-07T14:39:36Z|01294|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 ovn-installed in OVS
Oct 07 14:39:36 compute-0 ovn_controller[151684]: 2025-10-07T14:39:36Z|01295|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 up in Southbound
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67e69aa0-2238-444b-90df-12c441eb5507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.495 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd58f5a01-a1 in ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.497 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd58f5a01-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.497 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36498ff0-5416-4bce-a918-662d6fb972a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.4986] device (tap660a78e9-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.4996] device (tap660a78e9-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.499 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d83c422f-f7e4-4973-94f1-02ddca082680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 systemd-machined[214580]: New machine qemu-151-instance-00000078.
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.511 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[25539b9f-bde6-4955-aa41-5f5a245f80dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd02652-f0ea-4488-9a15-b4a28eb91f3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.571 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a61d55f-46cf-4534-a097-26a7401eb544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac64f27c-751e-4aec-90a7-3ffb3dccf24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.5823] manager: (tapd58f5a01-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct 07 14:39:36 compute-0 ceph-mon[74295]: pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.619 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[598f20db-8d58-4579-ad89-6a698d1e1798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.624 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[74a0db51-e036-4715-bbec-40f10f5a6d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.6497] device (tapd58f5a01-a0): carrier: link connected
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.658 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4a56341a-e0c7-42c6-a396-aeb6e1d9c2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9683c70-7560-48c7-8f78-2050c1cc196d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856021, 'reachable_time': 34211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390393, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.727 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[240f7898-0575-4b2e-8d70-054d46bdeb6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:dbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856021, 'tstamp': 856021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390394, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02cbc6d6-000f-4ed7-9b8f-fd6a8c101a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856021, 'reachable_time': 34211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390395, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2341460-8c34-4318-af8c-f7fe285c184d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee5c566-95d7-479d-b855-0357cd156f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.863 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.863 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.864 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd58f5a01-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 kernel: tapd58f5a01-a0: entered promiscuous mode
Oct 07 14:39:36 compute-0 NetworkManager[44949]: <info>  [1759847976.8678] manager: (tapd58f5a01-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd58f5a01-a0, col_values=(('external_ids', {'iface-id': 'd4b37819-62c7-42bc-af8c-24aff0a13de9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 ovn_controller[151684]: 2025-10-07T14:39:36Z|01296|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.876 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[200981b6-c982-44cf-b4f9-8667a43eae21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.877 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:39:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.878 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'env', 'PROCESS_TAG=haproxy-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d58f5a01-ad0a-4168-95c9-fc8189cce054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:36 compute-0 nova_compute[259550]: 2025-10-07 14:39:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.265 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.287 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.288 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance network_info: |[{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.289 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.289 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:37 compute-0 podman[390434]: 2025-10-07 14:39:37.221043095 +0000 UTC m=+0.026305964 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:37 compute-0 nova_compute[259550]: 2025-10-07 14:39:37.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:37 compute-0 ceph-mon[74295]: pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 12 op/s
Oct 07 14:39:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.135 2 DEBUG nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.136 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.136 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.137 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.137 2 DEBUG nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.137 2 WARNING nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state resuming.
Oct 07 14:39:38 compute-0 podman[390434]: 2025-10-07 14:39:38.226993874 +0000 UTC m=+1.032256713 container create 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.317 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d0b10640-5492-4d8f-8b94-a49a15b6e702_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:38 compute-0 systemd[1]: Started libpod-conmon-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope.
Oct 07 14:39:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe4b90e1f92652e9319c7c056a0787abaa2c6f38b64b25f838b359a30878577/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:38.387 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.394 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:39:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 14 KiB/s wr, 11 op/s
Oct 07 14:39:38 compute-0 podman[390434]: 2025-10-07 14:39:38.427560244 +0000 UTC m=+1.232823113 container init 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 14:39:38 compute-0 podman[390434]: 2025-10-07 14:39:38.433107782 +0000 UTC m=+1.238370621 container start 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:38 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : New worker (390544) forked
Oct 07 14:39:38 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : Loading success.
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.490 2 DEBUG nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.491 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.493 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.494 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847978.4487634, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.498 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Started (Lifecycle Event)
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.526 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.527 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance running successfully.
Oct 07 14:39:38 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.531 2 DEBUG nova.virt.libvirt.guest [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.531 2 DEBUG nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:38.557 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.580 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.581 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847978.452041, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Resumed (Lifecycle Event)
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.637 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.773 2 DEBUG nova.objects.instance [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.790 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.790 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Ensure instance console log exists: /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.793 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start _get_guest_xml network_info=[{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.815 2 WARNING nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.820 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.821 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.827 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.828 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.834 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.995 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:38 compute-0 nova_compute[259550]: 2025-10-07 14:39:38.996 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.013 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.042 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a0:12 2001:db8:0:1:f816:3eff:fe6e:a012 2001:db8::f816:3eff:fe6e:a012'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6e:a012/64 2001:db8::f816:3eff:fe6e:a012/64', 'neutron:device_id': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=763708cd-58bb-4680-a4f7-042aa711a366) old=Port_Binding(mac=['fa:16:3e:6e:a0:12 2001:db8::f816:3eff:fe6e:a012'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:a012/64', 'neutron:device_id': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.044 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 763708cd-58bb-4680-a4f7-042aa711a366 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 updated
Oct 07 14:39:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.045 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abe90ba0-a518-4cef-a49b-de57485faec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.046 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[791503b8-003b-4fb2-b22c-424b7db7ae26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:39:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047354148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.323 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.348 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.360 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.479 2 INFO nova.compute.manager [None req-3e94abce-e60d-4417-902b-0379256134ea 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.486 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:39:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:39:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598505080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.878 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.880 2 DEBUG nova.virt.libvirt.vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:34Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.880 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.881 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.882 2 DEBUG nova.objects.instance [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.901 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <name>instance-00000079</name>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:39:38</nova:creationTime>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct 07 14:39:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="serial">d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="uuid">d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk">
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config">
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:39:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:6d:ee:73"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <target dev="tapb7bf5de8-3b"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log" append="off"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:39:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:39:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:39:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:39:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:39:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.902 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Preparing to wait for external event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.904 2 DEBUG nova.virt.libvirt.vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:34Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.904 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.905 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.905 2 DEBUG os_vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7bf5de8-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7bf5de8-3b, col_values=(('external_ids', {'iface-id': 'b7bf5de8-3ba0-43cd-a839-d8812cbe4276', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:ee:73', 'vm-uuid': 'd0b10640-5492-4d8f-8b94-a49a15b6e702'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:39 compute-0 NetworkManager[44949]: <info>  [1759847979.9132] manager: (tapb7bf5de8-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:39 compute-0 nova_compute[259550]: 2025-10-07 14:39:39.920 2 INFO os_vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b')
Oct 07 14:39:39 compute-0 ceph-mon[74295]: pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 14 KiB/s wr, 11 op/s
Oct 07 14:39:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4047354148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1598505080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.011 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.012 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.013 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:6d:ee:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.013 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Using config drive
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.031 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.232 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.235 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.238 2 INFO nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Terminating instance
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.239 2 DEBUG nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.317 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.318 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.318 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.319 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.319 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.319 2 WARNING nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state active and task_state deleting.
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 158 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct 07 14:39:40 compute-0 kernel: tap660a78e9-3d (unregistering): left promiscuous mode
Oct 07 14:39:40 compute-0 NetworkManager[44949]: <info>  [1759847980.4367] device (tap660a78e9-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 ovn_controller[151684]: 2025-10-07T14:39:40Z|01297|binding|INFO|Releasing lport 660a78e9-3d3f-4949-88f4-3cad47b74229 from this chassis (sb_readonly=0)
Oct 07 14:39:40 compute-0 ovn_controller[151684]: 2025-10-07T14:39:40Z|01298|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 down in Southbound
Oct 07 14:39:40 compute-0 ovn_controller[151684]: 2025-10-07T14:39:40Z|01299|binding|INFO|Removing iface tap660a78e9-3d ovn-installed in OVS
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.460 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.461 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 unbound from our chassis
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.462 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d58f5a01-ad0a-4168-95c9-fc8189cce054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed2cd4e-dc4d-4b4c-aa36-e56e46229b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.463 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace which is not needed anymore
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 07 14:39:40 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 1.305s CPU time.
Oct 07 14:39:40 compute-0 systemd-machined[214580]: Machine qemu-151-instance-00000078 terminated.
Oct 07 14:39:40 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : haproxy version is 2.8.14-c23fe91
Oct 07 14:39:40 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : path to executable is /usr/sbin/haproxy
Oct 07 14:39:40 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [WARNING]  (390542) : Exiting Master process...
Oct 07 14:39:40 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [ALERT]    (390542) : Current worker (390544) exited with code 143 (Terminated)
Oct 07 14:39:40 compute-0 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [WARNING]  (390542) : All workers exited. Exiting... (0)
Oct 07 14:39:40 compute-0 systemd[1]: libpod-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope: Deactivated successfully.
Oct 07 14:39:40 compute-0 conmon[390491]: conmon 8c28b3fe46c43f2fb200 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope/container/memory.events
Oct 07 14:39:40 compute-0 podman[390680]: 2025-10-07 14:39:40.608396086 +0000 UTC m=+0.065643205 container died 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.680 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance destroyed successfully.
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.681 2 DEBUG nova.objects.instance [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.698 2 DEBUG nova.virt.libvirt.vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:38Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.699 2 DEBUG nova.network.os_vif_util [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.699 2 DEBUG nova.network.os_vif_util [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.700 2 DEBUG os_vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap660a78e9-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfe4b90e1f92652e9319c7c056a0787abaa2c6f38b64b25f838b359a30878577-merged.mount: Deactivated successfully.
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.710 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating config drive at /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.716 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16ro6gwy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:40 compute-0 podman[390680]: 2025-10-07 14:39:40.723231084 +0000 UTC m=+0.180478203 container cleanup 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:39:40 compute-0 systemd[1]: libpod-conmon-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope: Deactivated successfully.
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.765 2 INFO os_vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')
Oct 07 14:39:40 compute-0 podman[390726]: 2025-10-07 14:39:40.786668019 +0000 UTC m=+0.042271130 container remove 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.792 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f57c1ef-2aa9-46bb-920e-a226dbb1dd67]: (4, ('Tue Oct  7 02:39:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b)\n8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b\nTue Oct  7 02:39:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b)\n8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.794 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[863f9766-9ffb-4765-94a1-e57e04db7c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.794 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:40 compute-0 kernel: tapd58f5a01-a0: left promiscuous mode
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.818 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd233dc-9da3-49a4-a26e-a11adccd8b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fb350ac3-3ac4-430f-ba29-450193ca3860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d498baef-fb08-4ffe-af60-53d713a5b510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.861 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16ro6gwy" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[252cd932-8f22-4593-9033-c46e9842c953]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856012, 'reachable_time': 26775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390765, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dd58f5a01\x2dad0a\x2d4168\x2d95c9\x2dfc8189cce054.mount: Deactivated successfully.
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.871 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:39:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.871 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3c554c3f-358d-4a60-b5e5-f747f17565ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.888 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:40 compute-0 nova_compute[259550]: 2025-10-07 14:39:40.891 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.157 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.158 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deleting local config drive /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config because it was imported into RBD.
Oct 07 14:39:41 compute-0 kernel: tapb7bf5de8-3b: entered promiscuous mode
Oct 07 14:39:41 compute-0 systemd-udevd[390661]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.2338] manager: (tapb7bf5de8-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 ovn_controller[151684]: 2025-10-07T14:39:41Z|01300|binding|INFO|Claiming lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for this chassis.
Oct 07 14:39:41 compute-0 ovn_controller[151684]: 2025-10-07T14:39:41Z|01301|binding|INFO|b7bf5de8-3ba0-43cd-a839-d8812cbe4276: Claiming fa:16:3e:6d:ee:73 10.100.0.7
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.245 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ee:73 10.100.0.7'], port_security=['fa:16:3e:6d:ee:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8da4ff39-44c8-491e-a635-6be0569feae9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1c4ac2-8721-482c-b415-fdc398b5953a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7bf5de8-3ba0-43cd-a839-d8812cbe4276) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.246 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 in datapath 580b59e0-70f8-44c3-a35f-9c4f88691f96 bound to our chassis
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.248 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580b59e0-70f8-44c3-a35f-9c4f88691f96
Oct 07 14:39:41 compute-0 ovn_controller[151684]: 2025-10-07T14:39:41Z|01302|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 ovn-installed in OVS
Oct 07 14:39:41 compute-0 ovn_controller[151684]: 2025-10-07T14:39:41Z|01303|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 up in Southbound
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.2556] device (tapb7bf5de8-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.2568] device (tapb7bf5de8-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b91eddd-f0c6-467f-9d55-099a5911c3c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.265 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap580b59e0-71 in ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.267 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap580b59e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.267 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0819d43-6232-4e9a-b10e-612f699a3f5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[849029c1-f830-4d9a-975d-0efa7810f07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 systemd-machined[214580]: New machine qemu-152-instance-00000079.
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.289 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2fbff2-b8e2-4599-b134-9a611b4f9345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.306 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7b9994-2adc-4f95-a1f0-cef9cf96c728]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.335 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15de7b4a-ba76-4e24-850a-8b79f849090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d6ca82-9936-424e-a301-16868858d3af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.3468] manager: (tap580b59e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/525)
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.383 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbf89f9-adb0-4da7-9be1-de47dc408c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.386 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b488e5b-9f02-47f1-a1a9-afcf8b49b6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.410 2 INFO nova.virt.libvirt.driver [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deleting instance files /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_del
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.410 2 INFO nova.virt.libvirt.driver [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deletion of /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_del complete
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.4134] device (tap580b59e0-70): carrier: link connected
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcce059-eeec-4faf-875d-fd978fa92138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.434 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6df63b47-940c-4fb3-ad39-a5792b03ef4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580b59e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:9a:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856497, 'reachable_time': 26637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390848, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[646d5704-4425-4eec-826b-0dcbc23253e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:9a05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856497, 'tstamp': 856497}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390849, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41c2be65-b51d-406a-9e5b-802441cef51d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580b59e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:9a:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856497, 'reachable_time': 26637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390850, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.479 2 INFO nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.479 2 DEBUG oslo.service.loopingcall [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.480 2 DEBUG nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.480 2 DEBUG nova.network.neutron [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.504 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f70170a-52e1-46ce-a804-6c444a1ea16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.567 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a1626e-abd8-4a35-9e4d-b73c5caeb30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.569 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580b59e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.569 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.570 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580b59e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 NetworkManager[44949]: <info>  [1759847981.5727] manager: (tap580b59e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Oct 07 14:39:41 compute-0 kernel: tap580b59e0-70: entered promiscuous mode
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.576 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580b59e0-70, col_values=(('external_ids', {'iface-id': '406dfedc-cd29-46b7-9b91-7b006ecd582c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 ovn_controller[151684]: 2025-10-07T14:39:41Z|01304|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:39:41 compute-0 nova_compute[259550]: 2025-10-07 14:39:41.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.596 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.597 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2c9d25-dcea-46c0-85ba-de123b7a9a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.598 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-580b59e0-70f8-44c3-a35f-9c4f88691f96
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 580b59e0-70f8-44c3-a35f-9c4f88691f96
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:39:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.600 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'env', 'PROCESS_TAG=haproxy-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/580b59e0-70f8-44c3-a35f-9c4f88691f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:39:41 compute-0 ceph-mon[74295]: pgmap v2317: 305 pgs: 305 active+clean; 158 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct 07 14:39:41 compute-0 podman[390917]: 2025-10-07 14:39:41.985967906 +0000 UTC m=+0.045807496 container create 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:39:42 compute-0 systemd[1]: Started libpod-conmon-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope.
Oct 07 14:39:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:39:42 compute-0 podman[390917]: 2025-10-07 14:39:41.962757125 +0000 UTC m=+0.022596745 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80f1f94cd7157cc848e1780ed5de7036ed725f787faa3cc6a7f92f3662579a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:39:42 compute-0 podman[390917]: 2025-10-07 14:39:42.087979331 +0000 UTC m=+0.147818931 container init 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:39:42 compute-0 podman[390917]: 2025-10-07 14:39:42.093976831 +0000 UTC m=+0.153816431 container start 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:39:42 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : New worker (390944) forked
Oct 07 14:39:42 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : Loading success.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.202 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.202 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.225 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.284 2 DEBUG nova.network.neutron [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.305 2 INFO nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 0.82 seconds to deallocate network for instance.
Oct 07 14:39:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 167 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.444 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.4439216, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.444 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Started (Lifecycle Event)
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.505 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.509 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.444058, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.510 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Paused (Lifecycle Event)
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.550 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.552 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.553 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.573 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.639 2 DEBUG oslo_concurrency.processutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.806 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.806 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.807 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state deleted and task_state None.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state deleted and task_state None.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Processing event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.810 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with vm_state building and task_state spawning.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-deleted-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.811 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.815 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.8145022, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.815 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Resumed (Lifecycle Event)
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.817 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.826 2 INFO nova.virt.libvirt.driver [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance spawned successfully.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.826 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.849 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.850 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.850 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.942 2 INFO nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 8.87 seconds to spawn the instance on the hypervisor.
Oct 07 14:39:42 compute-0 nova_compute[259550]: 2025-10-07 14:39:42.942 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.018 2 INFO nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 9.93 seconds to build instance.
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.046 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207620396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.115 2 DEBUG oslo_concurrency.processutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.122 2 DEBUG nova.compute.provider_tree [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.146 2 DEBUG nova.scheduler.client.report [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.174 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.209 2 INFO nova.scheduler.client.report [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 9996c6b8-7d50-42b8-9617-2a1ae7d36d30
Oct 07 14:39:43 compute-0 nova_compute[259550]: 2025-10-07 14:39:43.290 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:43 compute-0 ceph-mon[74295]: pgmap v2318: 305 pgs: 305 active+clean; 167 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:39:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/207620396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.712 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.712 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.731 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.788 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.789 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.796 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.796 2 INFO nova.compute.claims [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:39:44 compute-0 nova_compute[259550]: 2025-10-07 14:39:44.927 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:45 compute-0 podman[390976]: 2025-10-07 14:39:45.074164454 +0000 UTC m=+0.061106454 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 07 14:39:45 compute-0 podman[390977]: 2025-10-07 14:39:45.115781735 +0000 UTC m=+0.102700195 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 14:39:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:39:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228171555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.397 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.404 2 DEBUG nova.compute.provider_tree [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.424 2 DEBUG nova.scheduler.client.report [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.444 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.444 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.506 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.506 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.562 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.645 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.684 2 DEBUG nova.policy [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.773 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.774 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.774 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating image(s)
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.797 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.825 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.852 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.856 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.935 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.936 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.937 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.937 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.961 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:39:45 compute-0 nova_compute[259550]: 2025-10-07 14:39:45.965 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:39:46 compute-0 ceph-mon[74295]: pgmap v2319: 305 pgs: 305 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 07 14:39:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4228171555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.334 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.399 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:39:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.513 2 DEBUG nova.objects.instance [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.535 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.535 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Ensure instance console log exists: /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.765 2 DEBUG nova.compute.manager [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG nova.compute.manager [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:46 compute-0 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:47 compute-0 nova_compute[259550]: 2025-10-07 14:39:47.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:48 compute-0 ceph-mon[74295]: pgmap v2320: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Oct 07 14:39:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 07 14:39:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:39:48.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:39:48 compute-0 nova_compute[259550]: 2025-10-07 14:39:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:50 compute-0 ceph-mon[74295]: pgmap v2321: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 07 14:39:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 115 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.576 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully created port: ff77de20-1280-4c30-941d-c53ed7efcbe8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.771 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.772 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:50 compute-0 ovn_controller[151684]: 2025-10-07T14:39:50Z|01305|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:50 compute-0 nova_compute[259550]: 2025-10-07 14:39:50.886 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:51 compute-0 nova_compute[259550]: 2025-10-07 14:39:51.777 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully created port: a40ec757-407d-4375-b756-d4fb8f5664b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:39:51 compute-0 nova_compute[259550]: 2025-10-07 14:39:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:51 compute-0 nova_compute[259550]: 2025-10-07 14:39:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:39:52 compute-0 ceph-mon[74295]: pgmap v2322: 305 pgs: 305 active+clean; 115 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 130 op/s
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:39:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:39:52 compute-0 nova_compute[259550]: 2025-10-07 14:39:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.476 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully updated port: ff77de20-1280-4c30-941d-c53ed7efcbe8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.743 2 DEBUG nova.compute.manager [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.743 2 DEBUG nova.compute.manager [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:53 compute-0 nova_compute[259550]: 2025-10-07 14:39:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:54 compute-0 ceph-mon[74295]: pgmap v2323: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 130 op/s
Oct 07 14:39:54 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 07 14:39:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:39:54 compute-0 nova_compute[259550]: 2025-10-07 14:39:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:54 compute-0 nova_compute[259550]: 2025-10-07 14:39:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:39:55 compute-0 nova_compute[259550]: 2025-10-07 14:39:55.109 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:39:55 compute-0 nova_compute[259550]: 2025-10-07 14:39:55.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847980.6769834, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:39:55 compute-0 nova_compute[259550]: 2025-10-07 14:39:55.679 2 INFO nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Stopped (Lifecycle Event)
Oct 07 14:39:55 compute-0 nova_compute[259550]: 2025-10-07 14:39:55.709 2 DEBUG nova.compute.manager [None req-096cba56-4f96-4827-baa2-dabd8615c138 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:39:55 compute-0 nova_compute[259550]: 2025-10-07 14:39:55.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:56 compute-0 nova_compute[259550]: 2025-10-07 14:39:56.033 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:39:56 compute-0 ceph-mon[74295]: pgmap v2324: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 07 14:39:56 compute-0 nova_compute[259550]: 2025-10-07 14:39:56.055 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:39:56 compute-0 ovn_controller[151684]: 2025-10-07T14:39:56Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:ee:73 10.100.0.7
Oct 07 14:39:56 compute-0 ovn_controller[151684]: 2025-10-07T14:39:56Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:ee:73 10.100.0.7
Oct 07 14:39:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.007 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully updated port: a40ec757-407d-4375-b756-d4fb8f5664b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.184 2 DEBUG nova.compute.manager [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.184 2 DEBUG nova.compute.manager [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-a40ec757-407d-4375-b756-d4fb8f5664b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.185 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.714 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:39:57 compute-0 nova_compute[259550]: 2025-10-07 14:39:57.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:39:58 compute-0 ceph-mon[74295]: pgmap v2325: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 07 14:39:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:39:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.046 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.067 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.068 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance network_info: |[{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.069 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.069 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port a40ec757-407d-4375-b756-d4fb8f5664b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.075 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start _get_guest_xml network_info=[{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.082 2 WARNING nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.098 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.099 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.111 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.112 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.113 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.113 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:40:00 compute-0 ceph-mon[74295]: pgmap v2326: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.116 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.116 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.120 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 07 14:40:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723887806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.578 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.608 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.614 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:00 compute-0 nova_compute[259550]: 2025-10-07 14:40:00.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.014 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:40:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252103378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.053 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.054 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.055 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.056 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.057 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.057 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.058 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.059 2 DEBUG nova.objects.instance [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.077 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <uuid>91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</uuid>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <name>instance-0000007a</name>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1628698219</nova:name>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:40:00</nova:creationTime>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:port uuid="ff77de20-1280-4c30-941d-c53ed7efcbe8">
Oct 07 14:40:01 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <nova:port uuid="a40ec757-407d-4375-b756-d4fb8f5664b4">
Oct 07 14:40:01 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe37:32f2" ipVersion="6"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe37:32f2" ipVersion="6"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <system>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="serial">91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="uuid">91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </system>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <os>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </os>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <features>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </features>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk">
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config">
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:01 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:09:b5:2f"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <target dev="tapff77de20-12"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:37:32:f2"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <target dev="tapa40ec757-40"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/console.log" append="off"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <video>
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </video>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:40:01 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:40:01 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:40:01 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:40:01 compute-0 nova_compute[259550]: </domain>
Oct 07 14:40:01 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.078 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Preparing to wait for external event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.078 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Preparing to wait for external event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.081 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.081 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.082 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.083 2 DEBUG os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff77de20-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff77de20-12, col_values=(('external_ids', {'iface-id': 'ff77de20-1280-4c30-941d-c53ed7efcbe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:b5:2f', 'vm-uuid': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 NetworkManager[44949]: <info>  [1759848001.0955] manager: (tapff77de20-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.106 2 INFO os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12')
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.106 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.107 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.107 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa40ec757-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa40ec757-40, col_values=(('external_ids', {'iface-id': 'a40ec757-407d-4375-b756-d4fb8f5664b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:32:f2', 'vm-uuid': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 NetworkManager[44949]: <info>  [1759848001.1127] manager: (tapa40ec757-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/723887806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3252103378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.124 2 INFO os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40')
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.181 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.182 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.182 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:09:b5:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.183 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:37:32:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.183 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Using config drive
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.210 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.720 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating config drive at /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.725 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hesnxv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.865 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hesnxv2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.895 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:01 compute-0 nova_compute[259550]: 2025-10-07 14:40:01.898 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.089 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.090 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deleting local config drive /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config because it was imported into RBD.
Oct 07 14:40:02 compute-0 ceph-mon[74295]: pgmap v2327: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.1388] manager: (tapff77de20-12): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Oct 07 14:40:02 compute-0 kernel: tapff77de20-12: entered promiscuous mode
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01306|binding|INFO|Claiming lport ff77de20-1280-4c30-941d-c53ed7efcbe8 for this chassis.
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01307|binding|INFO|ff77de20-1280-4c30-941d-c53ed7efcbe8: Claiming fa:16:3e:09:b5:2f 10.100.0.12
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.188 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b5:2f 10.100.0.12'], port_security=['fa:16:3e:09:b5:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ff77de20-1280-4c30-941d-c53ed7efcbe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.189 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ff77de20-1280-4c30-941d-c53ed7efcbe8 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 bound to our chassis
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.190 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.1924] manager: (tapa40ec757-40): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 kernel: tapa40ec757-40: entered promiscuous mode
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01308|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 ovn-installed in OVS
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01309|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 up in Southbound
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01310|if_status|INFO|Dropped 5 log messages in last 92 seconds (most recently, 92 seconds ago) due to excessive rate
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01311|if_status|INFO|Not updating pb chassis for a40ec757-407d-4375-b756-d4fb8f5664b4 now as sb is readonly
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59b9c16c-ec6b-4e47-b689-913434805153]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.207 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb59ffdd2-41 in ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.210 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb59ffdd2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.210 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c90a952-70e4-44e6-96a1-56df6f1551c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01312|binding|INFO|Claiming lport a40ec757-407d-4375-b756-d4fb8f5664b4 for this chassis.
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01313|binding|INFO|a40ec757-407d-4375-b756-d4fb8f5664b4: Claiming fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.212 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port a40ec757-407d-4375-b756-d4fb8f5664b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.212 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad72db80-3736-4c38-91a4-9d7151450d9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01314|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 ovn-installed in OVS
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01315|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 up in Southbound
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.233 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], port_security=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:32f2/64 2001:db8::f816:3eff:fe37:32f2/64', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a40ec757-407d-4375-b756-d4fb8f5664b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.236 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[89e36837-3f59-4977-93ac-3f8eed6c7c29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 systemd-machined[214580]: New machine qemu-153-instance-0000007a.
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.244 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:02 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-0000007a.
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[031c57a4-dd35-4bcc-b71c-fd9c46159151]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 podman[391349]: 2025-10-07 14:40:02.277421049 +0000 UTC m=+0.060404095 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:40:02 compute-0 systemd-udevd[391393]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:02 compute-0 systemd-udevd[391394]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:02 compute-0 podman[391345]: 2025-10-07 14:40:02.293462778 +0000 UTC m=+0.078010285 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.2966] device (tapff77de20-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.2977] device (tapff77de20-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.2983] device (tapa40ec757-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.2989] device (tapa40ec757-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c1acfe7c-f3cb-4b44-8d7e-6ba179c0bd98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.3059] manager: (tapb59ffdd2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.305 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf43075-9cbe-4819-9de6-64d128bdeede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.346 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2db6aabc-032e-4484-ba70-47907b4eb920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.351 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76a092af-cb77-445f-9377-69e9a40c1976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.3779] device (tapb59ffdd2-40): carrier: link connected
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.385 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc434b14-c275-450f-8f6e-9fa60d8b91e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.404 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15849579-cb03-4e80-b385-45ef148d9057]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391423, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d36d09af-0662-4aa1-b8f0-d03b72ca8262]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:3d8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858594, 'tstamp': 858594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391424, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.444 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0680c2-a9e8-4a2f-b5e5-3fb4b433feb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391425, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17e149e0-0d2d-41ad-bb0b-13e1599bf6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a6e517-4cd6-433f-8595-0b6aa2abf7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.542 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 NetworkManager[44949]: <info>  [1759848002.5457] manager: (tapb59ffdd2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Oct 07 14:40:02 compute-0 kernel: tapb59ffdd2-40: entered promiscuous mode
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.553 2 DEBUG nova.compute.manager [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.553 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG nova.compute.manager [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Processing event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.555 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 ovn_controller[151684]: 2025-10-07T14:40:02Z|01316|binding|INFO|Releasing lport 4cc97c0a-633b-48bc-94c4-6f8ac1f61c66 from this chassis (sb_readonly=0)
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.560 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.561 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f15eeb4e-56f9-4eb3-a9d8-914d1d49c6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.562 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b59ffdd2-4285-47f2-a931-fca691d1c031
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b59ffdd2-4285-47f2-a931-fca691d1c031
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:40:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.563 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'env', 'PROCESS_TAG=haproxy-b59ffdd2-4285-47f2-a931-fca691d1c031', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b59ffdd2-4285-47f2-a931-fca691d1c031.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.681 2 INFO nova.compute.manager [None req-42ecc973-ca36-49eb-816a-e1ba07af0f68 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Get console output
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.692 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:02 compute-0 nova_compute[259550]: 2025-10-07 14:40:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.007 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:03 compute-0 podman[391499]: 2025-10-07 14:40:02.922506686 +0000 UTC m=+0.021806734 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:40:03 compute-0 podman[391499]: 2025-10-07 14:40:03.032752372 +0000 UTC m=+0.132052389 container create 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:40:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:03 compute-0 systemd[1]: Started libpod-conmon-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope.
Oct 07 14:40:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25645b5d35c32237e95d967c51a9e71c44938bb7f9072a90ac733ca580ab77f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:03 compute-0 podman[391499]: 2025-10-07 14:40:03.1546644 +0000 UTC m=+0.253964437 container init 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:03 compute-0 podman[391499]: 2025-10-07 14:40:03.160950967 +0000 UTC m=+0.260250984 container start 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:03 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : New worker (391541) forked
Oct 07 14:40:03 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : Loading success.
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.224 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a40ec757-407d-4375-b756-d4fb8f5664b4 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.226 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf2dca0-1171-45a8-949a-a5cd3751c735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.240 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabe90ba0-a1 in ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.242 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabe90ba0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.243 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dee9ac3b-9c5a-4c47-a1f1-31b877db7e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e1d4ef-adaf-4629-b311-558eff1436ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.260 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2f2bb6-6cdf-40cc-970e-a0f708f5789a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.272 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f462fb46-7b87-4353-afdc-de76964ec627]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.302 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e79f9301-355b-4f1c-8f60-3ea8a54decbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 NetworkManager[44949]: <info>  [1759848003.3093] manager: (tapabe90ba0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/533)
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f90168cc-6077-4517-887d-bf09a75b24e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.345 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b20215ed-2447-431b-8835-2ccfee384228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.348 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c60635db-4576-459a-b3fa-b0371aa7265f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.361 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848003.360391, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.361 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Started (Lifecycle Event)
Oct 07 14:40:03 compute-0 NetworkManager[44949]: <info>  [1759848003.3726] device (tapabe90ba0-a0): carrier: link connected
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.378 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae20115-5b08-4751-81e3-64bb941043c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.409 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848003.3606946, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Paused (Lifecycle Event)
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[864d5108-b036-48ad-a66f-ebff4807995f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391560, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.425 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.428 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.430 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd3b273-737a-4f83-9aed-2b8119fd5403]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:a012'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858693, 'tstamp': 858693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391561, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.448 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b11404f0-73be-4eb5-85e4-645eb8926387]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391562, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.477 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[94f8a9d4-a36f-463e-b5a8-8fcc57b36752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737997735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5759d9a5-09cc-4335-b7db-51fee7c9b297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.521 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.522 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.522 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:03 compute-0 kernel: tapabe90ba0-a0: entered promiscuous mode
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:03 compute-0 NetworkManager[44949]: <info>  [1759848003.5253] manager: (tapabe90ba0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:03 compute-0 ovn_controller[151684]: 2025-10-07T14:40:03Z|01317|binding|INFO|Releasing lport 763708cd-58bb-4680-a4f7-042aa711a366 from this chassis (sb_readonly=0)
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.533 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.551 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17c4d571-fa97-4d72-b683-c4aba625bf38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.554 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-abe90ba0-a518-4cef-a49b-de57485faec5
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID abe90ba0-a518-4cef-a49b-de57485faec5
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:40:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.555 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'env', 'PROCESS_TAG=haproxy-abe90ba0-a518-4cef-a49b-de57485faec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abe90ba0-a518-4cef-a49b-de57485faec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.640 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.640 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.645 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.828 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3373MB free_disk=59.92213821411133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.829 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.830 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.926 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:40:03 compute-0 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:40:03 compute-0 podman[391595]: 2025-10-07 14:40:03.968583668 +0000 UTC m=+0.048242760 container create edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:40:04 compute-0 systemd[1]: Started libpod-conmon-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49969737f58c490492c852bd2ea4f9f98b17080447bba92e02b629fbfab611a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:04 compute-0 podman[391595]: 2025-10-07 14:40:03.947606518 +0000 UTC m=+0.027265640 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:40:04 compute-0 podman[391595]: 2025-10-07 14:40:04.047291451 +0000 UTC m=+0.126950563 container init edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:40:04 compute-0 podman[391595]: 2025-10-07 14:40:04.054119534 +0000 UTC m=+0.133778626 container start edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:40:04 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : New worker (391617) forked
Oct 07 14:40:04 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : Loading success.
Oct 07 14:40:04 compute-0 ceph-mon[74295]: pgmap v2328: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Oct 07 14:40:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/737997735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 07 14:40:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417907105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.462 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.467 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.483 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.506 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.507 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.655 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.655 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No event matching network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 in dict_keys([('network-vif-plugged', 'a40ec757-407d-4375-b756-d4fb8f5664b4')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 WARNING nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with vm_state building and task_state spawning.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Processing event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.658 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.658 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.658 2 WARNING nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with vm_state building and task_state spawning.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.659 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.663 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848004.6631212, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.663 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Resumed (Lifecycle Event)
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.666 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.670 2 INFO nova.virt.libvirt.driver [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance spawned successfully.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.670 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.688 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.695 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.699 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.699 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.701 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.761 2 INFO nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 18.99 seconds to spawn the instance on the hypervisor.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.762 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.825 2 INFO nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 20.06 seconds to build instance.
Oct 07 14:40:04 compute-0 nova_compute[259550]: 2025-10-07 14:40:04.842 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1417907105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:06 compute-0 ceph-mon[74295]: pgmap v2329: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.239 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.239 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.240 2 DEBUG nova.objects.instance [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 891 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.855 2 DEBUG nova.objects.instance [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_requests' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:06 compute-0 nova_compute[259550]: 2025-10-07 14:40:06.867 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:40:07 compute-0 nova_compute[259550]: 2025-10-07 14:40:07.116 2 DEBUG nova.policy [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:40:07 compute-0 nova_compute[259550]: 2025-10-07 14:40:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:07 compute-0 nova_compute[259550]: 2025-10-07 14:40:07.856 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully created port: 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:40:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:08 compute-0 ceph-mon[74295]: pgmap v2330: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 891 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.289 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.290 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.323 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:40:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.501 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.501 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.502 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.510 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.511 2 INFO nova.compute.claims [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.531 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:08 compute-0 nova_compute[259550]: 2025-10-07 14:40:08.919 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860519083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.409 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully updated port: 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.426 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.432 2 DEBUG nova.compute.provider_tree [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.463 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.463 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.464 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.472 2 DEBUG nova.scheduler.client.report [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.528 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.529 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.591 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.592 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.618 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.639 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.736 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.738 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.739 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating image(s)
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.764 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.793 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.824 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.827 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.871 2 DEBUG nova.compute.manager [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.872 2 DEBUG nova.compute.manager [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.872 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.918 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.919 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.920 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.920 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.940 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.943 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:09 compute-0 nova_compute[259550]: 2025-10-07 14:40:09.990 2 DEBUG nova.policy [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57b4a09a91e94b4c8417e522a9a10496', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:40:10 compute-0 ceph-mon[74295]: pgmap v2331: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Oct 07 14:40:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/860519083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.306 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.345 2 DEBUG nova.compute.manager [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG nova.compute.manager [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.347 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.395 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] resizing rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:40:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.490 2 DEBUG nova.objects.instance [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Ensure instance console log exists: /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.606 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:10 compute-0 nova_compute[259550]: 2025-10-07 14:40:10.606 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:11 compute-0 nova_compute[259550]: 2025-10-07 14:40:11.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:11 compute-0 nova_compute[259550]: 2025-10-07 14:40:11.463 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Successfully created port: 4eb92b42-4298-4d8e-8455-b37a2972c583 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:40:12 compute-0 ceph-mon[74295]: pgmap v2332: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 07 14:40:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 170 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 77 op/s
Oct 07 14:40:12 compute-0 nova_compute[259550]: 2025-10-07 14:40:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.276 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.277 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.296 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.310 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.327 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.328 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.329 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.332 2 DEBUG nova.virt.libvirt.vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.332 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.333 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.334 2 DEBUG os_vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359e9c20-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap359e9c20-ec, col_values=(('external_ids', {'iface-id': '359e9c20-ec4d-4bc9-bfc1-93f3464bf09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:75:78', 'vm-uuid': 'd0b10640-5492-4d8f-8b94-a49a15b6e702'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.3419] manager: (tap359e9c20-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.349 2 INFO os_vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec')
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.350 2 DEBUG nova.virt.libvirt.vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.350 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.351 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.355 2 DEBUG nova.virt.libvirt.guest [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] attach device xml: <interface type="ethernet">
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:98:75:78"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <target dev="tap359e9c20-ec"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]: </interface>
Oct 07 14:40:13 compute-0 nova_compute[259550]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 07 14:40:13 compute-0 kernel: tap359e9c20-ec: entered promiscuous mode
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.3672] manager: (tap359e9c20-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 ovn_controller[151684]: 2025-10-07T14:40:13Z|01318|binding|INFO|Claiming lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for this chassis.
Oct 07 14:40:13 compute-0 ovn_controller[151684]: 2025-10-07T14:40:13Z|01319|binding|INFO|359e9c20-ec4d-4bc9-bfc1-93f3464bf09b: Claiming fa:16:3e:98:75:78 10.100.0.23
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.389 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:75:78 10.100.0.23'], port_security=['fa:16:3e:98:75:78 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.390 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 bound to our chassis
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.392 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct 07 14:40:13 compute-0 systemd-udevd[391842]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.410 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[907144ef-4e05-4051-a7eb-db065d335514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.413 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcae6e154-51 in ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.4156] device (tap359e9c20-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.4167] device (tap359e9c20-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.414 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcae6e154-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.414 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[278d0e7e-f498-48ef-9da4-78659ddd019e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.419 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[894c8ca5-a711-49b6-94d6-df98765ed8cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_controller[151684]: 2025-10-07T14:40:13Z|01320|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b ovn-installed in OVS
Oct 07 14:40:13 compute-0 ovn_controller[151684]: 2025-10-07T14:40:13Z|01321|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b up in Southbound
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.436 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[309d8a80-a4b6-4587-bcf5-36d0414b925b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bed1ee1f-9e70-4f62-a921-e1384b0af8a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.477 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Successfully updated port: 4eb92b42-4298-4d8e-8455-b37a2972c583 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.482 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35de56b9-7626-4ffe-b9e7-1c1b63686ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 systemd-udevd[391844]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.4894] manager: (tapcae6e154-50): new Veth device (/org/freedesktop/NetworkManager/Devices/537)
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.490 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be9b78e3-180d-4e5f-98a5-bcf98c94fc51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.534 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[64cc0a56-9f08-4509-8f0d-08820ac65678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.538 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9200b11a-269c-4f14-a274-b6972360a0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.5618] device (tapcae6e154-50): carrier: link connected
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b50c694-20e1-4b7c-837c-9929b73c758c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.586 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.587 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.588 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:6d:ee:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.588 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:98:75:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.594 2 DEBUG nova.compute.manager [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-changed-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.594 2 DEBUG nova.compute.manager [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Refreshing instance network info cache due to event network-changed-4eb92b42-4298-4d8e-8455-b37a2972c583. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.595 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a182ef1-9279-4a01-9cc5-c1ab3fee0d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391868, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.617 2 DEBUG nova.virt.libvirt.guest [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct 07 14:40:13 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct 07 14:40:13 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 07 14:40:13 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:40:13 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:40:13 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:40:13 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37cbebd8-6509-4532-ab84-5c5649807691]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:47d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859712, 'tstamp': 859712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391869, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.645 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23722516-8cc4-4175-b335-6b8b9ce5f4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391870, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e21d1c1-7140-4c89-89c1-57c6ba25de0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.741 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eddb9b43-ad8a-40eb-bb85-04c4a80a37e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 NetworkManager[44949]: <info>  [1759848013.7457] manager: (tapcae6e154-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct 07 14:40:13 compute-0 kernel: tapcae6e154-50: entered promiscuous mode
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.749 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:13 compute-0 ovn_controller[151684]: 2025-10-07T14:40:13Z|01322|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.758 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.771 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28395a54-f7cd-474e-8707-e1c6de6ba3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.772 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:40:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.773 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'env', 'PROCESS_TAG=haproxy-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cae6e154-5797-4df5-a9e8-545cc6ed0188.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.845 2 DEBUG nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.845 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:13 compute-0 nova_compute[259550]: 2025-10-07 14:40:13.846 2 WARNING nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.
Oct 07 14:40:14 compute-0 podman[391902]: 2025-10-07 14:40:14.127937671 +0000 UTC m=+0.063843718 container create 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:40:14 compute-0 systemd[1]: Started libpod-conmon-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope.
Oct 07 14:40:14 compute-0 podman[391902]: 2025-10-07 14:40:14.093889666 +0000 UTC m=+0.029795763 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:40:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41d1aa38f8abe0fef319a4a94399d666f0b1094fa534bf42c28aa629de3903/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:14 compute-0 podman[391902]: 2025-10-07 14:40:14.213744951 +0000 UTC m=+0.149651018 container init 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:40:14 compute-0 podman[391902]: 2025-10-07 14:40:14.219071103 +0000 UTC m=+0.154977160 container start 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:40:14 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : New worker (391923) forked
Oct 07 14:40:14 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : Loading success.
Oct 07 14:40:14 compute-0 ceph-mon[74295]: pgmap v2333: 305 pgs: 305 active+clean; 170 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 77 op/s
Oct 07 14:40:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:40:15 compute-0 ovn_controller[151684]: 2025-10-07T14:40:15Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:75:78 10.100.0.23
Oct 07 14:40:15 compute-0 ovn_controller[151684]: 2025-10-07T14:40:15Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:75:78 10.100.0.23
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.823 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.825 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:40:15 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.829 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.969 2 DEBUG nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:15 compute-0 nova_compute[259550]: 2025-10-07 14:40:15.971 2 WARNING nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.
Oct 07 14:40:16 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 07 14:40:16 compute-0 podman[391932]: 2025-10-07 14:40:16.082074553 +0000 UTC m=+0.067049234 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:40:16 compute-0 podman[391933]: 2025-10-07 14:40:16.121412308 +0000 UTC m=+0.102708850 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 14:40:16 compute-0 ceph-mon[74295]: pgmap v2334: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 07 14:40:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.481 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.504 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance network_info: |[{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Refreshing network info cache for port 4eb92b42-4298-4d8e-8455-b37a2972c583 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.508 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start _get_guest_xml network_info=[{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.512 2 WARNING nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.518 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.519 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.525 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.526 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.527 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.528 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.533 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.578 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.579 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.599 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554996707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:16 compute-0 nova_compute[259550]: 2025-10-07 14:40:16.989 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.016 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.022 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2554996707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320746890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.485 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.487 2 DEBUG nova.virt.libvirt.vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:09Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.488 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.489 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.490 2 DEBUG nova.objects.instance [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.505 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <uuid>0ec8fffb-cb39-4dd3-88b8-41467b24be13</uuid>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <name>instance-0000007b</name>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:name>tempest-TestServerAdvancedOps-server-1564405251</nova:name>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:40:16</nova:creationTime>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:user uuid="57b4a09a91e94b4c8417e522a9a10496">tempest-TestServerAdvancedOps-1206359101-project-member</nova:user>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:project uuid="c1c9329037b74c90a5df2b4a0e0afe75">tempest-TestServerAdvancedOps-1206359101</nova:project>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <nova:port uuid="4eb92b42-4298-4d8e-8455-b37a2972c583">
Oct 07 14:40:17 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <system>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="serial">0ec8fffb-cb39-4dd3-88b8-41467b24be13</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="uuid">0ec8fffb-cb39-4dd3-88b8-41467b24be13</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </system>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <os>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </os>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <features>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </features>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk">
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config">
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:17 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:ee:3b:52"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <target dev="tap4eb92b42-42"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/console.log" append="off"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <video>
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </video>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:40:17 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:40:17 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:40:17 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:40:17 compute-0 nova_compute[259550]: </domain>
Oct 07 14:40:17 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Preparing to wait for external event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.507 2 DEBUG nova.virt.libvirt.vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:09Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.508 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.508 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.509 2 DEBUG os_vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:17 compute-0 NetworkManager[44949]: <info>  [1759848017.5178] manager: (tap4eb92b42-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.525 2 INFO os_vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.624 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.625 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.625 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No VIF found with MAC fa:16:3e:ee:3b:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.625 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Using config drive
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.652 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:17 compute-0 nova_compute[259550]: 2025-10-07 14:40:17.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:18 compute-0 ovn_controller[151684]: 2025-10-07T14:40:18Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:b5:2f 10.100.0.12
Oct 07 14:40:18 compute-0 ovn_controller[151684]: 2025-10-07T14:40:18Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:b5:2f 10.100.0.12
Oct 07 14:40:18 compute-0 ceph-mon[74295]: pgmap v2335: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 07 14:40:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/320746890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 07 14:40:18 compute-0 nova_compute[259550]: 2025-10-07 14:40:18.725 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating config drive at /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config
Oct 07 14:40:18 compute-0 nova_compute[259550]: 2025-10-07 14:40:18.736 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesn4y3rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:18 compute-0 nova_compute[259550]: 2025-10-07 14:40:18.886 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesn4y3rw" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:18 compute-0 nova_compute[259550]: 2025-10-07 14:40:18.915 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:18 compute-0 nova_compute[259550]: 2025-10-07 14:40:18.919 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.029 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updated VIF entry in instance network info cache for port 4eb92b42-4298-4d8e-8455-b37a2972c583. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.030 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.046 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.092 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.092 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deleting local config drive /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config because it was imported into RBD.
Oct 07 14:40:19 compute-0 kernel: tap4eb92b42-42: entered promiscuous mode
Oct 07 14:40:19 compute-0 NetworkManager[44949]: <info>  [1759848019.1477] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:19 compute-0 ovn_controller[151684]: 2025-10-07T14:40:19Z|01323|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct 07 14:40:19 compute-0 ovn_controller[151684]: 2025-10-07T14:40:19Z|01324|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct 07 14:40:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.160 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.162 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis
Oct 07 14:40:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.163 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:19 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.164 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e757823b-d62d-4c95-9464-f4817068cf43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:19 compute-0 systemd-machined[214580]: New machine qemu-154-instance-0000007b.
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:19 compute-0 systemd-udevd[392110]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:19 compute-0 ovn_controller[151684]: 2025-10-07T14:40:19Z|01325|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct 07 14:40:19 compute-0 ovn_controller[151684]: 2025-10-07T14:40:19Z|01326|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:19 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007b.
Oct 07 14:40:19 compute-0 NetworkManager[44949]: <info>  [1759848019.2179] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:19 compute-0 NetworkManager[44949]: <info>  [1759848019.2190] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG nova.compute.manager [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.945 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.945 2 DEBUG nova.compute.manager [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Processing event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.964 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.964502, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.965 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.967 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.972 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.976 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance spawned successfully.
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.976 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.983 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:19 compute-0 nova_compute[259550]: 2025-10-07 14:40:19.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.002 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.002 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.004 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.008 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.008 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.9653676, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.009 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.034 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.038 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.9695575, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.038 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.060 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.074 2 INFO nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 10.34 seconds to spawn the instance on the hypervisor.
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.074 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.245 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.282 2 INFO nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 11.80 seconds to build instance.
Oct 07 14:40:20 compute-0 ceph-mon[74295]: pgmap v2336: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 07 14:40:20 compute-0 nova_compute[259550]: 2025-10-07 14:40:20.311 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 237 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 07 14:40:20 compute-0 sudo[392161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:20 compute-0 sudo[392161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:20 compute-0 sudo[392161]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:20 compute-0 sudo[392186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:40:20 compute-0 sudo[392186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:20 compute-0 sudo[392186]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:20 compute-0 sudo[392211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:20 compute-0 sudo[392211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:20 compute-0 sudo[392211]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:20 compute-0 sudo[392236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:40:20 compute-0 sudo[392236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:21 compute-0 sudo[392236]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 40ec1401-c665-46d2-8502-4453c0a3e91d does not exist
Oct 07 14:40:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 2ef472cc-49a8-4483-8789-7acfd3609929 does not exist
Oct 07 14:40:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8de346ad-bce1-42af-94be-eadab2bc2db1 does not exist
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:40:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:40:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:40:21 compute-0 sudo[392292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:21 compute-0 sudo[392292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:21 compute-0 sudo[392292]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:21 compute-0 sudo[392317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:40:21 compute-0 sudo[392317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:21 compute-0 sudo[392317]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:21 compute-0 sudo[392342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:21 compute-0 sudo[392342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:21 compute-0 sudo[392342]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:21 compute-0 sudo[392367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:40:21 compute-0 sudo[392367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.026 2 DEBUG nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.027 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.028 2 WARNING nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.205641929 +0000 UTC m=+0.039823869 container create fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:22 compute-0 systemd[1]: Started libpod-conmon-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope.
Oct 07 14:40:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.187780775 +0000 UTC m=+0.021962735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.292425375 +0000 UTC m=+0.126607315 container init fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.299192625 +0000 UTC m=+0.133374555 container start fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.302326238 +0000 UTC m=+0.136508198 container attach fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:22 compute-0 objective_roentgen[392448]: 167 167
Oct 07 14:40:22 compute-0 systemd[1]: libpod-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope: Deactivated successfully.
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.304176128 +0000 UTC m=+0.138358058 container died fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:40:22 compute-0 ceph-mon[74295]: pgmap v2337: 305 pgs: 305 active+clean; 237 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:40:22 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:40:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-617ab9fd97dfe0c8a47757def16a21685a4c7070b49f3d43037667d1b4efc11b-merged.mount: Deactivated successfully.
Oct 07 14:40:22 compute-0 podman[392432]: 2025-10-07 14:40:22.364883982 +0000 UTC m=+0.199065912 container remove fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:40:22 compute-0 systemd[1]: libpod-conmon-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope: Deactivated successfully.
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:22 compute-0 podman[392475]: 2025-10-07 14:40:22.589491762 +0000 UTC m=+0.043294562 container create 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:40:22 compute-0 systemd[1]: Started libpod-conmon-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope.
Oct 07 14:40:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:22 compute-0 podman[392475]: 2025-10-07 14:40:22.572603322 +0000 UTC m=+0.026406142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:22 compute-0 podman[392475]: 2025-10-07 14:40:22.686971893 +0000 UTC m=+0.140774703 container init 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:22 compute-0 podman[392475]: 2025-10-07 14:40:22.697008519 +0000 UTC m=+0.150811319 container start 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:40:22 compute-0 podman[392475]: 2025-10-07 14:40:22.703957464 +0000 UTC m=+0.157760364 container attach 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:40:22
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'images']
Oct 07 14:40:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.905 2 DEBUG nova.objects.instance [None req-828e63df-bb74-43ea-9e90-4889af12f1dd 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.936 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848022.9367168, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.937 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct 07 14:40:22 compute-0 nova_compute[259550]: 2025-10-07 14:40:22.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:23 compute-0 nova_compute[259550]: 2025-10-07 14:40:23.000 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:23 compute-0 nova_compute[259550]: 2025-10-07 14:40:23.034 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:40:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:40:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:40:23 compute-0 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct 07 14:40:23 compute-0 NetworkManager[44949]: <info>  [1759848023.4525] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:40:23 compute-0 ceph-mon[74295]: pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Oct 07 14:40:23 compute-0 ovn_controller[151684]: 2025-10-07T14:40:23Z|01327|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct 07 14:40:23 compute-0 ovn_controller[151684]: 2025-10-07T14:40:23Z|01328|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct 07 14:40:23 compute-0 ovn_controller[151684]: 2025-10-07T14:40:23Z|01329|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct 07 14:40:23 compute-0 nova_compute[259550]: 2025-10-07 14:40:23.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:23 compute-0 nova_compute[259550]: 2025-10-07 14:40:23.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:23 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 07 14:40:23 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Consumed 3.824s CPU time.
Oct 07 14:40:23 compute-0 systemd-machined[214580]: Machine qemu-154-instance-0000007b terminated.
Oct 07 14:40:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.543 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.544 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis
Oct 07 14:40:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.545 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdc5ca9-be2e-4215-8d88-68685a2422b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:23 compute-0 nova_compute[259550]: 2025-10-07 14:40:23.642 2 DEBUG nova.compute.manager [None req-828e63df-bb74-43ea-9e90-4889af12f1dd 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:23 compute-0 competent_diffie[392492]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:40:23 compute-0 competent_diffie[392492]: --> relative data size: 1.0
Oct 07 14:40:23 compute-0 competent_diffie[392492]: --> All data devices are unavailable
Oct 07 14:40:23 compute-0 systemd[1]: libpod-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Deactivated successfully.
Oct 07 14:40:23 compute-0 systemd[1]: libpod-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Consumed 1.028s CPU time.
Oct 07 14:40:23 compute-0 podman[392475]: 2025-10-07 14:40:23.794747518 +0000 UTC m=+1.248550318 container died 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32-merged.mount: Deactivated successfully.
Oct 07 14:40:23 compute-0 podman[392475]: 2025-10-07 14:40:23.901687791 +0000 UTC m=+1.355490591 container remove 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:40:23 compute-0 systemd[1]: libpod-conmon-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Deactivated successfully.
Oct 07 14:40:23 compute-0 sudo[392367]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:23 compute-0 sudo[392554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:23 compute-0 sudo[392554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:24 compute-0 sudo[392554]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:24 compute-0 sudo[392579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:40:24 compute-0 sudo[392579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:24 compute-0 sudo[392579]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:24 compute-0 sudo[392604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:24 compute-0 sudo[392604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:24 compute-0 sudo[392604]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:24 compute-0 sudo[392629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:40:24 compute-0 sudo[392629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.219 2 DEBUG nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.219 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:24 compute-0 nova_compute[259550]: 2025-10-07 14:40:24.220 2 WARNING nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.
Oct 07 14:40:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 163 op/s
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.514769804 +0000 UTC m=+0.020012763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.62560979 +0000 UTC m=+0.130852739 container create 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:40:24 compute-0 systemd[1]: Started libpod-conmon-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope.
Oct 07 14:40:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.763549987 +0000 UTC m=+0.268792966 container init 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.771620501 +0000 UTC m=+0.276863460 container start 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:40:24 compute-0 priceless_joliot[392710]: 167 167
Oct 07 14:40:24 compute-0 systemd[1]: libpod-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope: Deactivated successfully.
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.777023155 +0000 UTC m=+0.282266104 container attach 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:40:24 compute-0 conmon[392710]: conmon 9e0788ffb871bc16e730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope/container/memory.events
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.77872604 +0000 UTC m=+0.283969009 container died 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-79527061b0b371aa6d469abf3493c4f3ff812c9d1ab538942e2b0417b4a2f60a-merged.mount: Deactivated successfully.
Oct 07 14:40:24 compute-0 podman[392694]: 2025-10-07 14:40:24.823160422 +0000 UTC m=+0.328403371 container remove 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:40:24 compute-0 systemd[1]: libpod-conmon-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope: Deactivated successfully.
Oct 07 14:40:25 compute-0 podman[392732]: 2025-10-07 14:40:25.018582216 +0000 UTC m=+0.046095337 container create 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:40:25 compute-0 systemd[1]: Started libpod-conmon-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope.
Oct 07 14:40:25 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:25 compute-0 podman[392732]: 2025-10-07 14:40:24.998681177 +0000 UTC m=+0.026194318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:25 compute-0 podman[392732]: 2025-10-07 14:40:25.107028548 +0000 UTC m=+0.134541689 container init 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:40:25 compute-0 podman[392732]: 2025-10-07 14:40:25.114079855 +0000 UTC m=+0.141592976 container start 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:40:25 compute-0 podman[392732]: 2025-10-07 14:40:25.117456774 +0000 UTC m=+0.144969895 container attach 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.361 2 INFO nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Resuming
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.363 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'flavor' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.441 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.442 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.442 2 DEBUG nova.network.neutron [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:25 compute-0 ceph-mon[74295]: pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 163 op/s
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.646 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.646 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.750 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.864 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.865 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.875 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:40:25 compute-0 nova_compute[259550]: 2025-10-07 14:40:25.875 2 INFO nova.compute.claims [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:40:25 compute-0 peaceful_agnesi[392749]: {
Oct 07 14:40:25 compute-0 peaceful_agnesi[392749]:     "0": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "devices": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "/dev/loop3"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             ],
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_name": "ceph_lv0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_size": "21470642176",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "name": "ceph_lv0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "tags": {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_name": "ceph",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.crush_device_class": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.encrypted": "0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_id": "0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.vdo": "0"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             },
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "vg_name": "ceph_vg0"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         }
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:     ],
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:     "1": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "devices": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "/dev/loop4"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             ],
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_name": "ceph_lv1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_size": "21470642176",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "name": "ceph_lv1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "tags": {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_name": "ceph",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.crush_device_class": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.encrypted": "0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_id": "1",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.vdo": "0"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             },
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "vg_name": "ceph_vg1"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         }
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:     ],
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:     "2": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "devices": [
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "/dev/loop5"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             ],
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_name": "ceph_lv2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_size": "21470642176",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "name": "ceph_lv2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "tags": {
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.cluster_name": "ceph",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.crush_device_class": "",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.encrypted": "0",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osd_id": "2",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:                 "ceph.vdo": "0"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             },
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "type": "block",
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:             "vg_name": "ceph_vg2"
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:         }
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]:     ]
Oct 07 14:40:26 compute-0 peaceful_agnesi[392749]: }
Oct 07 14:40:26 compute-0 systemd[1]: libpod-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope: Deactivated successfully.
Oct 07 14:40:26 compute-0 podman[392732]: 2025-10-07 14:40:26.028448701 +0000 UTC m=+1.055961842 container died 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14-merged.mount: Deactivated successfully.
Oct 07 14:40:26 compute-0 podman[392732]: 2025-10-07 14:40:26.083634577 +0000 UTC m=+1.111147698 container remove 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:40:26 compute-0 systemd[1]: libpod-conmon-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope: Deactivated successfully.
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.110 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:26 compute-0 sudo[392629]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:26 compute-0 sudo[392771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:26 compute-0 sudo[392771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:26 compute-0 sudo[392771]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:26 compute-0 sudo[392797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:40:26 compute-0 sudo[392797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:26 compute-0 sudo[392797]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:26 compute-0 sudo[392832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:26 compute-0 sudo[392832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:26 compute-0 sudo[392832]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.323 2 DEBUG nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.324 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.325 2 WARNING nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state resuming.
Oct 07 14:40:26 compute-0 sudo[392866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:40:26 compute-0 sudo[392866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.474 2 DEBUG nova.network.neutron [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.514 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.524 2 DEBUG nova.virt.libvirt.vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:23Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.525 2 DEBUG nova.network.os_vif_util [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.526 2 DEBUG nova.network.os_vif_util [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.527 2 DEBUG os_vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.536 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533922842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.542 2 INFO os_vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.561 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.564 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.568 2 DEBUG nova.compute.provider_tree [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.718 2 DEBUG nova.scheduler.client.report [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.721474302 +0000 UTC m=+0.047618617 container create 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:40:26 compute-0 kernel: tap4eb92b42-42: entered promiscuous mode
Oct 07 14:40:26 compute-0 NetworkManager[44949]: <info>  [1759848026.7554] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/541)
Oct 07 14:40:26 compute-0 ovn_controller[151684]: 2025-10-07T14:40:26Z|01330|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct 07 14:40:26 compute-0 ovn_controller[151684]: 2025-10-07T14:40:26Z|01331|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct 07 14:40:26 compute-0 systemd[1]: Started libpod-conmon-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope.
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.696787245 +0000 UTC m=+0.022931590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:26 compute-0 ovn_controller[151684]: 2025-10-07T14:40:26Z|01332|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:26 compute-0 ovn_controller[151684]: 2025-10-07T14:40:26Z|01333|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct 07 14:40:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.803 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.804 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis
Oct 07 14:40:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.805 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.806 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8053eb-c940-4b5c-9718-48e541af15d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.809 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.809 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:40:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:26 compute-0 systemd-machined[214580]: New machine qemu-155-instance-0000007b.
Oct 07 14:40:26 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.829506663 +0000 UTC m=+0.155650978 container init 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:40:26 compute-0 systemd-udevd[392968]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.837604978 +0000 UTC m=+0.163749273 container start 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.8410561 +0000 UTC m=+0.167200405 container attach 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:40:26 compute-0 hopeful_kepler[392961]: 167 167
Oct 07 14:40:26 compute-0 systemd[1]: libpod-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope: Deactivated successfully.
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.844267385 +0000 UTC m=+0.170411670 container died 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:40:26 compute-0 NetworkManager[44949]: <info>  [1759848026.8498] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:26 compute-0 NetworkManager[44949]: <info>  [1759848026.8505] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c9e90ccfec0d5c3df0bc0e8757c3e784c66ec1758d81131d7216d57d7dbbd7-merged.mount: Deactivated successfully.
Oct 07 14:40:26 compute-0 podman[392935]: 2025-10-07 14:40:26.885667245 +0000 UTC m=+0.211811530 container remove 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:40:26 compute-0 systemd[1]: libpod-conmon-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope: Deactivated successfully.
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.996 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:40:26 compute-0 nova_compute[259550]: 2025-10-07 14:40:26.998 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.073 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:40:27 compute-0 podman[392997]: 2025-10-07 14:40:27.084009968 +0000 UTC m=+0.042340267 container create ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.107 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:40:27 compute-0 systemd[1]: Started libpod-conmon-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope.
Oct 07 14:40:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:40:27 compute-0 podman[392997]: 2025-10-07 14:40:27.067179661 +0000 UTC m=+0.025509980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:40:27 compute-0 podman[392997]: 2025-10-07 14:40:27.174749709 +0000 UTC m=+0.133080028 container init ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:40:27 compute-0 podman[392997]: 2025-10-07 14:40:27.183662777 +0000 UTC m=+0.141993076 container start ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 14:40:27 compute-0 podman[392997]: 2025-10-07 14:40:27.188071144 +0000 UTC m=+0.146401463 container attach ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.203 2 DEBUG nova.policy [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.282 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.284 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.285 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating image(s)
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.304 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.326 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.349 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.353 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.426 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.427 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.428 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.428 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.447 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.450 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ca309873-104c-4cd4-a609-686d61823e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:27 compute-0 ceph-mon[74295]: pgmap v2340: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 07 14:40:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3533922842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.877 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 0ec8fffb-cb39-4dd3-88b8-41467b24be13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.878 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848027.877388, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.879 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.894 2 DEBUG nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.895 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.915 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ca309873-104c-4cd4-a609-686d61823e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.951 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:27 compute-0 nova_compute[259550]: 2025-10-07 14:40:27.995 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.033 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance running successfully.
Oct 07 14:40:28 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.039 2 DEBUG nova.virt.libvirt.guest [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.039 2 DEBUG nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.229 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848027.8832893, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.230 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]: {
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_id": 2,
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "type": "bluestore"
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     },
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_id": 1,
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "type": "bluestore"
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     },
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_id": 0,
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:         "type": "bluestore"
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]:     }
Oct 07 14:40:28 compute-0 nostalgic_pare[393014]: }
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.281 2 DEBUG nova.objects.instance [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:28 compute-0 systemd[1]: libpod-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Deactivated successfully.
Oct 07 14:40:28 compute-0 systemd[1]: libpod-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Consumed 1.061s CPU time.
Oct 07 14:40:28 compute-0 podman[392997]: 2025-10-07 14:40:28.29590921 +0000 UTC m=+1.254239509 container died ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.314 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400-merged.mount: Deactivated successfully.
Oct 07 14:40:28 compute-0 podman[392997]: 2025-10-07 14:40:28.351128279 +0000 UTC m=+1.309458578 container remove ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:40:28 compute-0 systemd[1]: libpod-conmon-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Deactivated successfully.
Oct 07 14:40:28 compute-0 sudo[392866]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:40:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:40:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d0e02ef3-a706-48ae-adcb-1c8bbd1a7483 does not exist
Oct 07 14:40:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b9e03ee4-b2cf-4ffb-91dc-7367fb323ae3 does not exist
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.433 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.433 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Ensure instance console log exists: /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.434 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.434 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.435 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:40:28 compute-0 sudo[393266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:40:28 compute-0 sudo[393266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:28 compute-0 sudo[393266]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:28 compute-0 sudo[393291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:40:28 compute-0 sudo[393291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:40:28 compute-0 sudo[393291]: pam_unix(sudo:session): session closed for user root
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.905 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Successfully created port: bcef16bc-28e8-4e97-a50b-7625a1917ee5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.963 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 WARNING nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:28 compute-0 nova_compute[259550]: 2025-10-07 14:40:28.966 2 WARNING nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct 07 14:40:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:40:29 compute-0 ceph-mon[74295]: pgmap v2341: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.445 2 DEBUG nova.objects.instance [None req-b626a2ac-b30c-4f1f-84d0-77740664bffe 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.471 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848029.4714851, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.472 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.505 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.702 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.702 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.719 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.840 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.840 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.857 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.857 2 INFO nova.compute.claims [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:40:29 compute-0 nova_compute[259550]: 2025-10-07 14:40:29.993 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Successfully updated port: bcef16bc-28e8-4e97-a50b-7625a1917ee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:40:30 compute-0 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct 07 14:40:30 compute-0 NetworkManager[44949]: <info>  [1759848030.0034] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:30 compute-0 ovn_controller[151684]: 2025-10-07T14:40:30Z|01334|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct 07 14:40:30 compute-0 ovn_controller[151684]: 2025-10-07T14:40:30Z|01335|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct 07 14:40:30 compute-0 ovn_controller[151684]: 2025-10-07T14:40:30Z|01336|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:30 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 07 14:40:30 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 2.520s CPU time.
Oct 07 14:40:30 compute-0 systemd-machined[214580]: Machine qemu-155-instance-0000007b terminated.
Oct 07 14:40:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.093 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.094 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis
Oct 07 14:40:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.095 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1da09992-9f05-4094-a7eb-c68ba0323da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.152 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.153 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.153 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.155 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.203 2 DEBUG nova.compute.manager [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-changed-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.204 2 DEBUG nova.compute.manager [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Refreshing instance network info cache due to event network-changed-bcef16bc-28e8-4e97-a50b-7625a1917ee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.204 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.206 2 DEBUG nova.compute.manager [None req-b626a2ac-b30c-4f1f-84d0-77740664bffe 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.356 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:40:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 259 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Oct 07 14:40:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889561579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.631 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.638 2 DEBUG nova.compute.provider_tree [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.656 2 DEBUG nova.scheduler.client.report [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:40:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2889561579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.759 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.760 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.811 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.811 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.851 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:40:30 compute-0 nova_compute[259550]: 2025-10-07 14:40:30.887 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.038 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.039 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.040 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating image(s)
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.062 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.088 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.112 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.117 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.165 2 DEBUG nova.policy [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.206 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.207 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.208 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.209 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.234 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.238 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52ccd902-898f-4809-a231-be5760626c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.491 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.491 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.493 2 WARNING nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.494 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.494 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.494 2 WARNING nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.533 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.581 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance network_info: |[{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Refreshing network info cache for port bcef16bc-28e8-4e97-a50b-7625a1917ee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.586 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start _get_guest_xml network_info=[{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.590 2 WARNING nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.597 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.597 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.602 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.606 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.609 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.681 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52ccd902-898f-4809-a231-be5760626c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:31 compute-0 ceph-mon[74295]: pgmap v2342: 305 pgs: 305 active+clean; 259 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.739 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.839 2 DEBUG nova.objects.instance [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.855 2 INFO nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Resuming
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.856 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'flavor' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.904 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.904 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Ensure instance console log exists: /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:31 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:31.999 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.000 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.000 2 DEBUG nova.network.neutron [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244256990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.106 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.127 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.131 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906102693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.614 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.616 2 DEBUG nova.virt.libvirt.vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:27Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.616 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.617 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.619 2 DEBUG nova.objects.instance [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022150551255127955 of space, bias 1.0, pg target 0.6645165376538387 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:40:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.668 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <uuid>ca309873-104c-4cd4-a609-686d61823e0f</uuid>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <name>instance-0000007c</name>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-2008963535</nova:name>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:40:31</nova:creationTime>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <nova:port uuid="bcef16bc-28e8-4e97-a50b-7625a1917ee5">
Oct 07 14:40:32 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <system>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="serial">ca309873-104c-4cd4-a609-686d61823e0f</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="uuid">ca309873-104c-4cd4-a609-686d61823e0f</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </system>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <os>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </os>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <features>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </features>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ca309873-104c-4cd4-a609-686d61823e0f_disk">
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/ca309873-104c-4cd4-a609-686d61823e0f_disk.config">
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:32 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:17:5f:df"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <target dev="tapbcef16bc-28"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/console.log" append="off"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <video>
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </video>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:40:32 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:40:32 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:40:32 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:40:32 compute-0 nova_compute[259550]: </domain>
Oct 07 14:40:32 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.669 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Preparing to wait for external event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.669 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.670 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.670 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.virt.libvirt.vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:27Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.672 2 DEBUG os_vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcef16bc-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcef16bc-28, col_values=(('external_ids', {'iface-id': 'bcef16bc-28e8-4e97-a50b-7625a1917ee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:5f:df', 'vm-uuid': 'ca309873-104c-4cd4-a609-686d61823e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 NetworkManager[44949]: <info>  [1759848032.6803] manager: (tapbcef16bc-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.690 2 INFO os_vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28')
Oct 07 14:40:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4244256990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2906102693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:40:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:40:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:40:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:40:32 compute-0 podman[393592]: 2025-10-07 14:40:32.794109344 +0000 UTC m=+0.066211840 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:40:32 compute-0 podman[393591]: 2025-10-07 14:40:32.815741299 +0000 UTC m=+0.088217626 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:17:5f:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.833 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Using config drive
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.854 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:32 compute-0 nova_compute[259550]: 2025-10-07 14:40:32.995 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully created port: b4567457-8da4-42a7-b4c0-42724b2c0bc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:40:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.364 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating config drive at /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.369 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqm9b6xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.457 2 DEBUG nova.network.neutron [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.485 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.491 2 DEBUG nova.virt.libvirt.vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.492 2 DEBUG nova.network.os_vif_util [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.493 2 DEBUG nova.network.os_vif_util [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.493 2 DEBUG os_vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.500 2 INFO os_vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.523 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqm9b6xh" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.551 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.555 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config ca309873-104c-4cd4-a609-686d61823e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.599 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:33 compute-0 NetworkManager[44949]: <info>  [1759848033.6646] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Oct 07 14:40:33 compute-0 kernel: tap4eb92b42-42: entered promiscuous mode
Oct 07 14:40:33 compute-0 ovn_controller[151684]: 2025-10-07T14:40:33Z|01337|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct 07 14:40:33 compute-0 ovn_controller[151684]: 2025-10-07T14:40:33Z|01338|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.677 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.678 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis
Oct 07 14:40:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.678 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dea77ccd-2179-4a5c-90f9-023865dfe416]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 ovn_controller[151684]: 2025-10-07T14:40:33Z|01339|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct 07 14:40:33 compute-0 ovn_controller[151684]: 2025-10-07T14:40:33Z|01340|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:33 compute-0 systemd-udevd[393700]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:33 compute-0 systemd-machined[214580]: New machine qemu-156-instance-0000007b.
Oct 07 14:40:33 compute-0 NetworkManager[44949]: <info>  [1759848033.7219] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:33 compute-0 NetworkManager[44949]: <info>  [1759848033.7231] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:33 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct 07 14:40:33 compute-0 ceph-mon[74295]: pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct 07 14:40:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:40:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.828 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updated VIF entry in instance network info cache for port bcef16bc-28e8-4e97-a50b-7625a1917ee5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.830 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.861 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.883 2 DEBUG nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.883 2 WARNING nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state resuming.
Oct 07 14:40:33 compute-0 nova_compute[259550]: 2025-10-07 14:40:33.895 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully created port: d721f926-d7b0-4e02-bf84-b45b7c5df102 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.218 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config ca309873-104c-4cd4-a609-686d61823e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.218 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deleting local config drive /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config because it was imported into RBD.
Oct 07 14:40:34 compute-0 kernel: tapbcef16bc-28: entered promiscuous mode
Oct 07 14:40:34 compute-0 systemd-udevd[393702]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:34 compute-0 NetworkManager[44949]: <info>  [1759848034.2689] manager: (tapbcef16bc-28): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Oct 07 14:40:34 compute-0 ovn_controller[151684]: 2025-10-07T14:40:34Z|01341|binding|INFO|Claiming lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 for this chassis.
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:34 compute-0 ovn_controller[151684]: 2025-10-07T14:40:34Z|01342|binding|INFO|bcef16bc-28e8-4e97-a50b-7625a1917ee5: Claiming fa:16:3e:17:5f:df 10.100.0.27
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5f:df 10.100.0.27'], port_security=['fa:16:3e:17:5f:df 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'ca309873-104c-4cd4-a609-686d61823e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '84380b93-9bd8-46c6-8ce7-0eb2636c568f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=bcef16bc-28e8-4e97-a50b-7625a1917ee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Port bcef16bc-28e8-4e97-a50b-7625a1917ee5 in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 bound to our chassis
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.283 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct 07 14:40:34 compute-0 NetworkManager[44949]: <info>  [1759848034.2849] device (tapbcef16bc-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:34 compute-0 NetworkManager[44949]: <info>  [1759848034.2857] device (tapbcef16bc-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:34 compute-0 ovn_controller[151684]: 2025-10-07T14:40:34Z|01343|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 ovn-installed in OVS
Oct 07 14:40:34 compute-0 ovn_controller[151684]: 2025-10-07T14:40:34Z|01344|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 up in Southbound
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.304 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a2372f-7fbd-411b-9d9b-6c889729fe1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 systemd-machined[214580]: New machine qemu-157-instance-0000007c.
Oct 07 14:40:34 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.332 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a3e237-7900-45c9-8fe6-f82985f4a140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.336 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[98e3bcf4-1845-49fc-b6b0-a74e9fc051bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.363 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3c62dc-0a22-4d94-952d-d7b05cb25fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b46b2b31-cf96-444b-9566-2666327a67b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393780, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cebaaaf5-2147-4300-a6ae-52c8d114b414]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859727, 'tstamp': 859727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393782, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859730, 'tstamp': 859730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393782, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.405 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.407 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 320 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 128 op/s
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.710 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 0ec8fffb-cb39-4dd3-88b8-41467b24be13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.711 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848034.7098725, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.731 2 DEBUG nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.732 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.761 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.766 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance running successfully.
Oct 07 14:40:34 compute-0 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.769 2 DEBUG nova.virt.libvirt.guest [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.769 2 DEBUG nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.803 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.803 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848034.7165442, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.804 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.831 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:34 compute-0 nova_compute[259550]: 2025-10-07 14:40:34.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:40:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1394 writes, 6291 keys, 1394 commit groups, 1.0 writes per commit group, ingest: 8.82 MB, 0.01 MB/s
                                           Interval WAL: 1394 writes, 1394 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     73.5      0.80              0.18        33    0.024       0      0       0.0       0.0
                                             L6      1/0    7.90 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    138.7    115.7      2.21              0.73        32    0.069    185K    17K       0.0       0.0
                                            Sum      1/0    7.90 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    101.9    104.5      3.01              0.91        65    0.046    185K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.3    103.0    104.2      0.49              0.14        10    0.049     36K   2557       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    138.7    115.7      2.21              0.73        32    0.069    185K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.7      0.79              0.18        32    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.057, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 3.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 33.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000311 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2215,32.58 MB,10.7185%) FilterBlock(66,523.80 KB,0.168263%) IndexBlock(66,888.89 KB,0.285545%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.278 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848035.278009, ca309873-104c-4cd4-a609-686d61823e0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.279 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Started (Lifecycle Event)
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.306 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.311 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848035.2782521, ca309873-104c-4cd4-a609-686d61823e0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Paused (Lifecycle Event)
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.336 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.340 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:35 compute-0 nova_compute[259550]: 2025-10-07 14:40:35.359 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:35 compute-0 ceph-mon[74295]: pgmap v2344: 305 pgs: 305 active+clean; 320 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 128 op/s
Oct 07 14:40:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.850 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.852 2 WARNING nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Processing event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.854 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.854 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.854 2 WARNING nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received unexpected event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with vm_state building and task_state spawning.
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.855 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848036.858216, ca309873-104c-4cd4-a609-686d61823e0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.858 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Resumed (Lifecycle Event)
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.860 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.867 2 INFO nova.virt.libvirt.driver [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance spawned successfully.
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.868 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.909 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.910 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.912 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:36 compute-0 nova_compute[259550]: 2025-10-07 14:40:36.988 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.036 2 INFO nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 9.75 seconds to spawn the instance on the hypervisor.
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.036 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.118 2 INFO nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 11.29 seconds to build instance.
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.151 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.731 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully updated port: b4567457-8da4-42a7-b4c0-42724b2c0bc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:40:37 compute-0 nova_compute[259550]: 2025-10-07 14:40:37.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:37 compute-0 ceph-mon[74295]: pgmap v2345: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 07 14:40:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.238 2 DEBUG nova.compute.manager [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.239 2 DEBUG nova.compute.manager [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.240 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.240 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.241 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.491 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.491 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.492 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.492 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.493 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.494 2 INFO nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Terminating instance
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.495 2 DEBUG nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.517 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:40:39 compute-0 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct 07 14:40:39 compute-0 NetworkManager[44949]: <info>  [1759848039.5507] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:40:39 compute-0 ovn_controller[151684]: 2025-10-07T14:40:39Z|01345|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct 07 14:40:39 compute-0 ovn_controller[151684]: 2025-10-07T14:40:39Z|01346|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct 07 14:40:39 compute-0 ovn_controller[151684]: 2025-10-07T14:40:39Z|01347|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.598 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.599 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis
Oct 07 14:40:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.601 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 07 14:40:39 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20be9b0c-9b38-483e-a464-d90af4051565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:39 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 07 14:40:39 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 5.501s CPU time.
Oct 07 14:40:39 compute-0 systemd-machined[214580]: Machine qemu-156-instance-0000007b terminated.
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.730 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance destroyed successfully.
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.730 2 DEBUG nova.objects.instance [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'resources' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.755 2 DEBUG nova.virt.libvirt.vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:34Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.756 2 DEBUG nova.network.os_vif_util [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.756 2 DEBUG nova.network.os_vif_util [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.758 2 DEBUG os_vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eb92b42-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:39 compute-0 nova_compute[259550]: 2025-10-07 14:40:39.765 2 INFO os_vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')
Oct 07 14:40:39 compute-0 ceph-mon[74295]: pgmap v2346: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.088 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully updated port: d721f926-d7b0-4e02-bf84-b45b7c5df102 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.117 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.124 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.143 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.143 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.144 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.204 2 INFO nova.virt.libvirt.driver [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deleting instance files /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13_del
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.205 2 INFO nova.virt.libvirt.driver [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deletion of /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13_del complete
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.247 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.248 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.249 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.249 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.250 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.250 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.276 2 INFO nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.277 2 DEBUG oslo.service.loopingcall [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.278 2 DEBUG nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.278 2 DEBUG nova.network.neutron [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:40:40 compute-0 nova_compute[259550]: 2025-10-07 14:40:40.392 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:40:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 977 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.016 2 DEBUG nova.network.neutron [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.034 2 INFO nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 0.76 seconds to deallocate network for instance.
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.078 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.079 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.216 2 DEBUG oslo_concurrency.processutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.340 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.340 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-d721f926-d7b0-4e02-bf84-b45b7c5df102. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.341 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:40:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392849799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.716 2 DEBUG oslo_concurrency.processutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.721 2 DEBUG nova.compute.provider_tree [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.755 2 DEBUG nova.scheduler.client.report [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.784 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:41 compute-0 nova_compute[259550]: 2025-10-07 14:40:41.867 2 INFO nova.scheduler.client.report [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Deleted allocations for instance 0ec8fffb-cb39-4dd3-88b8-41467b24be13
Oct 07 14:40:41 compute-0 ceph-mon[74295]: pgmap v2347: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 977 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct 07 14:40:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/392849799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:40:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 336 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 124 op/s
Oct 07 14:40:42 compute-0 nova_compute[259550]: 2025-10-07 14:40:42.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.031 2 DEBUG nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.033 2 DEBUG nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.033 2 WARNING nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state deleted and task_state None.
Oct 07 14:40:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.094 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.310 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.333 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.333 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance network_info: |[{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 
6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.334 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.334 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port d721f926-d7b0-4e02-bf84-b45b7c5df102 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.339 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start _get_guest_xml network_info=[{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.346 2 WARNING nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.355 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.356 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.360 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.361 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.365 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.365 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.369 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:43 compute-0 ovn_controller[151684]: 2025-10-07T14:40:43Z|01348|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:40:43 compute-0 ovn_controller[151684]: 2025-10-07T14:40:43Z|01349|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct 07 14:40:43 compute-0 ovn_controller[151684]: 2025-10-07T14:40:43Z|01350|binding|INFO|Releasing lport 4cc97c0a-633b-48bc-94c4-6f8ac1f61c66 from this chassis (sb_readonly=0)
Oct 07 14:40:43 compute-0 ovn_controller[151684]: 2025-10-07T14:40:43Z|01351|binding|INFO|Releasing lport 763708cd-58bb-4680-a4f7-042aa711a366 from this chassis (sb_readonly=0)
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441973241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.835 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.866 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:43 compute-0 nova_compute[259550]: 2025-10-07 14:40:43.872 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:43 compute-0 ceph-mon[74295]: pgmap v2348: 305 pgs: 305 active+clean; 336 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 124 op/s
Oct 07 14:40:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2441973241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:40:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045589483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.328 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.330 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.331 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.332 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.333 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.333 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.334 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.335 2 DEBUG nova.objects.instance [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.351 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <uuid>52ccd902-898f-4809-a231-be5760626c2c</uuid>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <name>instance-0000007d</name>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1539930206</nova:name>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:40:43</nova:creationTime>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:port uuid="b4567457-8da4-42a7-b4c0-42724b2c0bc9">
Oct 07 14:40:44 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <nova:port uuid="d721f926-d7b0-4e02-bf84-b45b7c5df102">
Oct 07 14:40:44 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe15:f003" ipVersion="6"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe15:f003" ipVersion="6"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <system>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="serial">52ccd902-898f-4809-a231-be5760626c2c</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="uuid">52ccd902-898f-4809-a231-be5760626c2c</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </system>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <os>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </os>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <features>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </features>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/52ccd902-898f-4809-a231-be5760626c2c_disk">
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/52ccd902-898f-4809-a231-be5760626c2c_disk.config">
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </source>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:40:44 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:09:cf:e1"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <target dev="tapb4567457-8d"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:15:f0:03"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <target dev="tapd721f926-d7"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/console.log" append="off"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <video>
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </video>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:40:44 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:40:44 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:40:44 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:40:44 compute-0 nova_compute[259550]: </domain>
Oct 07 14:40:44 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.352 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Preparing to wait for external event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Preparing to wait for external event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.355 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.355 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4567457-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4567457-8d, col_values=(('external_ids', {'iface-id': 'b4567457-8da4-42a7-b4c0-42724b2c0bc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:cf:e1', 'vm-uuid': '52ccd902-898f-4809-a231-be5760626c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 NetworkManager[44949]: <info>  [1759848044.3631] manager: (tapb4567457-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.369 2 INFO os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d')
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.370 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.370 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.371 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.371 2 DEBUG os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd721f926-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd721f926-d7, col_values=(('external_ids', {'iface-id': 'd721f926-d7b0-4e02-bf84-b45b7c5df102', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:f0:03', 'vm-uuid': '52ccd902-898f-4809-a231-be5760626c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 NetworkManager[44949]: <info>  [1759848044.3808] manager: (tapd721f926-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.387 2 INFO os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7')
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:09:cf:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.441 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:15:f0:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.441 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Using config drive
Oct 07 14:40:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 07 14:40:44 compute-0 nova_compute[259550]: 2025-10-07 14:40:44.467 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1045589483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:40:45 compute-0 nova_compute[259550]: 2025-10-07 14:40:45.745 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating config drive at /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config
Oct 07 14:40:45 compute-0 nova_compute[259550]: 2025-10-07 14:40:45.750 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqlkmi5v1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:45 compute-0 nova_compute[259550]: 2025-10-07 14:40:45.895 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqlkmi5v1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:45 compute-0 nova_compute[259550]: 2025-10-07 14:40:45.927 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:40:45 compute-0 nova_compute[259550]: 2025-10-07 14:40:45.931 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config 52ccd902-898f-4809-a231-be5760626c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:40:45 compute-0 ceph-mon[74295]: pgmap v2349: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.088 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config 52ccd902-898f-4809-a231-be5760626c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.089 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deleting local config drive /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config because it was imported into RBD.
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.1398] manager: (tapb4567457-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Oct 07 14:40:46 compute-0 kernel: tapb4567457-8d: entered promiscuous mode
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01352|binding|INFO|Claiming lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 for this chassis.
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01353|binding|INFO|b4567457-8da4-42a7-b4c0-42724b2c0bc9: Claiming fa:16:3e:09:cf:e1 10.100.0.6
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.1634] manager: (tapd721f926-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Oct 07 14:40:46 compute-0 kernel: tapd721f926-d7: entered promiscuous mode
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01354|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 ovn-installed in OVS
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01355|if_status|INFO|Not updating pb chassis for d721f926-d7b0-4e02-bf84-b45b7c5df102 now as sb is readonly
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 systemd-udevd[394042]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:46 compute-0 systemd-udevd[394043]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.2035] device (tapd721f926-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.2042] device (tapd721f926-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.2064] device (tapb4567457-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:40:46 compute-0 NetworkManager[44949]: <info>  [1759848046.2072] device (tapb4567457-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.206 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port d721f926-d7b0-4e02-bf84-b45b7c5df102. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.207 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:46 compute-0 systemd-machined[214580]: New machine qemu-158-instance-0000007d.
Oct 07 14:40:46 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct 07 14:40:46 compute-0 podman[394019]: 2025-10-07 14:40:46.242737654 +0000 UTC m=+0.068595805 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 07 14:40:46 compute-0 podman[394022]: 2025-10-07 14:40:46.278194866 +0000 UTC m=+0.106089600 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01356|binding|INFO|Claiming lport d721f926-d7b0-4e02-bf84-b45b7c5df102 for this chassis.
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01357|binding|INFO|d721f926-d7b0-4e02-bf84-b45b7c5df102: Claiming fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.366 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:cf:e1 10.100.0.6'], port_security=['fa:16:3e:09:cf:e1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b4567457-8da4-42a7-b4c0-42724b2c0bc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01358|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 up in Southbound
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b4567457-8da4-42a7-b4c0-42724b2c0bc9 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 bound to our chassis
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01359|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 ovn-installed in OVS
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.379 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.379 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-deleted-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.388 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efcc201f-7490-4186-bfc7-415313c37d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.429 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef775a81-f2a0-4c42-8390-3a0abc185185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.441 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e47bf219-7b11-463e-ad5e-75b0a9bc6fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 613 KiB/s wr, 106 op/s
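The ceph-mgr pgmap summary above packs cluster health into one line. A small regex can pull the capacity figures out for monitoring; the field layout here is assumed from this sample and may differ across Ceph releases:

```python
import re

# Parse a ceph pgmap summary line; layout assumed from the log sample above.
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line: str) -> dict:
    """Return the named capacity fields, or an empty dict if no match."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else {}

sample = ("pgmap v2350: 305 pgs: 305 active+clean; 293 MiB data, "
          "991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 613 KiB/s wr, 106 op/s")
info = parse_pgmap(sample)
```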
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.488 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], port_security=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe15:f003/64 2001:db8::f816:3eff:fe15:f003/64', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d721f926-d7b0-4e02-bf84-b45b7c5df102) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
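The "Matched UPDATE: PortBindingUpdatedEvent(...)" line shows the metadata agent's ovsdbapp event machinery firing: an event object declares which operations and table it cares about, and each Southbound row update is tested against it. The class below is a minimal sketch of that matching idea only; the names and signatures are illustrative, not the real ovsdbapp API.

```python
# Simplified model of an ovsdbapp-style row event matcher (illustrative only).
class RowEvent:
    def __init__(self, events, table, conditions=None):
        self.events = set(events)          # e.g. {"update"}
        self.table = table                 # e.g. "Port_Binding"
        self.conditions = conditions or {} # optional column -> value filters

    def matches(self, event, table, row):
        """True when the operation, table, and all column conditions match."""
        if event not in self.events or table != self.table:
            return False
        return all(row.get(col) == val for col, val in self.conditions.items())

# Mirrors PortBindingUpdatedEvent(events=('update',), table='Port_Binding')
port_binding_updated = RowEvent(("update",), "Port_Binding")
row = {"logical_port": "d721f926-d7b0-4e02-bf84-b45b7c5df102",
       "chassis": ["compute-0"]}
```

In the real agent, a matched event is what triggers the "bound to our chassis" / "Provisioning metadata" handling seen in the surrounding lines.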
Oct 07 14:40:46 compute-0 ovn_controller[151684]: 2025-10-07T14:40:46Z|01360|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 up in Southbound
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb2d72a-6a97-4ef8-9ac7-186f3c570e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3e0828-a576-491e-b565-1decb21aad28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394086, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
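The privsep replies above carry pyroute2-style netlink messages whose `attrs` field is a list of `[NAME, value]` pairs rather than a dict. A helper to flatten that into a dict makes fields like the interface name and MAC easy to read; skipping `UNKNOWN` and duplicate entries is a choice made here for illustration, not something the agent itself does.

```python
# Flatten a pyroute2-style 'attrs' list ([[name, value], ...]) into a dict.
def attrs_to_dict(attrs):
    out = {}
    for name, value in attrs:
        # Ignore unparsed attributes and keep only the first occurrence.
        if name != "UNKNOWN" and name not in out:
            out[name] = value
    return out

# A trimmed-down version of the RTM_NEWLINK attrs logged above.
link_attrs = attrs_to_dict([
    ["IFLA_IFNAME", "tapb59ffdd2-41"],
    ["IFLA_OPERSTATE", "UP"],
    ["IFLA_MTU", 1500],
    ["UNKNOWN", {"header": {"length": 8, "type": 61}}],
    ["IFLA_ADDRESS", "fa:16:3e:31:3d:8f"],
])
```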
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.538 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa78ab2c-9136-41c5-87b5-bf566d6051ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858606, 'tstamp': 858606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394087, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858610, 'tstamp': 858610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394087, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.540 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.546 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d721f926-d7b0-4e02-bf84-b45b7c5df102 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.548 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.565 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4946aae1-6803-4541-a8cc-03cd4c5c09da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.610 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9012511d-6456-48f6-af5c-f79cfc3fbdec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.614 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[397b0282-d830-493b-9d08-2b08bfbaf262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.663 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76533887-81cd-4218-b112-9dcaf7aaf669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6467f772-b421-4b1b-8f10-0112330f4c42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394093, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.709 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3549f328-f21d-4394-8e11-88f96ff42188]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabe90ba0-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858707, 'tstamp': 858707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394094, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 nova_compute[259550]: 2025-10-07 14:40:46.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:40:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.715 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
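The DelPortCommand / AddPortCommand / DbSetCommand transactions above are idempotent: `if_exists=True` and `may_exist=True` let the agent replay the same port move safely, which is why the log repeatedly reports "Transaction caused no change". The in-memory model below sketches that behavior only; it is not the real ovsdbapp command implementation.

```python
# Simplified model of the agent's idempotent port move (illustrative only).
class Bridges:
    def __init__(self):
        self.ports = {}       # port name -> bridge name
        self.changed = False  # did the last transaction modify anything?

    def del_port(self, port, bridge, if_exists=True):
        if self.ports.get(port) == bridge:
            del self.ports[port]
            self.changed = True
        elif not if_exists:
            raise KeyError(port)  # only a hard error without if_exists

    def add_port(self, bridge, port, may_exist=True):
        if self.ports.get(port) == bridge:
            if not may_exist:
                raise ValueError(port)
            return  # already attached: "Transaction caused no change"
        self.ports[port] = bridge
        self.changed = True

br = Bridges()
br.add_port("br-int", "tapabe90ba0-a0")  # initial state: port on br-int
br.changed = False
# Replaying the same transaction is a no-op, as in the log:
br.del_port("tapabe90ba0-a0", "br-ex", if_exists=True)
br.add_port("br-int", "tapabe90ba0-a0", may_exist=True)
```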
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.133 2 DEBUG nova.compute.manager [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.135 2 DEBUG nova.compute.manager [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Processing event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.239 2 DEBUG nova.compute.manager [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.239 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG nova.compute.manager [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Processing event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
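Nova's "Received event network-vif-plugged-<port-uuid>" lines embed the port UUID in the event name itself, so a regex can split the two and correlate plug/unplug events with the OVN port-binding lines earlier in the log:

```python
import re

# Extract (event type, port UUID) from nova-compute "Received event" lines.
VIF_EVENT_RE = re.compile(
    r"Received event (network-vif-(?:plugged|deleted|unplugged))-"
    r"([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
)

def parse_vif_event(line):
    m = VIF_EVENT_RE.search(line)
    return (m.group(1), m.group(2)) if m else None

evt = parse_vif_event(
    "Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 "
    "external_instance_event")
```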
Oct 07 14:40:47 compute-0 nova_compute[259550]: 2025-10-07 14:40:47.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:47 compute-0 ceph-mon[74295]: pgmap v2350: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 613 KiB/s wr, 106 op/s
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.014 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.013203, 52ccd902-898f-4809-a231-be5760626c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.014 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Started (Lifecycle Event)
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.016 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.021 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.024 2 INFO nova.virt.libvirt.driver [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance spawned successfully.
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.025 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.067 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.098 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.098 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.099 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.099 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.100 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.100 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.117 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.118 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.0136638, 52ccd902-898f-4809-a231-be5760626c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.118 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Paused (Lifecycle Event)
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.244 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.0197148, 52ccd902-898f-4809-a231-be5760626c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.245 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Resumed (Lifecycle Event)
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.319 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.359 2 INFO nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 17.32 seconds to spawn the instance on the hypervisor.
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.360 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.415 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:40:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.541 2 INFO nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 18.73 seconds to build instance.
Oct 07 14:40:48 compute-0 nova_compute[259550]: 2025-10-07 14:40:48.641 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.382 2 DEBUG nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.384 2 DEBUG nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.384 2 WARNING nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with vm_state active and task_state None.
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.466 2 DEBUG nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.468 2 WARNING nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with vm_state active and task_state None.
Oct 07 14:40:49 compute-0 nova_compute[259550]: 2025-10-07 14:40:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:50 compute-0 ceph-mon[74295]: pgmap v2351: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct 07 14:40:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 293 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 32 KiB/s wr, 126 op/s
Oct 07 14:40:52 compute-0 ceph-mon[74295]: pgmap v2352: 305 pgs: 305 active+clean; 293 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 32 KiB/s wr, 126 op/s
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 294 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 570 KiB/s wr, 123 op/s
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.506 2 DEBUG nova.compute.manager [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG nova.compute.manager [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:40:52 compute-0 ovn_controller[151684]: 2025-10-07T14:40:52Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:5f:df 10.100.0.27
Oct 07 14:40:52 compute-0 ovn_controller[151684]: 2025-10-07T14:40:52Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:5f:df 10.100.0.27
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:40:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:52 compute-0 nova_compute[259550]: 2025-10-07 14:40:52.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:40:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.078395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053078477, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 251, "total_data_size": 1920941, "memory_usage": 1946816, "flush_reason": "Manual Compaction"}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053104140, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1890731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48456, "largest_seqno": 49752, "table_properties": {"data_size": 1884622, "index_size": 3376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13241, "raw_average_key_size": 20, "raw_value_size": 1872280, "raw_average_value_size": 2828, "num_data_blocks": 151, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847929, "oldest_key_time": 1759847929, "file_creation_time": 1759848053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 25796 microseconds, and 7005 cpu microseconds.
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.104196) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1890731 bytes OK
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.104221) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106333) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106348) EVENT_LOG_v1 {"time_micros": 1759848053106343, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1915067, prev total WAL file size 1915067, number of live WAL files 2.
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.107319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1846KB)], [113(8094KB)]
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053107383, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10179107, "oldest_snapshot_seqno": -1}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6934 keys, 8484003 bytes, temperature: kUnknown
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053152531, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8484003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8439498, "index_size": 26089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180986, "raw_average_key_size": 26, "raw_value_size": 8317148, "raw_average_value_size": 1199, "num_data_blocks": 1014, "num_entries": 6934, "num_filter_entries": 6934, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.152743) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8484003 bytes
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.154211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.2 rd, 187.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.9 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 7448, records dropped: 514 output_compression: NoCompression
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.154230) EVENT_LOG_v1 {"time_micros": 1759848053154220, "job": 68, "event": "compaction_finished", "compaction_time_micros": 45210, "compaction_time_cpu_micros": 22136, "output_level": 6, "num_output_files": 1, "total_output_size": 8484003, "num_input_records": 7448, "num_output_records": 6934, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053154631, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053155977, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.107225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.236 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.238 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:40:54 compute-0 ceph-mon[74295]: pgmap v2353: 305 pgs: 305 active+clean; 294 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 570 KiB/s wr, 123 op/s
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.360 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 320 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.728 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848039.7273135, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.729 2 INFO nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Stopped (Lifecycle Event)
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.749 2 DEBUG nova.compute.manager [None req-ae463f8e-084d-40e5-8426-4b371a9ae537 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:54 compute-0 nova_compute[259550]: 2025-10-07 14:40:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:55 compute-0 nova_compute[259550]: 2025-10-07 14:40:55.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:56 compute-0 nova_compute[259550]: 2025-10-07 14:40:56.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:56 compute-0 ceph-mon[74295]: pgmap v2354: 305 pgs: 305 active+clean; 320 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct 07 14:40:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:40:56 compute-0 nova_compute[259550]: 2025-10-07 14:40:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:40:57 compute-0 nova_compute[259550]: 2025-10-07 14:40:57.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:40:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:40:58 compute-0 ceph-mon[74295]: pgmap v2355: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:40:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:40:59 compute-0 nova_compute[259550]: 2025-10-07 14:40:59.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:00 compute-0 ceph-mon[74295]: pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:41:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.390 2 DEBUG nova.compute.manager [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.391 2 DEBUG nova.compute.manager [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.392 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.392 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.393 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:01 compute-0 ceph-mon[74295]: pgmap v2357: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 07 14:41:01 compute-0 nova_compute[259550]: 2025-10-07 14:41:01.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Oct 07 14:41:02 compute-0 nova_compute[259550]: 2025-10-07 14:41:02.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:02 compute-0 podman[394139]: 2025-10-07 14:41:02.951841068 +0000 UTC m=+0.060464788 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:41:02 compute-0 podman[394138]: 2025-10-07 14:41:02.952612698 +0000 UTC m=+0.062555693 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:41:02 compute-0 nova_compute[259550]: 2025-10-07 14:41:02.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:02 compute-0 nova_compute[259550]: 2025-10-07 14:41:02.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:41:02 compute-0 nova_compute[259550]: 2025-10-07 14:41:02.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:41:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.487 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.516 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.517 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.609 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.609 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.610 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:41:03 compute-0 nova_compute[259550]: 2025-10-07 14:41:03.610 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:03 compute-0 ceph-mon[74295]: pgmap v2358: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Oct 07 14:41:04 compute-0 nova_compute[259550]: 2025-10-07 14:41:04.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct 07 14:41:05 compute-0 ovn_controller[151684]: 2025-10-07T14:41:05Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:cf:e1 10.100.0.6
Oct 07 14:41:05 compute-0 ovn_controller[151684]: 2025-10-07T14:41:05Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:cf:e1 10.100.0.6
Oct 07 14:41:05 compute-0 ceph-mon[74295]: pgmap v2359: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct 07 14:41:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.151 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.352 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.353 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.353 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.392 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.393 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.393 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.394 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.394 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174453716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.842 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.976 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.977 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.980 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.980 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.983 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.983 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.986 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 nova_compute[259550]: 2025-10-07 14:41:07.987 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:41:07 compute-0 ceph-mon[74295]: pgmap v2360: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 07 14:41:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/174453716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.196 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.197 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2945MB free_disk=59.81833267211914GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 814 KiB/s wr, 21 op/s
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ca309873-104c-4cd4-a609-686d61823e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 52ccd902-898f-4809-a231-be5760626c2c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.567 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.567 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:41:08 compute-0 nova_compute[259550]: 2025-10-07 14:41:08.648 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268461916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.107 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.115 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.189 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.259 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.260 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:09 compute-0 nova_compute[259550]: 2025-10-07 14:41:09.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:10 compute-0 ceph-mon[74295]: pgmap v2361: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 814 KiB/s wr, 21 op/s
Oct 07 14:41:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/268461916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:10 compute-0 nova_compute[259550]: 2025-10-07 14:41:10.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:41:10 compute-0 nova_compute[259550]: 2025-10-07 14:41:10.888 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:12 compute-0 ceph-mon[74295]: pgmap v2362: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:41:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 07 14:41:12 compute-0 nova_compute[259550]: 2025-10-07 14:41:12.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:14 compute-0 ceph-mon[74295]: pgmap v2363: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 07 14:41:14 compute-0 nova_compute[259550]: 2025-10-07 14:41:14.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:41:16 compute-0 nova_compute[259550]: 2025-10-07 14:41:16.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:16 compute-0 ceph-mon[74295]: pgmap v2364: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:41:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:41:17 compute-0 podman[394221]: 2025-10-07 14:41:17.087364775 +0000 UTC m=+0.076802613 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:41:17 compute-0 podman[394222]: 2025-10-07 14:41:17.104924601 +0000 UTC m=+0.092501399 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:41:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:17.468 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:17 compute-0 nova_compute[259550]: 2025-10-07 14:41:17.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:17.469 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:41:17 compute-0 nova_compute[259550]: 2025-10-07 14:41:17.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:18 compute-0 ceph-mon[74295]: pgmap v2365: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:41:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Oct 07 14:41:19 compute-0 nova_compute[259550]: 2025-10-07 14:41:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:20 compute-0 ceph-mon[74295]: pgmap v2366: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Oct 07 14:41:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Oct 07 14:41:22 compute-0 ceph-mon[74295]: pgmap v2367: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 97 KiB/s wr, 5 op/s
Oct 07 14:41:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:22.471 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:41:22
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data']
Oct 07 14:41:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:41:22 compute-0 nova_compute[259550]: 2025-10-07 14:41:22.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:41:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:41:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:41:24 compute-0 ceph-mon[74295]: pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 97 KiB/s wr, 5 op/s
Oct 07 14:41:24 compute-0 nova_compute[259550]: 2025-10-07 14:41:24.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s wr, 2 op/s
Oct 07 14:41:26 compute-0 ceph-mon[74295]: pgmap v2369: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s wr, 2 op/s
Oct 07 14:41:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 30 KiB/s wr, 3 op/s
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.516 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.517 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.590 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.697 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.698 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.705 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.705 2 INFO nova.compute.claims [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.798 2 DEBUG nova.compute.manager [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG nova.compute.manager [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.800 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.947 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.947 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.948 2 INFO nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Terminating instance
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.949 2 DEBUG nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:41:27 compute-0 nova_compute[259550]: 2025-10-07 14:41:27.954 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:28 compute-0 kernel: tapb4567457-8d (unregistering): left promiscuous mode
Oct 07 14:41:28 compute-0 NetworkManager[44949]: <info>  [1759848088.0148] device (tapb4567457-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01361|binding|INFO|Releasing lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 from this chassis (sb_readonly=0)
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01362|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 down in Southbound
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01363|binding|INFO|Removing iface tapb4567457-8d ovn-installed in OVS
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 kernel: tapd721f926-d7 (unregistering): left promiscuous mode
Oct 07 14:41:28 compute-0 NetworkManager[44949]: <info>  [1759848088.0581] device (tapd721f926-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01364|binding|INFO|Releasing lport d721f926-d7b0-4e02-bf84-b45b7c5df102 from this chassis (sb_readonly=1)
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01365|binding|INFO|Removing iface tapd721f926-d7 ovn-installed in OVS
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01366|if_status|INFO|Dropped 1 log messages in last 143 seconds (most recently, 143 seconds ago) due to excessive rate
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01367|if_status|INFO|Not setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 down as sb is readonly
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ovn_controller[151684]: 2025-10-07T14:41:28Z|01368|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 down in Southbound
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.120 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:cf:e1 10.100.0.6'], port_security=['fa:16:3e:09:cf:e1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b4567457-8da4-42a7-b4c0-42724b2c0bc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.121 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b4567457-8da4-42a7-b4c0-42724b2c0bc9 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 unbound from our chassis
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.123 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.140 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc3accc-1b6f-4e88-ac7d-f76d6136a82a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 07 14:41:28 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 19.826s CPU time.
Oct 07 14:41:28 compute-0 systemd-machined[214580]: Machine qemu-158-instance-0000007d terminated.
Oct 07 14:41:28 compute-0 ceph-mon[74295]: pgmap v2370: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 30 KiB/s wr, 3 op/s
Oct 07 14:41:28 compute-0 NetworkManager[44949]: <info>  [1759848088.1865] manager: (tapd721f926-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.186 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[84517abc-d718-4100-b437-31e6e5f0e5d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf294a6-881d-4728-975d-3f578941d230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.217 2 INFO nova.virt.libvirt.driver [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance destroyed successfully.
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.218 2 DEBUG nova.objects.instance [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.231 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f0fc57-082d-460c-abc7-e77505d6eb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.249 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], port_security=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe15:f003/64 2001:db8::f816:3eff:fe15:f003/64', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d721f926-d7b0-4e02-bf84-b45b7c5df102) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99daefb4-a9c0-4c12-89e2-1b2acd087f26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394325, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaf0621-1911-4418-a009-03eb85b581c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858606, 'tstamp': 858606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394326, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858610, 'tstamp': 858610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394326, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.281 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.295 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.298 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d721f926-d7b0-4e02-bf84-b45b7c5df102 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.300 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.316 2 DEBUG nova.virt.libvirt.vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:48Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.317 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.318 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.318 2 DEBUG os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4567457-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2c384d88-a4f8-4e24-b008-c2ff019ecb05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.348 2 INFO os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d')
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.349 2 DEBUG nova.virt.libvirt.vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:48Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.349 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.350 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.351 2 DEBUG os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd721f926-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.360 2 INFO os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7')
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.373 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd86261e-9aa7-4c2e-b22c-a23c5908f580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.380 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5800a5b2-a70f-47e5-bd55-8350041a32b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2355c-b70e-4a8b-bf99-53e9c544cf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177214505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.439 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.446 2 DEBUG nova.compute.provider_tree [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[300fcaa0-9638-4e15-a1d5-93c0929a8af9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394355, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[409e4df0-21ea-43e5-b2ff-0f8cdc43c253]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabe90ba0-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858707, 'tstamp': 858707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394356, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.471 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.475 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.523 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.523 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.570 2 DEBUG nova.scheduler.client.report [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:41:28 compute-0 sudo[394357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:28 compute-0 sudo[394357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.656 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.658 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:41:28 compute-0 sudo[394357]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:28 compute-0 sudo[394383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:41:28 compute-0 sudo[394383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:28 compute-0 sudo[394383]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.734 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.736 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:41:28 compute-0 sudo[394408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:28 compute-0 sudo[394408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:28 compute-0 sudo[394408]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.823 2 INFO nova.virt.libvirt.driver [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deleting instance files /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c_del
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.825 2 INFO nova.virt.libvirt.driver [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deletion of /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c_del complete
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.829 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:41:28 compute-0 sudo[394433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:41:28 compute-0 sudo[394433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:28 compute-0 nova_compute[259550]: 2025-10-07 14:41:28.957 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.025 2 INFO nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 1.08 seconds to destroy the instance on the hypervisor.
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG oslo.service.loopingcall [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG nova.network.neutron [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.161 2 DEBUG nova.policy [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '512332529ac84b3f9a7bb7d98d8577ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:41:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3177214505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.415 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.416 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.417 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating image(s)
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.444 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.470 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.499 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.503 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:29 compute-0 sudo[394433]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.589 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.589 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.590 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.590 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.612 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.616 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 83322ff9-95b0-41ae-af30-adf6bb4a4c1c does not exist
Oct 07 14:41:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 295f4782-5c69-450b-a0a6-407f9be5abda does not exist
Oct 07 14:41:29 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e52f158f-fc83-4f3b-872c-a0d98360fed8 does not exist
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:41:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:41:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.704 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.705 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:29 compute-0 sudo[394565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:29 compute-0 sudo[394565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:29 compute-0 sudo[394565]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:29 compute-0 sudo[394605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:41:29 compute-0 sudo[394605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:29 compute-0 sudo[394605]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:29 compute-0 sudo[394634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.843 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:29 compute-0 sudo[394634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:29 compute-0 sudo[394634]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.910 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:29 compute-0 sudo[394659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.912 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.914 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.914 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.915 2 WARNING nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with vm_state active and task_state deleting.
Oct 07 14:41:29 compute-0 sudo[394659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:29 compute-0 nova_compute[259550]: 2025-10-07 14:41:29.967 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.038 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] resizing rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.150 2 DEBUG nova.objects.instance [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'migration_context' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.162 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.162 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Ensure instance console log exists: /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.163 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.163 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.164 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.193 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Successfully created port: feac1006-7556-4dd6-9691-bc886a9410f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:41:30 compute-0 ceph-mon[74295]: pgmap v2371: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:41:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.278763936 +0000 UTC m=+0.046942988 container create c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:41:30 compute-0 systemd[1]: Started libpod-conmon-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope.
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.253534666 +0000 UTC m=+0.021713758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.375356964 +0000 UTC m=+0.143536066 container init c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.385504553 +0000 UTC m=+0.153683605 container start c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.389285114 +0000 UTC m=+0.157464156 container attach c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:41:30 compute-0 objective_nash[394809]: 167 167
Oct 07 14:41:30 compute-0 systemd[1]: libpod-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope: Deactivated successfully.
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.393420584 +0000 UTC m=+0.161599636 container died c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:41:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 310 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 26 op/s
Oct 07 14:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f968130948fe29a5884a2b78657abcaf1f35896689d67b5e9445fe5065593147-merged.mount: Deactivated successfully.
Oct 07 14:41:30 compute-0 podman[394793]: 2025-10-07 14:41:30.499851293 +0000 UTC m=+0.268030305 container remove c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:41:30 compute-0 systemd[1]: libpod-conmon-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope: Deactivated successfully.
Oct 07 14:41:30 compute-0 podman[394831]: 2025-10-07 14:41:30.719218924 +0000 UTC m=+0.047232267 container create 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:41:30 compute-0 systemd[1]: Started libpod-conmon-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope.
Oct 07 14:41:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:30 compute-0 podman[394831]: 2025-10-07 14:41:30.698337659 +0000 UTC m=+0.026351032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:30 compute-0 podman[394831]: 2025-10-07 14:41:30.810175862 +0000 UTC m=+0.138189295 container init 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.813 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.816 2 WARNING nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with vm_state active and task_state deleting.
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.817 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-deleted-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.817 2 INFO nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Neutron deleted interface d721f926-d7b0-4e02-bf84-b45b7c5df102; detaching it from the instance and deleting it from the info cache
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.818 2 DEBUG nova.network.neutron [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:30 compute-0 podman[394831]: 2025-10-07 14:41:30.822059777 +0000 UTC m=+0.150073130 container start 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:41:30 compute-0 podman[394831]: 2025-10-07 14:41:30.827870882 +0000 UTC m=+0.155884345 container attach 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.872 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Detach interface failed, port_id=d721f926-d7b0-4e02-bf84-b45b7c5df102, reason: Instance 52ccd902-898f-4809-a231-be5760626c2c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.913 2 DEBUG nova.network.neutron [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:30 compute-0 nova_compute[259550]: 2025-10-07 14:41:30.950 2 INFO nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 1.92 seconds to deallocate network for instance.
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.046 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.046 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.159 2 DEBUG oslo_concurrency.processutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174876285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.671 2 DEBUG oslo_concurrency.processutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.680 2 DEBUG nova.compute.provider_tree [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.710 2 DEBUG nova.scheduler.client.report [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.900 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:31 compute-0 nova_compute[259550]: 2025-10-07 14:41:31.959 2 INFO nova.scheduler.client.report [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 52ccd902-898f-4809-a231-be5760626c2c
Oct 07 14:41:31 compute-0 unruffled_hellman[394848]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:41:31 compute-0 unruffled_hellman[394848]: --> relative data size: 1.0
Oct 07 14:41:31 compute-0 unruffled_hellman[394848]: --> All data devices are unavailable
Oct 07 14:41:32 compute-0 systemd[1]: libpod-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Deactivated successfully.
Oct 07 14:41:32 compute-0 systemd[1]: libpod-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Consumed 1.099s CPU time.
Oct 07 14:41:32 compute-0 podman[394831]: 2025-10-07 14:41:32.004716823 +0000 UTC m=+1.332730176 container died 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37-merged.mount: Deactivated successfully.
Oct 07 14:41:32 compute-0 podman[394831]: 2025-10-07 14:41:32.068677842 +0000 UTC m=+1.396691195 container remove 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:41:32 compute-0 systemd[1]: libpod-conmon-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Deactivated successfully.
Oct 07 14:41:32 compute-0 sudo[394659]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.134 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:32 compute-0 sudo[394912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:32 compute-0 sudo[394912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:32 compute-0 sudo[394912]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:32 compute-0 ceph-mon[74295]: pgmap v2372: 305 pgs: 305 active+clean; 310 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 26 op/s
Oct 07 14:41:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4174876285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:32 compute-0 sudo[394937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:41:32 compute-0 sudo[394937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:32 compute-0 sudo[394937]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:32 compute-0 sudo[394962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:32 compute-0 sudo[394962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:32 compute-0 sudo[394962]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:32 compute-0 sudo[394987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:41:32 compute-0 sudo[394987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 299 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.653 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Successfully updated port: feac1006-7556-4dd6-9691-bc886a9410f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002530782989841766 of space, bias 1.0, pg target 0.7592348969525298 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:41:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:41:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:41:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:41:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:41:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.727083924 +0000 UTC m=+0.043345933 container create 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:41:32 compute-0 systemd[1]: Started libpod-conmon-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope.
Oct 07 14:41:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.708054618 +0000 UTC m=+0.024316647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.804 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.819032577 +0000 UTC m=+0.135294596 container init 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.829247449 +0000 UTC m=+0.145509458 container start 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.832722921 +0000 UTC m=+0.148984940 container attach 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:41:32 compute-0 nervous_pasteur[395070]: 167 167
Oct 07 14:41:32 compute-0 systemd[1]: libpod-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope: Deactivated successfully.
Oct 07 14:41:32 compute-0 conmon[395070]: conmon 7b7d5f68b275c1ab96b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope/container/memory.events
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.838446543 +0000 UTC m=+0.154708572 container died 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8996353de1342cdf983d9cfe6944b22d0e8a2e16dc78ac661ca71afee977721-merged.mount: Deactivated successfully.
Oct 07 14:41:32 compute-0 podman[395052]: 2025-10-07 14:41:32.877650325 +0000 UTC m=+0.193912334 container remove 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:32 compute-0 systemd[1]: libpod-conmon-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope: Deactivated successfully.
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.897 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-deleted-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.897 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.898 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing instance network info cache due to event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:32 compute-0 nova_compute[259550]: 2025-10-07 14:41:32.898 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:33 compute-0 podman[395090]: 2025-10-07 14:41:33.084112364 +0000 UTC m=+0.069408577 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:41:33 compute-0 podman[395089]: 2025-10-07 14:41:33.085019917 +0000 UTC m=+0.072174809 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:33 compute-0 podman[395112]: 2025-10-07 14:41:33.089840796 +0000 UTC m=+0.047937365 container create 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:41:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:33 compute-0 systemd[1]: Started libpod-conmon-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope.
Oct 07 14:41:33 compute-0 podman[395112]: 2025-10-07 14:41:33.069878575 +0000 UTC m=+0.027975154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:33 compute-0 podman[395112]: 2025-10-07 14:41:33.197036495 +0000 UTC m=+0.155133084 container init 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:41:33 compute-0 podman[395112]: 2025-10-07 14:41:33.206672681 +0000 UTC m=+0.164769240 container start 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:33 compute-0 podman[395112]: 2025-10-07 14:41:33.212256769 +0000 UTC m=+0.170353348 container attach 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:41:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:41:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.522 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.524 2 INFO nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Terminating instance
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.525 2 DEBUG nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.528 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.546 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.547 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance network_info: |[{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.547 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.548 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.550 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start _get_guest_xml network_info=[{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.555 2 WARNING nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.565 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.568 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.574 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.578 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.578 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.580 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.582 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.582 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.584 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.584 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.587 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:33 compute-0 kernel: tapff77de20-12 (unregistering): left promiscuous mode
Oct 07 14:41:33 compute-0 NetworkManager[44949]: <info>  [1759848093.6393] device (tapff77de20-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:33 compute-0 kernel: tapa40ec757-40 (unregistering): left promiscuous mode
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01369|binding|INFO|Releasing lport ff77de20-1280-4c30-941d-c53ed7efcbe8 from this chassis (sb_readonly=0)
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01370|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 down in Southbound
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01371|binding|INFO|Removing iface tapff77de20-12 ovn-installed in OVS
Oct 07 14:41:33 compute-0 NetworkManager[44949]: <info>  [1759848093.6605] device (tapa40ec757-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.664 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b5:2f 10.100.0.12'], port_security=['fa:16:3e:09:b5:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ff77de20-1280-4c30-941d-c53ed7efcbe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.665 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ff77de20-1280-4c30-941d-c53ed7efcbe8 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 unbound from our chassis
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.667 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b59ffdd2-4285-47f2-a931-fca691d1c031, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.672 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[519aca76-253d-43e3-9b6e-edcea27860af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.672 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 namespace which is not needed anymore
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01372|binding|INFO|Releasing lport a40ec757-407d-4375-b756-d4fb8f5664b4 from this chassis (sb_readonly=0)
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01373|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 down in Southbound
Oct 07 14:41:33 compute-0 ovn_controller[151684]: 2025-10-07T14:41:33Z|01374|binding|INFO|Removing iface tapa40ec757-40 ovn-installed in OVS
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.709 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], port_security=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:32f2/64 2001:db8::f816:3eff:fe37:32f2/64', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a40ec757-407d-4375-b756-d4fb8f5664b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 07 14:41:33 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Consumed 16.547s CPU time.
Oct 07 14:41:33 compute-0 systemd-machined[214580]: Machine qemu-153-instance-0000007a terminated.
Oct 07 14:41:33 compute-0 NetworkManager[44949]: <info>  [1759848093.7590] manager: (tapa40ec757-40): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.787 2 INFO nova.virt.libvirt.driver [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance destroyed successfully.
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.787 2 DEBUG nova.objects.instance [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.802 2 DEBUG nova.virt.libvirt.vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:04Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.802 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.803 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.804 2 DEBUG os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff77de20-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.822 2 INFO os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12')
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.823 2 DEBUG nova.virt.libvirt.vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:04Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.823 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.824 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.824 2 DEBUG os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.826 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa40ec757-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:33 compute-0 nova_compute[259550]: 2025-10-07 14:41:33.831 2 INFO os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40')
Oct 07 14:41:33 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : haproxy version is 2.8.14-c23fe91
Oct 07 14:41:33 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : path to executable is /usr/sbin/haproxy
Oct 07 14:41:33 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [ALERT]    (391537) : Current worker (391541) exited with code 143 (Terminated)
Oct 07 14:41:33 compute-0 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [WARNING]  (391537) : All workers exited. Exiting... (0)
Oct 07 14:41:33 compute-0 systemd[1]: libpod-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope: Deactivated successfully.
Oct 07 14:41:33 compute-0 conmon[391515]: conmon 07847417c8983cc8622e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope/container/memory.events
Oct 07 14:41:33 compute-0 podman[395223]: 2025-10-07 14:41:33.857696175 +0000 UTC m=+0.058614819 container died 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1-userdata-shm.mount: Deactivated successfully.
Oct 07 14:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-25645b5d35c32237e95d967c51a9e71c44938bb7f9072a90ac733ca580ab77f3-merged.mount: Deactivated successfully.
Oct 07 14:41:33 compute-0 podman[395223]: 2025-10-07 14:41:33.954308003 +0000 UTC m=+0.155226647 container cleanup 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:41:33 compute-0 systemd[1]: libpod-conmon-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 podman[395273]: 2025-10-07 14:41:34.035774389 +0000 UTC m=+0.057416938 container remove 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.043 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a96e9d9-8c5f-4e6e-8422-c0ec73aea54e]: (4, ('Tue Oct  7 02:41:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 (07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1)\n07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1\nTue Oct  7 02:41:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 (07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1)\n07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 confident_poincare[395151]: {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     "0": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "devices": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "/dev/loop3"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             ],
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_name": "ceph_lv0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_size": "21470642176",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "name": "ceph_lv0",
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.046 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddf2007-a8d9-4d31-89ac-9f7d37cee8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "tags": {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_name": "ceph",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.crush_device_class": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.encrypted": "0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_id": "0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.vdo": "0"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             },
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "vg_name": "ceph_vg0"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         }
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     ],
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     "1": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "devices": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "/dev/loop4"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             ],
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_name": "ceph_lv1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_size": "21470642176",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "name": "ceph_lv1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "tags": {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_name": "ceph",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.crush_device_class": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.encrypted": "0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_id": "1",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.vdo": "0"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             },
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "vg_name": "ceph_vg1"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         }
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     ],
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     "2": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "devices": [
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "/dev/loop5"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             ],
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_name": "ceph_lv2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_size": "21470642176",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "name": "ceph_lv2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "tags": {
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.cluster_name": "ceph",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.crush_device_class": "",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.encrypted": "0",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osd_id": "2",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:                 "ceph.vdo": "0"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             },
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "type": "block",
Oct 07 14:41:34 compute-0 confident_poincare[395151]:             "vg_name": "ceph_vg2"
Oct 07 14:41:34 compute-0 confident_poincare[395151]:         }
Oct 07 14:41:34 compute-0 confident_poincare[395151]:     ]
Oct 07 14:41:34 compute-0 confident_poincare[395151]: }
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.047 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:34 compute-0 systemd[1]: libpod-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 podman[395112]: 2025-10-07 14:41:34.089334423 +0000 UTC m=+1.047431032 container died 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 kernel: tapb59ffdd2-40: left promiscuous mode
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:41:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1893562222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2332590-2c02-4ac1-b2ef-855565fc44e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724-merged.mount: Deactivated successfully.
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.142 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb5894d-f521-4131-8a41-373a20d55f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95a163fa-69cb-4a0a-9a0c-3bc8af8b7753]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.151 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:34 compute-0 podman[395112]: 2025-10-07 14:41:34.171447175 +0000 UTC m=+1.129543734 container remove 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c263ca35-077d-4ac8-bde9-a681b993ebed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858585, 'reachable_time': 16037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395303, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 systemd[1]: run-netns-ovnmeta\x2db59ffdd2\x2d4285\x2d47f2\x2da931\x2dfca691d1c031.mount: Deactivated successfully.
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.177 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.180 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.181 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c159f7fc-9567-4d38-a6e6-68579c18db86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.182 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a40ec757-407d-4375-b756-d4fb8f5664b4 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.184 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abe90ba0-a518-4cef-a49b-de57485faec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:41:34 compute-0 systemd[1]: libpod-conmon-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db1d773b-bc5b-4180-812d-e40bab00bc6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.191 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 namespace which is not needed anymore
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.191 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:34 compute-0 sudo[394987]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:34 compute-0 ceph-mon[74295]: pgmap v2373: 305 pgs: 305 active+clean; 299 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Oct 07 14:41:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1893562222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:41:34 compute-0 sudo[395328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:34 compute-0 sudo[395328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:34 compute-0 sudo[395328]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:34 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : haproxy version is 2.8.14-c23fe91
Oct 07 14:41:34 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : path to executable is /usr/sbin/haproxy
Oct 07 14:41:34 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [WARNING]  (391615) : Exiting Master process...
Oct 07 14:41:34 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [ALERT]    (391615) : Current worker (391617) exited with code 143 (Terminated)
Oct 07 14:41:34 compute-0 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [WARNING]  (391615) : All workers exited. Exiting... (0)
Oct 07 14:41:34 compute-0 systemd[1]: libpod-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 podman[395366]: 2025-10-07 14:41:34.343033456 +0000 UTC m=+0.047557425 container died edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:34 compute-0 sudo[395367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:41:34 compute-0 sudo[395367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:34 compute-0 sudo[395367]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9-userdata-shm.mount: Deactivated successfully.
Oct 07 14:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49969737f58c490492c852bd2ea4f9f98b17080447bba92e02b629fbfab611a-merged.mount: Deactivated successfully.
Oct 07 14:41:34 compute-0 podman[395366]: 2025-10-07 14:41:34.39550309 +0000 UTC m=+0.100027039 container cleanup edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:41:34 compute-0 systemd[1]: libpod-conmon-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 sudo[395425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:34 compute-0 sudo[395425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:34 compute-0 sudo[395425]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:34 compute-0 podman[395460]: 2025-10-07 14:41:34.462885631 +0000 UTC m=+0.044846033 container remove edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:41:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 326 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4375d3-80be-4e04-b4e9-bc56c50a0fff]: (4, ('Tue Oct  7 02:41:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 (edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9)\nedf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9\nTue Oct  7 02:41:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 (edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9)\nedf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5427477-6fef-4d0c-b40a-7e32ed25861a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.475 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 kernel: tapabe90ba0-a0: left promiscuous mode
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bf7237-d2a3-4f42-91bd-3a171ba2a2cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 sudo[395473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:41:34 compute-0 sudo[395473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.509 2 INFO nova.virt.libvirt.driver [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deleting instance files /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_del
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.511 2 INFO nova.virt.libvirt.driver [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deletion of /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_del complete
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[734794a6-a803-4d16-8eb7-5489556ff1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.524 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d382fb4c-65c5-43b2-a104-a11564641332]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ab9f80-bd1d-4a94-8a52-6020a9a677a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858686, 'reachable_time': 29986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395501, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.544 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:41:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.544 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[50318de0-16a6-46da-b20d-73ea3078f013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dabe90ba0\x2da518\x2d4cef\x2da49b\x2dde57485faec5.mount: Deactivated successfully.
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.559 2 INFO nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 1.03 seconds to destroy the instance on the hypervisor.
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.559 2 DEBUG oslo.service.loopingcall [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.560 2 DEBUG nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.560 2 DEBUG nova.network.neutron [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:41:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:41:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450717079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.692 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.694 2 DEBUG nova.virt.libvirt.vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:41:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.694 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.695 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.697 2 DEBUG nova.objects.instance [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'pci_devices' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.720 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <uuid>300a7ac9-5462-4db2-817f-07ea0b2d6aa6</uuid>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <name>instance-0000007e</name>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:name>tempest-TestServerBasicOps-server-1671371669</nova:name>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:41:33</nova:creationTime>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:user uuid="512332529ac84b3f9a7bb7d98d8577ac">tempest-TestServerBasicOps-1198650773-project-member</nova:user>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:project uuid="3b215e149c484ed3a0d2130b82d65f6f">tempest-TestServerBasicOps-1198650773</nova:project>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <nova:port uuid="feac1006-7556-4dd6-9691-bc886a9410f3">
Oct 07 14:41:34 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <system>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="serial">300a7ac9-5462-4db2-817f-07ea0b2d6aa6</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="uuid">300a7ac9-5462-4db2-817f-07ea0b2d6aa6</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </system>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <os>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </os>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <features>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </features>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk">
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config">
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:41:34 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:03:35:e2"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <target dev="tapfeac1006-75"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/console.log" append="off"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <video>
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </video>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:41:34 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:41:34 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:41:34 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:41:34 compute-0 nova_compute[259550]: </domain>
Oct 07 14:41:34 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Preparing to wait for external event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.722 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.722 2 DEBUG nova.virt.libvirt.vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:41:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.723 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.724 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.724 2 DEBUG os_vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfeac1006-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfeac1006-75, col_values=(('external_ids', {'iface-id': 'feac1006-7556-4dd6-9691-bc886a9410f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:35:e2', 'vm-uuid': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 NetworkManager[44949]: <info>  [1759848094.7323] manager: (tapfeac1006-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.744 2 INFO os_vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75')
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.797 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.797 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.798 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No VIF found with MAC fa:16:3e:03:35:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.798 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Using config drive
Oct 07 14:41:34 compute-0 nova_compute[259550]: 2025-10-07 14:41:34.819 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.856794112 +0000 UTC m=+0.039506201 container create c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:41:34 compute-0 systemd[1]: Started libpod-conmon-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope.
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.838106505 +0000 UTC m=+0.020818614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.963825317 +0000 UTC m=+0.146537426 container init c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.973627257 +0000 UTC m=+0.156339346 container start c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.978080145 +0000 UTC m=+0.160792254 container attach c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:41:34 compute-0 dazzling_ritchie[395580]: 167 167
Oct 07 14:41:34 compute-0 systemd[1]: libpod-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope: Deactivated successfully.
Oct 07 14:41:34 compute-0 podman[395553]: 2025-10-07 14:41:34.979509813 +0000 UTC m=+0.162221902 container died c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.000 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:35 compute-0 podman[395553]: 2025-10-07 14:41:35.018677585 +0000 UTC m=+0.201389674 container remove c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:41:35 compute-0 systemd[1]: libpod-conmon-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope: Deactivated successfully.
Oct 07 14:41:35 compute-0 podman[395605]: 2025-10-07 14:41:35.232702874 +0000 UTC m=+0.046091487 container create 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:41:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1450717079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:41:35 compute-0 systemd[1]: Started libpod-conmon-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope.
Oct 07 14:41:35 compute-0 podman[395605]: 2025-10-07 14:41:35.213247486 +0000 UTC m=+0.026636149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:41:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:35 compute-0 podman[395605]: 2025-10-07 14:41:35.338768513 +0000 UTC m=+0.152157136 container init 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:41:35 compute-0 podman[395605]: 2025-10-07 14:41:35.350211257 +0000 UTC m=+0.163599870 container start 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:41:35 compute-0 podman[395605]: 2025-10-07 14:41:35.354097791 +0000 UTC m=+0.167486404 container attach 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.749 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating config drive at /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.755 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2hozw7n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.902 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2hozw7n" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.932 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:41:35 compute-0 nova_compute[259550]: 2025-10-07 14:41:35.938 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.119 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.120 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deleting local config drive /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config because it was imported into RBD.
Oct 07 14:41:36 compute-0 kernel: tapfeac1006-75: entered promiscuous mode
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.1840] manager: (tapfeac1006-75): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Oct 07 14:41:36 compute-0 systemd-udevd[395159]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01375|binding|INFO|Claiming lport feac1006-7556-4dd6-9691-bc886a9410f3 for this chassis.
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01376|binding|INFO|feac1006-7556-4dd6-9691-bc886a9410f3: Claiming fa:16:3e:03:35:e2 10.100.0.14
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.195 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:35:e2 10.100.0.14'], port_security=['fa:16:3e:03:35:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2290bbe4-d1ef-40a3-bca7-dc73e3a190e7 4672fac1-ae90-46b4-8fd3-3fd3255b2962', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d118d88-a678-4c5f-9b83-025d93ef3b4c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=feac1006-7556-4dd6-9691-bc886a9410f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.197 161536 INFO neutron.agent.ovn.metadata.agent [-] Port feac1006-7556-4dd6-9691-bc886a9410f3 in datapath 70c32d19-cc0d-46fc-a583-1be2bd26332c bound to our chassis
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.198 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.2039] device (tapfeac1006-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.2048] device (tapfeac1006-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01377|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 ovn-installed in OVS
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01378|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 up in Southbound
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[429f38f9-17a6-4b33-9914-6d5fbf00c6c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.213 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap70c32d19-c1 in ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.216 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap70c32d19-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.216 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7e6b5c-a577-4851-97ba-8124a46a9fdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01379|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01380|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.218 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[add2fdc5-6434-4fb4-9af7-c2dd03c4e9a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 systemd-machined[214580]: New machine qemu-159-instance-0000007e.
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.236 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[63a25d81-9ff0-4913-b371-72000d70baff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Oct 07 14:41:36 compute-0 ceph-mon[74295]: pgmap v2374: 305 pgs: 305 active+clean; 326 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e155e182-5a40-41e1-b07f-ddc9cd412c42]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.289 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3957c9d2-cc72-44ee-818c-1fc2ccaf33a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.3007] manager: (tap70c32d19-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.298 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[485f42b9-b271-4a8b-adff-adc77706b549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.335 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d8dc8-d9a8-474d-a3ee-00d82ccdbc8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.342 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4864174-92e2-464d-8ff4-33f0c3c5069e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 heuristic_wright[395622]: {
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_id": 2,
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "type": "bluestore"
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     },
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_id": 1,
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "type": "bluestore"
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     },
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_id": 0,
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:         "type": "bluestore"
Oct 07 14:41:36 compute-0 heuristic_wright[395622]:     }
Oct 07 14:41:36 compute-0 heuristic_wright[395622]: }
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.3708] device (tap70c32d19-c0): carrier: link connected
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.379 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[775769d5-b66b-43e1-b816-d3c429b90b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 systemd[1]: libpod-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Deactivated successfully.
Oct 07 14:41:36 compute-0 systemd[1]: libpod-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Consumed 1.014s CPU time.
Oct 07 14:41:36 compute-0 podman[395605]: 2025-10-07 14:41:36.390746485 +0000 UTC m=+1.204135098 container died 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8445f361-4e62-4dce-ac6b-25d973595dd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70c32d19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ac:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867993, 'reachable_time': 33172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395738, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.419 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c96cb44-11ee-495d-8886-a3952b39b44d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:ace5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 867993, 'tstamp': 867993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395741, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce-merged.mount: Deactivated successfully.
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.436 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[728d5f15-3405-423c-a674-f1b071971a48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70c32d19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ac:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867993, 'reachable_time': 33172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395747, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 podman[395605]: 2025-10-07 14:41:36.45038599 +0000 UTC m=+1.263774603 container remove 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:41:36 compute-0 systemd[1]: libpod-conmon-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Deactivated successfully.
Oct 07 14:41:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4691e52-eef7-4112-bf33-01b416488220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.478 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updated VIF entry in instance network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.479 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:36 compute-0 sudo[395473]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:41:36 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:41:36 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:36 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e8a6b2c0-f4d8-4ca0-81d8-b529722e575c does not exist
Oct 07 14:41:36 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b11cf4a5-73cc-42e4-a931-d5d5be6f4cce does not exist
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0227e9aa-50c2-43a1-96d8-7eee044c9005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c32d19-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70c32d19-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:36 compute-0 kernel: tap70c32d19-c0: entered promiscuous mode
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 NetworkManager[44949]: <info>  [1759848096.5462] manager: (tap70c32d19-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.556 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap70c32d19-c0, col_values=(('external_ids', {'iface-id': 'eb0d4e5a-943a-459a-bddc-3f4beed43512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:36 compute-0 ovn_controller[151684]: 2025-10-07T14:41:36Z|01381|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 sudo[395756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.570 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.572 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb96fb-59bd-438c-821c-c65e33a3f0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.573 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:36 compute-0 sudo[395756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.576 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'env', 'PROCESS_TAG=haproxy-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/70c32d19-cc0d-46fc-a583-1be2bd26332c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:41:36 compute-0 sudo[395756]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.628 2 DEBUG nova.compute.manager [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.629 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.636 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.636 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.637 2 DEBUG nova.compute.manager [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Processing event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:41:36 compute-0 sudo[395784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:41:36 compute-0 sudo[395784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:41:36 compute-0 sudo[395784]: pam_unix(sudo:session): session closed for user root
Oct 07 14:41:36 compute-0 nova_compute[259550]: 2025-10-07 14:41:36.718 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:36 compute-0 podman[395833]: 2025-10-07 14:41:36.970555996 +0000 UTC m=+0.048136750 container create 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:37 compute-0 systemd[1]: Started libpod-conmon-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope.
Oct 07 14:41:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da373f4ff1907c25e28cd2a4fb9ff763074cfeaac1f7943c05c71d9f726084b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:41:37 compute-0 podman[395833]: 2025-10-07 14:41:36.945228663 +0000 UTC m=+0.022809437 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:41:37 compute-0 podman[395833]: 2025-10-07 14:41:37.060339263 +0000 UTC m=+0.137920017 container init 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:41:37 compute-0 podman[395833]: 2025-10-07 14:41:37.065691715 +0000 UTC m=+0.143272469 container start 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.071 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.071 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.085 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.086 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.086 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.087 2 WARNING nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with vm_state active and task_state deleting.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-deleted-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.089 2 INFO nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Neutron deleted interface ff77de20-1280-4c30-941d-c53ed7efcbe8; detaching it from the instance and deleting it from the info cache
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.network.neutron [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.092 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:37 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : New worker (395870) forked
Oct 07 14:41:37 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : Loading success.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.110 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Detach interface failed, port_id=ff77de20-1280-4c30-941d-c53ed7efcbe8, reason: Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.110 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.112 2 WARNING nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with vm_state active and task_state deleting.
Oct 07 14:41:37 compute-0 ceph-mon[74295]: pgmap v2375: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 07 14:41:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6579847, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Started (Lifecycle Event)
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.663 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.667 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.672 2 INFO nova.virt.libvirt.driver [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance spawned successfully.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.672 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.679 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.682 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.693 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.693 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.694 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.701 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.701 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6580865, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.701 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Paused (Lifecycle Event)
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.721 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.724 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6661854, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.724 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Resumed (Lifecycle Event)
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.754 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.755 2 DEBUG nova.network.neutron [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.759 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.763 2 INFO nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 8.35 seconds to spawn the instance on the hypervisor.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.763 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.775 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.785 2 INFO nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 3.22 seconds to deallocate network for instance.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.833 2 INFO nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 10.16 seconds to build instance.
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.840 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.840 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.851 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:37 compute-0 nova_compute[259550]: 2025-10-07 14:41:37.939 2 DEBUG oslo_concurrency.processutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904002790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.412 2 DEBUG oslo_concurrency.processutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.419 2 DEBUG nova.compute.provider_tree [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.444 2 DEBUG nova.scheduler.client.report [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.467 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 07 14:41:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2904002790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.509 2 INFO nova.scheduler.client.report [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.593 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.714 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.714 2 DEBUG nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:38 compute-0 nova_compute[259550]: 2025-10-07 14:41:38.714 2 WARNING nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received unexpected event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with vm_state active and task_state None.
Oct 07 14:41:39 compute-0 nova_compute[259550]: 2025-10-07 14:41:39.199 2 DEBUG nova.compute.manager [req-f12688cb-0402-4293-bf7d-3f6ff477a92a req-9037a809-a6c5-4719-90c5-764a152ea27a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-deleted-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:39 compute-0 ceph-mon[74295]: pgmap v2376: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 07 14:41:39 compute-0 nova_compute[259550]: 2025-10-07 14:41:39.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 246 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 07 14:41:41 compute-0 nova_compute[259550]: 2025-10-07 14:41:41.324 2 DEBUG nova.compute.manager [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:41 compute-0 nova_compute[259550]: 2025-10-07 14:41:41.324 2 DEBUG nova.compute.manager [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing instance network info cache due to event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:41 compute-0 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:41 compute-0 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:41 compute-0 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:41 compute-0 ceph-mon[74295]: pgmap v2377: 305 pgs: 305 active+clean; 246 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 07 14:41:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 07 14:41:42 compute-0 nova_compute[259550]: 2025-10-07 14:41:42.674 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updated VIF entry in instance network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:41:42 compute-0 nova_compute[259550]: 2025-10-07 14:41:42.674 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:42 compute-0 nova_compute[259550]: 2025-10-07 14:41:42.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:43 compute-0 nova_compute[259550]: 2025-10-07 14:41:43.109 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:43 compute-0 nova_compute[259550]: 2025-10-07 14:41:43.213 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848088.210345, 52ccd902-898f-4809-a231-be5760626c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:41:43 compute-0 nova_compute[259550]: 2025-10-07 14:41:43.213 2 INFO nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Stopped (Lifecycle Event)
Oct 07 14:41:43 compute-0 ceph-mon[74295]: pgmap v2378: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 07 14:41:43 compute-0 nova_compute[259550]: 2025-10-07 14:41:43.565 2 DEBUG nova.compute.manager [None req-f0759171-d8e3-4e2f-97b4-40a63f66c802 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 546 KiB/s wr, 118 op/s
Oct 07 14:41:44 compute-0 nova_compute[259550]: 2025-10-07 14:41:44.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:45 compute-0 ceph-mon[74295]: pgmap v2379: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 546 KiB/s wr, 118 op/s
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.944 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid ca309873-104c-4cd4-a609-686d61823e0f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.949 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:45 compute-0 nova_compute[259550]: 2025-10-07 14:41:45.949 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:46 compute-0 nova_compute[259550]: 2025-10-07 14:41:46.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:46 compute-0 nova_compute[259550]: 2025-10-07 14:41:46.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:46 compute-0 nova_compute[259550]: 2025-10-07 14:41:46.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 101 op/s
Oct 07 14:41:47 compute-0 ceph-mon[74295]: pgmap v2380: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 101 op/s
Oct 07 14:41:47 compute-0 nova_compute[259550]: 2025-10-07 14:41:47.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:47 compute-0 ovn_controller[151684]: 2025-10-07T14:41:47Z|01382|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct 07 14:41:47 compute-0 ovn_controller[151684]: 2025-10-07T14:41:47Z|01383|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:41:47 compute-0 ovn_controller[151684]: 2025-10-07T14:41:47Z|01384|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.000 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.002 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.003 2 INFO nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Terminating instance
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.004 2 DEBUG nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 kernel: tapbcef16bc-28 (unregistering): left promiscuous mode
Oct 07 14:41:48 compute-0 NetworkManager[44949]: <info>  [1759848108.0764] device (tapbcef16bc-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:48 compute-0 ovn_controller[151684]: 2025-10-07T14:41:48Z|01385|binding|INFO|Releasing lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 from this chassis (sb_readonly=0)
Oct 07 14:41:48 compute-0 ovn_controller[151684]: 2025-10-07T14:41:48Z|01386|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 down in Southbound
Oct 07 14:41:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 ovn_controller[151684]: 2025-10-07T14:41:48Z|01387|binding|INFO|Removing iface tapbcef16bc-28 ovn-installed in OVS
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 podman[395929]: 2025-10-07 14:41:48.125416776 +0000 UTC m=+0.094599305 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.129 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5f:df 10.100.0.27'], port_security=['fa:16:3e:17:5f:df 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'ca309873-104c-4cd4-a609-686d61823e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '84380b93-9bd8-46c6-8ce7-0eb2636c568f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=bcef16bc-28e8-4e97-a50b-7625a1917ee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Port bcef16bc-28e8-4e97-a50b-7625a1917ee5 in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 unbound from our chassis
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.132 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct 07 14:41:48 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 07 14:41:48 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 18.245s CPU time.
Oct 07 14:41:48 compute-0 systemd-machined[214580]: Machine qemu-157-instance-0000007c terminated.
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.150 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[873e8872-dab7-4938-a60e-8a04465ea047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 podman[395930]: 2025-10-07 14:41:48.150905875 +0000 UTC m=+0.117822314 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.181 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72bb1012-f9c3-437e-9cca-331f12739947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.184 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[183a4228-4fe6-4a01-a095-ae8ff54624a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.212 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[38777716-74f6-4277-a5f1-8e350bf58880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34971164-166e-4ff5-b556-303245e968e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 8, 'rx_bytes': 1222, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 8, 'rx_bytes': 1222, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 10, 'inoctets': 872, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 10, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 872, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 10, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395984, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.243 2 INFO nova.virt.libvirt.driver [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance destroyed successfully.
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.244 2 DEBUG nova.objects.instance [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54a79886-9e9e-424f-ac7c-b9f59fc07249]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859727, 'tstamp': 859727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859730, 'tstamp': 859730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.255 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.262 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.280 2 DEBUG nova.virt.libvirt.vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:37Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.280 2 DEBUG nova.network.os_vif_util [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.281 2 DEBUG nova.network.os_vif_util [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.281 2 DEBUG os_vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcef16bc-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.289 2 INFO os_vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28')
Oct 07 14:41:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 95 op/s
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.781 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848093.7793257, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.782 2 INFO nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Stopped (Lifecycle Event)
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.805 2 DEBUG nova.compute.manager [None req-235d8d94-1902-4ab5-9aaa-cf0411cf4dc0 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.929 2 INFO nova.virt.libvirt.driver [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deleting instance files /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f_del
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.929 2 INFO nova.virt.libvirt.driver [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deletion of /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f_del complete
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.989 2 INFO nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.990 2 DEBUG oslo.service.loopingcall [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.991 2 DEBUG nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:41:48 compute-0 nova_compute[259550]: 2025-10-07 14:41:48.991 2 DEBUG nova.network.neutron [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.020 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.020 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:49 compute-0 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:49 compute-0 ceph-mon[74295]: pgmap v2381: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 95 op/s
Oct 07 14:41:50 compute-0 nova_compute[259550]: 2025-10-07 14:41:50.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 211 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 649 KiB/s wr, 129 op/s
Oct 07 14:41:50 compute-0 nova_compute[259550]: 2025-10-07 14:41:50.727 2 DEBUG nova.network.neutron [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:50 compute-0 nova_compute[259550]: 2025-10-07 14:41:50.860 2 INFO nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 1.87 seconds to deallocate network for instance.
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.002 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.003 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.122 2 DEBUG oslo_concurrency.processutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:41:51 compute-0 ovn_controller[151684]: 2025-10-07T14:41:51Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:35:e2 10.100.0.14
Oct 07 14:41:51 compute-0 ovn_controller[151684]: 2025-10-07T14:41:51Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:35:e2 10.100.0.14
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.203 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.204 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.205 2 WARNING nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received unexpected event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with vm_state deleted and task_state None.
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.206 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-deleted-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:51 compute-0 ceph-mon[74295]: pgmap v2382: 305 pgs: 305 active+clean; 211 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 649 KiB/s wr, 129 op/s
Oct 07 14:41:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:41:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345765719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.599 2 DEBUG oslo_concurrency.processutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.605 2 DEBUG nova.compute.provider_tree [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.700 2 DEBUG nova.scheduler.client.report [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.822 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:51 compute-0 nova_compute[259550]: 2025-10-07 14:41:51.953 2 INFO nova.scheduler.client.report [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance ca309873-104c-4cd4-a609-686d61823e0f
Oct 07 14:41:52 compute-0 nova_compute[259550]: 2025-10-07 14:41:52.090 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 187 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Oct 07 14:41:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2345765719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:41:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:41:52 compute-0 nova_compute[259550]: 2025-10-07 14:41:52.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:53 compute-0 ceph-mon[74295]: pgmap v2383: 305 pgs: 305 active+clean; 187 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.888 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.889 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.918 2 DEBUG nova.objects.instance [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.954 2 DEBUG nova.virt.libvirt.vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.954 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.955 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.958 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.961 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.963 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Attempting to detach device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.963 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:98:75:78"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <target dev="tap359e9c20-ec"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]: </interface>
Oct 07 14:41:53 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.970 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.975 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface>not found in domain: <domain type='kvm' id='152'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <name>instance-00000079</name>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:41:53 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <system>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='serial'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='uuid'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </system>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <os>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </os>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <features>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </features>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk' index='2'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config' index='1'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:6d:ee:73'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target dev='tapb7bf5de8-3b'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:98:75:78'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target dev='tap359e9c20-ec'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='net1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       </target>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </console>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <video>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </video>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c211,c671</label>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c671</imagelabel>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:41:53 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:41:53 compute-0 nova_compute[259550]: </domain>
Oct 07 14:41:53 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.977 2 INFO nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the persistent domain config.
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.977 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] (1/8): Attempting to detach device tap359e9c20-ec with device alias net1 from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.978 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <mac address="fa:16:3e:98:75:78"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <model type="virtio"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <mtu size="1442"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]:   <target dev="tap359e9c20-ec"/>
Oct 07 14:41:53 compute-0 nova_compute[259550]: </interface>
Oct 07 14:41:53 compute-0 nova_compute[259550]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:53 compute-0 nova_compute[259550]: 2025-10-07 14:41:53.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:41:54 compute-0 kernel: tap359e9c20-ec (unregistering): left promiscuous mode
Oct 07 14:41:54 compute-0 NetworkManager[44949]: <info>  [1759848114.0761] device (tap359e9c20-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.085 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759848114.085082, d0b10640-5492-4d8f-8b94-a49a15b6e702 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 07 14:41:54 compute-0 ovn_controller[151684]: 2025-10-07T14:41:54Z|01388|binding|INFO|Releasing lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b from this chassis (sb_readonly=0)
Oct 07 14:41:54 compute-0 ovn_controller[151684]: 2025-10-07T14:41:54Z|01389|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b down in Southbound
Oct 07 14:41:54 compute-0 ovn_controller[151684]: 2025-10-07T14:41:54Z|01390|binding|INFO|Removing iface tap359e9c20-ec ovn-installed in OVS
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.089 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Start waiting for the detach event from libvirt for device tap359e9c20-ec with device alias net1 for instance d0b10640-5492-4d8f-8b94-a49a15b6e702 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.090 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.094 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface>not found in domain: <domain type='kvm' id='152'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <name>instance-00000079</name>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:41:54 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <memory unit='KiB'>131072</memory>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <vcpu placement='static'>1</vcpu>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <resource>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <partition>/machine</partition>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </resource>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <sysinfo type='smbios'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <system>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='manufacturer'>RDO</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='product'>OpenStack Compute</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='serial'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='uuid'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <entry name='family'>Virtual Machine</entry>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </system>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <os>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <boot dev='hd'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <smbios mode='sysinfo'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </os>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <features>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <vmcoreinfo state='on'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </features>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <cpu mode='custom' match='exact' check='full'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <vendor>AMD</vendor>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='x2apic'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc-deadline'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='hypervisor'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='tsc_adjust'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='spec-ctrl'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='stibp'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='arch-capabilities'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='ssbd'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='cmp_legacy'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='overflow-recov'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='succor'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='ibrs'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='amd-ssbd'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='virt-ssbd'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='lbrv'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='tsc-scale'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='vmcb-clean'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='flushbyasid'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='pause-filter'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='pfthreshold'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='rdctl-no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='mds-no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='pschange-mc-no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='gds-no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='rfds-no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='xsaves'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='svm'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='require' name='topoext'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='npt'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <feature policy='disable' name='nrip-save'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <clock offset='utc'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <timer name='pit' tickpolicy='delay'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <timer name='hpet' present='no'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <on_poweroff>destroy</on_poweroff>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <on_reboot>restart</on_reboot>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <on_crash>destroy</on_crash>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <disk type='network' device='disk'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk' index='2'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target dev='vda' bus='virtio'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='virtio-disk0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <disk type='network' device='cdrom'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <driver name='qemu' type='raw' cache='none'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <auth username='openstack'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:         <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config' index='1'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:         <host name='192.168.122.100' port='6789'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       </source>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target dev='sda' bus='sata'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <readonly/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='sata0-0-0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='0' model='pcie-root'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pcie.0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='1' port='0x10'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='2' port='0x11'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='3' port='0x12'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.3'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='4' port='0x13'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.4'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='5' port='0x14'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.5'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='6' port='0x15'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.6'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='7' port='0x16'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.7'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='8' port='0x17'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.8'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='9' port='0x18'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.9'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='10' port='0x19'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.10'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='11' port='0x1a'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.11'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='12' port='0x1b'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.12'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='13' port='0x1c'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.13'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='14' port='0x1d'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.14'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='15' port='0x1e'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.15'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='16' port='0x1f'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.16'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='17' port='0x20'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.17'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='18' port='0x21'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.18'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='19' port='0x22'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.19'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='20' port='0x23'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.20'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='21' port='0x24'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.21'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='22' port='0x25'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.22'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='23' port='0x26'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.23'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='24' port='0x27'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.24'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-root-port'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target chassis='25' port='0x28'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.25'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model name='pcie-pci-bridge'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='pci.26'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='usb'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <controller type='sata' index='0'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='ide'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </controller>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <interface type='ethernet'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <mac address='fa:16:3e:6d:ee:73'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target dev='tapb7bf5de8-3b'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model type='virtio'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <driver name='vhost' rx_queue_size='512'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <mtu size='1442'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='net0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <serial type='pty'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target type='isa-serial' port='0'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:         <model name='isa-serial'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       </target>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <console type='pty' tty='/dev/pts/0'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <source path='/dev/pts/0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <target type='serial' port='0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='serial0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </console>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <input type='tablet' bus='usb'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='input0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='usb' bus='0' port='1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <input type='mouse' bus='ps2'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='input1'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <input type='keyboard' bus='ps2'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='input2'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </input>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <listen type='address' address='::0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </graphics>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <audio id='1' type='none'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <video>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <model type='virtio' heads='1' primary='yes'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='video0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </video>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <watchdog model='itco' action='reset'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='watchdog0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </watchdog>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <memballoon model='virtio'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <stats period='10'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='balloon0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <rng model='virtio'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <backend model='random'>/dev/urandom</backend>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <alias name='rng0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <label>system_u:system_r:svirt_t:s0:c211,c671</label>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c671</imagelabel>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <label>+107:+107</label>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <imagelabel>+107:+107</imagelabel>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </seclabel>
Oct 07 14:41:54 compute-0 nova_compute[259550]: </domain>
Oct 07 14:41:54 compute-0 nova_compute[259550]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.096 2 INFO nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the live domain config.
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.097 2 DEBUG nova.virt.libvirt.vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.097 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.098 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.098 2 DEBUG os_vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359e9c20-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.108 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:75:78 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.108 2 INFO os_vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec')
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.109 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:creationTime>2025-10-07 14:41:54</nova:creationTime>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:flavor name="m1.nano">
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:memory>128</nova:memory>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:disk>1</nova:disk>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:swap>0</nova:swap>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:vcpus>1</nova:vcpus>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:flavor>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:owner>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:owner>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   <nova:ports>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct 07 14:41:54 compute-0 nova_compute[259550]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:41:54 compute-0 nova_compute[259550]:     </nova:port>
Oct 07 14:41:54 compute-0 nova_compute[259550]:   </nova:ports>
Oct 07 14:41:54 compute-0 nova_compute[259550]: </nova:instance>
Oct 07 14:41:54 compute-0 nova_compute[259550]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.110 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 unbound from our chassis
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.111 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cae6e154-5797-4df5-a9e8-545cc6ed0188, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b758d-a9d7-4f6a-811b-4e6258a807c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.112 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 namespace which is not needed anymore
Oct 07 14:41:54 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : haproxy version is 2.8.14-c23fe91
Oct 07 14:41:54 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : path to executable is /usr/sbin/haproxy
Oct 07 14:41:54 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [WARNING]  (391921) : Exiting Master process...
Oct 07 14:41:54 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [ALERT]    (391921) : Current worker (391923) exited with code 143 (Terminated)
Oct 07 14:41:54 compute-0 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [WARNING]  (391921) : All workers exited. Exiting... (0)
Oct 07 14:41:54 compute-0 systemd[1]: libpod-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope: Deactivated successfully.
Oct 07 14:41:54 compute-0 conmon[391917]: conmon 397322e542acc104d0b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope/container/memory.events
Oct 07 14:41:54 compute-0 podman[396061]: 2025-10-07 14:41:54.247087473 +0000 UTC m=+0.047789232 container died 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e-userdata-shm.mount: Deactivated successfully.
Oct 07 14:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a41d1aa38f8abe0fef319a4a94399d666f0b1094fa534bf42c28aa629de3903-merged.mount: Deactivated successfully.
Oct 07 14:41:54 compute-0 podman[396061]: 2025-10-07 14:41:54.292058578 +0000 UTC m=+0.092760337 container cleanup 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:41:54 compute-0 systemd[1]: libpod-conmon-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope: Deactivated successfully.
Oct 07 14:41:54 compute-0 podman[396087]: 2025-10-07 14:41:54.355788802 +0000 UTC m=+0.043153218 container remove 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.361 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e22b5d0f-3f2c-418d-99d0-1d699311b0fd]: (4, ('Tue Oct  7 02:41:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 (397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e)\n397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e\nTue Oct  7 02:41:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 (397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e)\n397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[082e81f2-1e9a-484b-8417-bb7d47aa0d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.364 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 kernel: tapcae6e154-50: left promiscuous mode
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.385 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de2ad207-e10d-4da2-8f6d-999a4d861dd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9db31a-f4e4-435d-a6b2-d03d8f5d16a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.403 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71fc7d3f-58fe-4df5-af9b-f72b489ae88c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7383bd17-d7fb-488e-bd6e-b20882ed9ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859704, 'reachable_time': 22272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396104, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.426 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:41:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.426 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[03cd7af5-8532-4c65-853b-c4d1a274fb7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dcae6e154\x2d5797\x2d4df5\x2da9e8\x2d545cc6ed0188.mount: Deactivated successfully.
Oct 07 14:41:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.973 2 DEBUG nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.975 2 DEBUG nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:54 compute-0 nova_compute[259550]: 2025-10-07 14:41:54.975 2 WARNING nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.
Oct 07 14:41:55 compute-0 nova_compute[259550]: 2025-10-07 14:41:55.286 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:55 compute-0 nova_compute[259550]: 2025-10-07 14:41:55.287 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:55 compute-0 nova_compute[259550]: 2025-10-07 14:41:55.288 2 DEBUG nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:41:55 compute-0 ceph-mon[74295]: pgmap v2384: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:41:55 compute-0 nova_compute[259550]: 2025-10-07 14:41:55.976 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:55 compute-0 nova_compute[259550]: 2025-10-07 14:41:55.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.780 2 INFO nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.780 2 DEBUG nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.831 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.872 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:41:56 compute-0 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.071 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.071 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.073 2 WARNING nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.073 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-deleted-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:57 compute-0 ovn_controller[151684]: 2025-10-07T14:41:57Z|01391|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct 07 14:41:57 compute-0 ovn_controller[151684]: 2025-10-07T14:41:57Z|01392|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:57 compute-0 ceph-mon[74295]: pgmap v2385: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 07 14:41:57 compute-0 nova_compute[259550]: 2025-10-07 14:41:57.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:41:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.536 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.537 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.537 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.538 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.538 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.539 2 INFO nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Terminating instance
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.540 2 DEBUG nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:41:58 compute-0 kernel: tapb7bf5de8-3b (unregistering): left promiscuous mode
Oct 07 14:41:58 compute-0 NetworkManager[44949]: <info>  [1759848118.6134] device (tapb7bf5de8-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 ovn_controller[151684]: 2025-10-07T14:41:58Z|01393|binding|INFO|Releasing lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 from this chassis (sb_readonly=0)
Oct 07 14:41:58 compute-0 ovn_controller[151684]: 2025-10-07T14:41:58Z|01394|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 down in Southbound
Oct 07 14:41:58 compute-0 ovn_controller[151684]: 2025-10-07T14:41:58Z|01395|binding|INFO|Removing iface tapb7bf5de8-3b ovn-installed in OVS
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.640 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ee:73 10.100.0.7'], port_security=['fa:16:3e:6d:ee:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8da4ff39-44c8-491e-a635-6be0569feae9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1c4ac2-8721-482c-b415-fdc398b5953a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7bf5de8-3ba0-43cd-a839-d8812cbe4276) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 in datapath 580b59e0-70f8-44c3-a35f-9c4f88691f96 unbound from our chassis
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.643 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 580b59e0-70f8-44c3-a35f-9c4f88691f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25fb8fb0-f778-4f80-b583-719bdf96f9c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.644 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 namespace which is not needed anymore
Oct 07 14:41:58 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 07 14:41:58 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 19.721s CPU time.
Oct 07 14:41:58 compute-0 systemd-machined[214580]: Machine qemu-152-instance-00000079 terminated.
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : haproxy version is 2.8.14-c23fe91
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : path to executable is /usr/sbin/haproxy
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : Exiting Master process...
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : Exiting Master process...
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [ALERT]    (390942) : Current worker (390944) exited with code 143 (Terminated)
Oct 07 14:41:58 compute-0 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : All workers exited. Exiting... (0)
Oct 07 14:41:58 compute-0 systemd[1]: libpod-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope: Deactivated successfully.
Oct 07 14:41:58 compute-0 podman[396128]: 2025-10-07 14:41:58.779399164 +0000 UTC m=+0.047316909 container died 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.794 2 INFO nova.virt.libvirt.driver [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance destroyed successfully.
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.795 2 DEBUG nova.objects.instance [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352-userdata-shm.mount: Deactivated successfully.
Oct 07 14:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea80f1f94cd7157cc848e1780ed5de7036ed725f787faa3cc6a7f92f3662579a-merged.mount: Deactivated successfully.
Oct 07 14:41:58 compute-0 podman[396128]: 2025-10-07 14:41:58.823224558 +0000 UTC m=+0.091142303 container cleanup 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.833 2 DEBUG nova.virt.libvirt.vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.833 2 DEBUG nova.network.os_vif_util [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.834 2 DEBUG nova.network.os_vif_util [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.834 2 DEBUG os_vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bf5de8-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:58 compute-0 systemd[1]: libpod-conmon-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope: Deactivated successfully.
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.842 2 INFO os_vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b')
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.884 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.884 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.885 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.886 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.887 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.887 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:41:58 compute-0 podman[396167]: 2025-10-07 14:41:58.887411595 +0000 UTC m=+0.043929780 container remove 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf69e57d-73f3-4510-8142-8c5fa5748157]: (4, ('Tue Oct  7 02:41:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 (150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352)\n150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352\nTue Oct  7 02:41:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 (150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352)\n150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.901 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42f585da-f860-4c78-a0f8-9955b5adb12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.902 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580b59e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:41:58 compute-0 kernel: tap580b59e0-70: left promiscuous mode
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 nova_compute[259550]: 2025-10-07 14:41:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.925 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[424c26c8-037c-4603-8b9d-66ccf8469d16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.961 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28ddae6e-e11c-496b-b4f9-e111c4065145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.962 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9c226ca5-8b2e-4c67-a4b9-142a719d5f18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.978 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cf8a42-3960-42e0-8bf7-7c2ffd076ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856489, 'reachable_time': 40316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396200, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.981 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:41:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.981 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[462adcf7-8b71-4a08-b9fb-1b723cc08aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:41:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d580b59e0\x2d70f8\x2d44c3\x2da35f\x2d9c4f88691f96.mount: Deactivated successfully.
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.297 2 INFO nova.virt.libvirt.driver [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deleting instance files /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702_del
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.299 2 INFO nova.virt.libvirt.driver [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deletion of /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702_del complete
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.315 2 DEBUG nova.compute.manager [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.315 2 DEBUG nova.compute.manager [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.316 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.316 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.317 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.399 2 INFO nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG oslo.service.loopingcall [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:41:59 compute-0 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG nova.network.neutron [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:41:59 compute-0 ceph-mon[74295]: pgmap v2386: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 07 14:42:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.967 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.967 2 DEBUG nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:00 compute-0 nova_compute[259550]: 2025-10-07 14:42:00.967 2 WARNING nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with vm_state active and task_state deleting.
Oct 07 14:42:01 compute-0 ceph-mon[74295]: pgmap v2387: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.813 2 DEBUG nova.network.neutron [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.933 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.934 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.996 2 DEBUG nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-deleted-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.996 2 INFO nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Neutron deleted interface b7bf5de8-3ba0-43cd-a839-d8812cbe4276; detaching it from the instance and deleting it from the info cache
Oct 07 14:42:01 compute-0 nova_compute[259550]: 2025-10-07 14:42:01.996 2 DEBUG nova.network.neutron [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.004 2 INFO nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 2.60 seconds to deallocate network for instance.
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.028 2 DEBUG nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Detach interface failed, port_id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276, reason: Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.051 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.099 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.100 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.317 2 DEBUG oslo_concurrency.processutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 180 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Oct 07 14:42:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976224580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.764 2 DEBUG oslo_concurrency.processutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.772 2 DEBUG nova.compute.provider_tree [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:42:02 compute-0 nova_compute[259550]: 2025-10-07 14:42:02.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.003 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.078 2 DEBUG nova.scheduler.client.report [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:42:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.240 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848108.2395742, ca309873-104c-4cd4-a609-686d61823e0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.240 2 INFO nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Stopped (Lifecycle Event)
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.291 2 DEBUG nova.compute.manager [None req-6126e4f9-ef14-4f5e-9eee-fa882e7065d6 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.306 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.325 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.351 2 INFO nova.scheduler.client.report [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance d0b10640-5492-4d8f-8b94-a49a15b6e702
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.464 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:03 compute-0 ceph-mon[74295]: pgmap v2388: 305 pgs: 305 active+clean; 180 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Oct 07 14:42:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/976224580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:03 compute-0 nova_compute[259550]: 2025-10-07 14:42:03.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:04 compute-0 nova_compute[259550]: 2025-10-07 14:42:04.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:04 compute-0 podman[396224]: 2025-10-07 14:42:04.090103183 +0000 UTC m=+0.072774915 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:42:04 compute-0 podman[396225]: 2025-10-07 14:42:04.0971037 +0000 UTC m=+0.077569984 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:42:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 872 KiB/s wr, 46 op/s
Oct 07 14:42:05 compute-0 ceph-mon[74295]: pgmap v2389: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 872 KiB/s wr, 46 op/s
Oct 07 14:42:05 compute-0 nova_compute[259550]: 2025-10-07 14:42:05.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797654740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.489 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.575 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.576 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:42:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2797654740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.728 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.729 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.942710876464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.730 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.730 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.841 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.842 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.843 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:42:06 compute-0 nova_compute[259550]: 2025-10-07 14:42:06.888 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895425505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.339 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.345 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.431 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.590 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.591 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.592 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:07 compute-0 ceph-mon[74295]: pgmap v2390: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 07 14:42:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2895425505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:07 compute-0 nova_compute[259550]: 2025-10-07 14:42:07.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 07 14:42:08 compute-0 nova_compute[259550]: 2025-10-07 14:42:08.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:09 compute-0 nova_compute[259550]: 2025-10-07 14:42:09.612 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:09 compute-0 ceph-mon[74295]: pgmap v2391: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 07 14:42:10 compute-0 ovn_controller[151684]: 2025-10-07T14:42:10Z|01396|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct 07 14:42:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 07 14:42:10 compute-0 nova_compute[259550]: 2025-10-07 14:42:10.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:10 compute-0 nova_compute[259550]: 2025-10-07 14:42:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:11 compute-0 ceph-mon[74295]: pgmap v2392: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 07 14:42:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 4.2 KiB/s wr, 25 op/s
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:12.533 161642 DEBUG eventlet.wsgi.server [-] (161642) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:12.535 161642 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: Accept: */*
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: Connection: close
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: Content-Type: text/plain
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: Host: 169.254.169.254
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: User-Agent: curl/7.84.0
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: X-Forwarded-For: 10.100.0.14
Oct 07 14:42:12 compute-0 ovn_metadata_agent[161531]: X-Ovn-Network-Id: 70c32d19-cc0d-46fc-a583-1be2bd26332c __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 07 14:42:12 compute-0 nova_compute[259550]: 2025-10-07 14:42:12.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:13 compute-0 ceph-mon[74295]: pgmap v2393: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 4.2 KiB/s wr, 25 op/s
Oct 07 14:42:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:42:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 36K writes, 147K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.82 writes per sync, written: 0.14 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4957 writes, 21K keys, 4957 commit groups, 1.0 writes per commit group, ingest: 24.44 MB, 0.04 MB/s
                                           Interval WAL: 4957 writes, 1908 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:42:13 compute-0 nova_compute[259550]: 2025-10-07 14:42:13.793 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848118.7921877, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:13 compute-0 nova_compute[259550]: 2025-10-07 14:42:13.793 2 INFO nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Stopped (Lifecycle Event)
Oct 07 14:42:13 compute-0 nova_compute[259550]: 2025-10-07 14:42:13.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:13 compute-0 nova_compute[259550]: 2025-10-07 14:42:13.940 2 DEBUG nova.compute.manager [None req-d4bcc752-126e-4887-947c-1b122386de4d - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.000 161642 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 07 14:42:14 compute-0 haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c[395870]: 10.100.0.14:32962 [07/Oct/2025:14:42:12.532] listener listener/metadata 0/0/0/1468/1468 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.001 161642 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4661798
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.123 161642 DEBUG eventlet.wsgi.server [-] (161642) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.124 161642 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: Accept: */*
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: Connection: close
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: Content-Length: 100
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: Content-Type: application/x-www-form-urlencoded
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: Host: 169.254.169.254
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: User-Agent: curl/7.84.0
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: X-Forwarded-For: 10.100.0.14
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: X-Ovn-Network-Id: 70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 07 14:42:14 compute-0 nova_compute[259550]: 2025-10-07 14:42:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.407 161642 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 07 14:42:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.407 161642 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2836077
Oct 07 14:42:14 compute-0 haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c[395870]: 10.100.0.14:32972 [07/Oct/2025:14:42:14.122] listener listener/metadata 0/0/0/285/285 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 07 14:42:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 2.8 KiB/s wr, 12 op/s
Oct 07 14:42:15 compute-0 ceph-mon[74295]: pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 2.8 KiB/s wr, 12 op/s
Oct 07 14:42:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.981 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.982 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.983 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.983 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.984 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.986 2 INFO nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Terminating instance
Oct 07 14:42:16 compute-0 nova_compute[259550]: 2025-10-07 14:42:16.988 2 DEBUG nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:42:17 compute-0 kernel: tapfeac1006-75 (unregistering): left promiscuous mode
Oct 07 14:42:17 compute-0 NetworkManager[44949]: <info>  [1759848137.0508] device (tapfeac1006-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:42:17 compute-0 ovn_controller[151684]: 2025-10-07T14:42:17Z|01397|binding|INFO|Releasing lport feac1006-7556-4dd6-9691-bc886a9410f3 from this chassis (sb_readonly=0)
Oct 07 14:42:17 compute-0 ovn_controller[151684]: 2025-10-07T14:42:17Z|01398|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 down in Southbound
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 ovn_controller[151684]: 2025-10-07T14:42:17Z|01399|binding|INFO|Removing iface tapfeac1006-75 ovn-installed in OVS
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.114 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:35:e2 10.100.0.14'], port_security=['fa:16:3e:03:35:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2290bbe4-d1ef-40a3-bca7-dc73e3a190e7 4672fac1-ae90-46b4-8fd3-3fd3255b2962', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d118d88-a678-4c5f-9b83-025d93ef3b4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=feac1006-7556-4dd6-9691-bc886a9410f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.115 161536 INFO neutron.agent.ovn.metadata.agent [-] Port feac1006-7556-4dd6-9691-bc886a9410f3 in datapath 70c32d19-cc0d-46fc-a583-1be2bd26332c unbound from our chassis
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.116 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 70c32d19-cc0d-46fc-a583-1be2bd26332c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:42:17 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.117 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06ac1956-6637-4ee7-9fbb-e5966fee2c96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c namespace which is not needed anymore
Oct 07 14:42:17 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 14.990s CPU time.
Oct 07 14:42:17 compute-0 systemd-machined[214580]: Machine qemu-159-instance-0000007e terminated.
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.227 2 INFO nova.virt.libvirt.driver [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance destroyed successfully.
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.228 2 DEBUG nova.objects.instance [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'resources' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:42:17 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : haproxy version is 2.8.14-c23fe91
Oct 07 14:42:17 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : path to executable is /usr/sbin/haproxy
Oct 07 14:42:17 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [WARNING]  (395853) : Exiting Master process...
Oct 07 14:42:17 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [ALERT]    (395853) : Current worker (395870) exited with code 143 (Terminated)
Oct 07 14:42:17 compute-0 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [WARNING]  (395853) : All workers exited. Exiting... (0)
Oct 07 14:42:17 compute-0 systemd[1]: libpod-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope: Deactivated successfully.
Oct 07 14:42:17 compute-0 podman[396333]: 2025-10-07 14:42:17.274724055 +0000 UTC m=+0.048898641 container died 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.293 2 DEBUG nova.virt.libvirt.vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:41:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.293 2 DEBUG nova.network.os_vif_util [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.294 2 DEBUG nova.network.os_vif_util [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.294 2 DEBUG os_vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfeac1006-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.304 2 INFO os_vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75')
Oct 07 14:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710-userdata-shm.mount: Deactivated successfully.
Oct 07 14:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8da373f4ff1907c25e28cd2a4fb9ff763074cfeaac1f7943c05c71d9f726084b-merged.mount: Deactivated successfully.
Oct 07 14:42:17 compute-0 podman[396333]: 2025-10-07 14:42:17.328096413 +0000 UTC m=+0.102271009 container cleanup 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:42:17 compute-0 systemd[1]: libpod-conmon-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope: Deactivated successfully.
Oct 07 14:42:17 compute-0 podman[396390]: 2025-10-07 14:42:17.406178979 +0000 UTC m=+0.052628260 container remove 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.411 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a95e3e25-d7b0-4723-bbe1-bbb878633aa2]: (4, ('Tue Oct  7 02:42:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c (106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710)\n106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710\nTue Oct  7 02:42:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c (106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710)\n106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[211b535f-f459-4363-8099-7ab29d711490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.414 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c32d19-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 kernel: tap70c32d19-c0: left promiscuous mode
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.431 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e53d13-3089-4fa5-86a1-4659072ff838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38e97fe1-be30-493c-ad57-27ef41bc713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9ae868-6964-42de-b223-06115a89f875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ffca0a-1345-44d4-87df-0e60f06cbffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867984, 'reachable_time': 23033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396407, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.493 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:42:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.493 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a62a5ab5-5e25-4a0b-8f84-20a145733dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d70c32d19\x2dcc0d\x2d46fc\x2da583\x2d1be2bd26332c.mount: Deactivated successfully.
Oct 07 14:42:17 compute-0 ceph-mon[74295]: pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.787 2 INFO nova.virt.libvirt.driver [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deleting instance files /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_del
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.788 2 INFO nova.virt.libvirt.driver [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deletion of /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_del complete
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.796 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.796 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.797 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:17 compute-0 nova_compute[259550]: 2025-10-07 14:42:17.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:18 compute-0 nova_compute[259550]: 2025-10-07 14:42:18.097 2 INFO nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 1.11 seconds to destroy the instance on the hypervisor.
Oct 07 14:42:18 compute-0 nova_compute[259550]: 2025-10-07 14:42:18.097 2 DEBUG oslo.service.loopingcall [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:42:18 compute-0 nova_compute[259550]: 2025-10-07 14:42:18.098 2 DEBUG nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:42:18 compute-0 nova_compute[259550]: 2025-10-07 14:42:18.098 2 DEBUG nova.network.neutron [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:42:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:42:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 162K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.78 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5937 writes, 24K keys, 5937 commit groups, 1.0 writes per commit group, ingest: 30.85 MB, 0.05 MB/s
                                           Interval WAL: 5937 writes, 2206 syncs, 2.69 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:42:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 07 14:42:19 compute-0 podman[396409]: 2025-10-07 14:42:19.066250355 +0000 UTC m=+0.054993664 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:42:19 compute-0 podman[396410]: 2025-10-07 14:42:19.114020414 +0000 UTC m=+0.095987893 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct 07 14:42:19 compute-0 ceph-mon[74295]: pgmap v2396: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.964 2 DEBUG nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:19 compute-0 nova_compute[259550]: 2025-10-07 14:42:19.964 2 WARNING nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received unexpected event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with vm_state active and task_state deleting.
Oct 07 14:42:20 compute-0 nova_compute[259550]: 2025-10-07 14:42:20.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:20.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:42:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:20.202 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:42:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 77 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 5.5 KiB/s wr, 24 op/s
Oct 07 14:42:20 compute-0 nova_compute[259550]: 2025-10-07 14:42:20.915 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:20 compute-0 nova_compute[259550]: 2025-10-07 14:42:20.916 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.027 2 DEBUG nova.network.neutron [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.074 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.476 2 DEBUG nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-deleted-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.477 2 INFO nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Neutron deleted interface feac1006-7556-4dd6-9691-bc886a9410f3; detaching it from the instance and deleting it from the info cache
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.477 2 DEBUG nova.network.neutron [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.483 2 INFO nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 3.38 seconds to deallocate network for instance.
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.646 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.647 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.655 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.656 2 INFO nova.compute.claims [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.662 2 DEBUG nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Detach interface failed, port_id=feac1006-7556-4dd6-9691-bc886a9410f3, reason: Instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:42:21 compute-0 ceph-mon[74295]: pgmap v2397: 305 pgs: 305 active+clean; 77 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 5.5 KiB/s wr, 24 op/s
Oct 07 14:42:21 compute-0 nova_compute[259550]: 2025-10-07 14:42:21.784 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.043 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:42:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 123K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.84 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5232 writes, 21K keys, 5232 commit groups, 1.0 writes per commit group, ingest: 25.78 MB, 0.04 MB/s
                                           Interval WAL: 5232 writes, 1997 syncs, 2.62 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:42:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032458394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.500 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.506 2 DEBUG nova.compute.provider_tree [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.587 2 DEBUG nova.scheduler.client.report [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:42:22
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'volumes']
Oct 07 14:42:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:42:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2032458394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:22 compute-0 nova_compute[259550]: 2025-10-07 14:42:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:42:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:42:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.261 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.263 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.265 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.336 2 DEBUG oslo_concurrency.processutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.448 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.449 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.596 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:42:23 compute-0 ceph-mon[74295]: pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 07 14:42:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688149553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.767 2 DEBUG oslo_concurrency.processutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.775 2 DEBUG nova.compute.provider_tree [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.801 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:42:23 compute-0 nova_compute[259550]: 2025-10-07 14:42:23.860 2 DEBUG nova.policy [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:42:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.104 2 DEBUG nova.scheduler.client.report [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:42:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:24.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.589 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.723 2 INFO nova.scheduler.client.report [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Deleted allocations for instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6
Oct 07 14:42:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3688149553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.873 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.875 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.875 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating image(s)
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.901 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.921 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.940 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:24 compute-0 nova_compute[259550]: 2025-10-07 14:42:24.944 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.018 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.019 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.019 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.020 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.037 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.040 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77918bef-8f72-4152-ac55-f4d4c98477ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.120 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.123 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully created port: c75fde3c-8461-4ed7-9c14-7f14f5794599 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.336 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77918bef-8f72-4152-ac55-f4d4c98477ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.388 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.474 2 DEBUG nova.objects.instance [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.525 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.526 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Ensure instance console log exists: /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.526 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.527 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.527 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:25 compute-0 nova_compute[259550]: 2025-10-07 14:42:25.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:25 compute-0 ceph-mon[74295]: pgmap v2399: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 07 14:42:26 compute-0 nova_compute[259550]: 2025-10-07 14:42:26.163 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully created port: dfe40ca6-700f-4101-8729-3d1ee103c5ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:42:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 746 KiB/s wr, 54 op/s
Oct 07 14:42:27 compute-0 nova_compute[259550]: 2025-10-07 14:42:27.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:27 compute-0 ceph-mon[74295]: pgmap v2400: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 746 KiB/s wr, 54 op/s
Oct 07 14:42:27 compute-0 nova_compute[259550]: 2025-10-07 14:42:27.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.317 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully updated port: c75fde3c-8461-4ed7-9c14-7f14f5794599 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:42:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 746 KiB/s wr, 52 op/s
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.633 2 DEBUG nova.compute.manager [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.634 2 DEBUG nova.compute.manager [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.634 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.635 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:28 compute-0 nova_compute[259550]: 2025-10-07 14:42:28.635 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:42:29 compute-0 nova_compute[259550]: 2025-10-07 14:42:29.141 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:42:29 compute-0 ceph-mon[74295]: pgmap v2401: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 746 KiB/s wr, 52 op/s
Oct 07 14:42:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 07 14:42:31 compute-0 ceph-mon[74295]: pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 07 14:42:31 compute-0 nova_compute[259550]: 2025-10-07 14:42:31.994 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.149 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.224 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848137.223097, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.225 2 INFO nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Stopped (Lifecycle Event)
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.270 2 DEBUG nova.compute.manager [None req-82647f94-e50b-43af-948a-d851099614a6 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.637 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully updated port: dfe40ca6-700f-4101-8729-3d1ee103c5ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.671 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.671 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.672 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:42:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:42:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:42:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:42:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:42:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.724 2 DEBUG nova.compute.manager [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.725 2 DEBUG nova.compute.manager [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-dfe40ca6-700f-4101-8729-3d1ee103c5ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.725 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:42:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:42:32 compute-0 nova_compute[259550]: 2025-10-07 14:42:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:33 compute-0 nova_compute[259550]: 2025-10-07 14:42:33.055 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:42:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:33 compute-0 ceph-mon[74295]: pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 07 14:42:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:42:35 compute-0 podman[396664]: 2025-10-07 14:42:35.073382799 +0000 UTC m=+0.063088918 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:42:35 compute-0 podman[396665]: 2025-10-07 14:42:35.075982278 +0000 UTC m=+0.056694437 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:42:35 compute-0 ceph-mon[74295]: pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:42:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:42:36 compute-0 sudo[396703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:36 compute-0 sudo[396703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:36 compute-0 sudo[396703]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:36 compute-0 sudo[396728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:42:36 compute-0 sudo[396728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:36 compute-0 sudo[396728]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:36 compute-0 sudo[396753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:36 compute-0 sudo[396753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:36 compute-0 sudo[396753]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:36 compute-0 sudo[396778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:42:36 compute-0 sudo[396778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:37 compute-0 nova_compute[259550]: 2025-10-07 14:42:37.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:37 compute-0 sudo[396778]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c6407494-f471-44f5-abe7-924bdd1c36d4 does not exist
Oct 07 14:42:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cc21e67f-afd9-4770-9e75-788bac4c7f59 does not exist
Oct 07 14:42:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 491cbff7-fe55-42a4-a792-ad6a4f6c8f1a does not exist
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:42:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:42:37 compute-0 sudo[396835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:37 compute-0 sudo[396835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:37 compute-0 sudo[396835]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:37 compute-0 sudo[396860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:42:37 compute-0 sudo[396860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:37 compute-0 sudo[396860]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:37 compute-0 sudo[396885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:37 compute-0 sudo[396885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:37 compute-0 sudo[396885]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:37 compute-0 sudo[396910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:42:37 compute-0 sudo[396910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:37 compute-0 ceph-mon[74295]: pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:42:37 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:42:37 compute-0 podman[396974]: 2025-10-07 14:42:37.9608702 +0000 UTC m=+0.038297409 container create 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 14:42:37 compute-0 nova_compute[259550]: 2025-10-07 14:42:37.965 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:37 compute-0 nova_compute[259550]: 2025-10-07 14:42:37.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:38 compute-0 systemd[1]: Started libpod-conmon-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope.
Oct 07 14:42:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:37.943724954 +0000 UTC m=+0.021152093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance network_info: |[{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.045 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port dfe40ca6-700f-4101-8729-3d1ee103c5ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.048 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start _get_guest_xml network_info=[{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.056 2 WARNING nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:38.062794099 +0000 UTC m=+0.140221238 container init 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.065 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.066 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.069 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.069 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:38.071449479 +0000 UTC m=+0.148876598 container start 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:38.074686306 +0000 UTC m=+0.152113455 container attach 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.075 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:38 compute-0 condescending_wing[396990]: 167 167
Oct 07 14:42:38 compute-0 systemd[1]: libpod-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope: Deactivated successfully.
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:38.078345912 +0000 UTC m=+0.155773051 container died 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f52cc011f3d1375c44dbf797c1e059d7dc1cc8937c6c3c7ba3f215f1c56a086e-merged.mount: Deactivated successfully.
Oct 07 14:42:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:38 compute-0 podman[396974]: 2025-10-07 14:42:38.114525504 +0000 UTC m=+0.191952623 container remove 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:42:38 compute-0 systemd[1]: libpod-conmon-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope: Deactivated successfully.
Oct 07 14:42:38 compute-0 podman[397034]: 2025-10-07 14:42:38.27164371 +0000 UTC m=+0.040698263 container create f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:42:38 compute-0 systemd[1]: Started libpod-conmon-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope.
Oct 07 14:42:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:38 compute-0 podman[397034]: 2025-10-07 14:42:38.251412243 +0000 UTC m=+0.020466796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:38 compute-0 podman[397034]: 2025-10-07 14:42:38.35402914 +0000 UTC m=+0.123083683 container init f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:42:38 compute-0 podman[397034]: 2025-10-07 14:42:38.362997719 +0000 UTC m=+0.132052252 container start f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:42:38 compute-0 podman[397034]: 2025-10-07 14:42:38.366357818 +0000 UTC m=+0.135412381 container attach f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:42:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct 07 14:42:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:42:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623183904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.524 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.549 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:38 compute-0 nova_compute[259550]: 2025-10-07 14:42:38.555 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1623183904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:42:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234686973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.023 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.026 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.026 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.027 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.029 2 DEBUG nova.objects.instance [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <uuid>77918bef-8f72-4152-ac55-f4d4c98477ec</uuid>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <name>instance-0000007f</name>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-649311904</nova:name>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:42:38</nova:creationTime>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:port uuid="c75fde3c-8461-4ed7-9c14-7f14f5794599">
Oct 07 14:42:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <nova:port uuid="dfe40ca6-700f-4101-8729-3d1ee103c5ea">
Oct 07 14:42:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb4:c556" ipVersion="6"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="serial">77918bef-8f72-4152-ac55-f4d4c98477ec</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="uuid">77918bef-8f72-4152-ac55-f4d4c98477ec</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/77918bef-8f72-4152-ac55-f4d4c98477ec_disk">
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config">
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:42:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:41:99:4d"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <target dev="tapc75fde3c-84"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b4:c5:56"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <target dev="tapdfe40ca6-70"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/console.log" append="off"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:42:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:42:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:42:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:42:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:42:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Preparing to wait for external event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Preparing to wait for external event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.061 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.061 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.062 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.062 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.063 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.063 2 DEBUG os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc75fde3c-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc75fde3c-84, col_values=(('external_ids', {'iface-id': 'c75fde3c-8461-4ed7-9c14-7f14f5794599', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:99:4d', 'vm-uuid': '77918bef-8f72-4152-ac55-f4d4c98477ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 NetworkManager[44949]: <info>  [1759848159.1217] manager: (tapc75fde3c-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.128 2 INFO os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84')
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.129 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.129 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.130 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.130 2 DEBUG os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfe40ca6-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdfe40ca6-70, col_values=(('external_ids', {'iface-id': 'dfe40ca6-700f-4101-8729-3d1ee103c5ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:c5:56', 'vm-uuid': '77918bef-8f72-4152-ac55-f4d4c98477ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 NetworkManager[44949]: <info>  [1759848159.1358] manager: (tapdfe40ca6-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.145 2 INFO os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70')
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.346 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:41:99:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:b4:c5:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.347 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Using config drive
Oct 07 14:42:39 compute-0 nova_compute[259550]: 2025-10-07 14:42:39.369 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:39 compute-0 youthful_banach[397051]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:42:39 compute-0 youthful_banach[397051]: --> relative data size: 1.0
Oct 07 14:42:39 compute-0 youthful_banach[397051]: --> All data devices are unavailable
Oct 07 14:42:39 compute-0 systemd[1]: libpod-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope: Deactivated successfully.
Oct 07 14:42:39 compute-0 podman[397145]: 2025-10-07 14:42:39.443136799 +0000 UTC m=+0.023360962 container died f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:42:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9-merged.mount: Deactivated successfully.
Oct 07 14:42:39 compute-0 podman[397145]: 2025-10-07 14:42:39.496014285 +0000 UTC m=+0.076238428 container remove f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:42:39 compute-0 systemd[1]: libpod-conmon-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope: Deactivated successfully.
Oct 07 14:42:39 compute-0 sudo[396910]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:39 compute-0 sudo[397160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:39 compute-0 sudo[397160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:39 compute-0 sudo[397160]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:39 compute-0 sudo[397185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:42:39 compute-0 sudo[397185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:39 compute-0 sudo[397185]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:39 compute-0 sudo[397210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:39 compute-0 sudo[397210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:39 compute-0 sudo[397210]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:39 compute-0 sudo[397235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:42:39 compute-0 sudo[397235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:39 compute-0 ceph-mon[74295]: pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct 07 14:42:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1234686973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.100274586 +0000 UTC m=+0.036666036 container create e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:42:40 compute-0 systemd[1]: Started libpod-conmon-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope.
Oct 07 14:42:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.176221785 +0000 UTC m=+0.112613255 container init e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.085827172 +0000 UTC m=+0.022218642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.183581941 +0000 UTC m=+0.119973391 container start e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.186895748 +0000 UTC m=+0.123287198 container attach e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:42:40 compute-0 nervous_ride[397316]: 167 167
Oct 07 14:42:40 compute-0 systemd[1]: libpod-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope: Deactivated successfully.
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.188676956 +0000 UTC m=+0.125068406 container died e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-63888330e78b18e36c98a32ab4360bd59558bb10a031344ab59a0136d8573f8d-merged.mount: Deactivated successfully.
Oct 07 14:42:40 compute-0 podman[397300]: 2025-10-07 14:42:40.226194723 +0000 UTC m=+0.162586163 container remove e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:42:40 compute-0 systemd[1]: libpod-conmon-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope: Deactivated successfully.
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.311 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating config drive at /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.315 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswympl1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:40 compute-0 podman[397340]: 2025-10-07 14:42:40.42021147 +0000 UTC m=+0.053562994 container create 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:42:40 compute-0 systemd[1]: Started libpod-conmon-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope.
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.472 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswympl1y" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:40 compute-0 podman[397340]: 2025-10-07 14:42:40.398065451 +0000 UTC m=+0.031416995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.504 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:40 compute-0 podman[397340]: 2025-10-07 14:42:40.507167101 +0000 UTC m=+0.140518655 container init 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.510 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:40 compute-0 podman[397340]: 2025-10-07 14:42:40.514002363 +0000 UTC m=+0.147353887 container start 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:42:40 compute-0 podman[397340]: 2025-10-07 14:42:40.517257759 +0000 UTC m=+0.150609313 container attach 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.711 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.712 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deleting local config drive /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config because it was imported into RBD.
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.7867] manager: (tapc75fde3c-84): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.8131] manager: (tapdfe40ca6-70): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct 07 14:42:40 compute-0 systemd-udevd[397411]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:42:40 compute-0 systemd-udevd[397412]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:42:40 compute-0 systemd-machined[214580]: New machine qemu-160-instance-0000007f.
Oct 07 14:42:40 compute-0 kernel: tapdfe40ca6-70: entered promiscuous mode
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.8603] device (tapdfe40ca6-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:42:40 compute-0 kernel: tapc75fde3c-84: entered promiscuous mode
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.8654] device (tapc75fde3c-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.8675] device (tapdfe40ca6-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:42:40 compute-0 NetworkManager[44949]: <info>  [1759848160.8687] device (tapc75fde3c-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:42:40 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01400|binding|INFO|Claiming lport c75fde3c-8461-4ed7-9c14-7f14f5794599 for this chassis.
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01401|binding|INFO|c75fde3c-8461-4ed7-9c14-7f14f5794599: Claiming fa:16:3e:41:99:4d 10.100.0.4
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01402|binding|INFO|Claiming lport dfe40ca6-700f-4101-8729-3d1ee103c5ea for this chassis.
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01403|binding|INFO|dfe40ca6-700f-4101-8729-3d1ee103c5ea: Claiming fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01404|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 ovn-installed in OVS
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01405|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea ovn-installed in OVS
Oct 07 14:42:40 compute-0 nova_compute[259550]: 2025-10-07 14:42:40.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01406|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 up in Southbound
Oct 07 14:42:40 compute-0 ovn_controller[151684]: 2025-10-07T14:42:40Z|01407|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea up in Southbound
Oct 07 14:42:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.989 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:99:4d 10.100.0.4'], port_security=['fa:16:3e:41:99:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c75fde3c-8461-4ed7-9c14-7f14f5794599) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:42:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.990 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], port_security=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:c556/64', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=dfe40ca6-700f-4101-8729-3d1ee103c5ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:42:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.991 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c75fde3c-8461-4ed7-9c14-7f14f5794599 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 bound to our chassis
Oct 07 14:42:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.992 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3dbb756-c930-4760-8995-d5ce64926df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.004 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb059ee7-31 in ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.006 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb059ee7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.006 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56c42539-a8a6-486a-89ca-5e57355fc671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb9cee6-8f55-4387-85d9-ac9e3a7e3bbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.018 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a8f79e-1435-452f-b11c-06f6d8e7df8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.040 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89c8372d-7a18-48b2-a4d3-d181a727b429]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.067 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af4c3791-aa4f-4573-b0be-c1e7997c6aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.074 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fccc7af3-d26e-4dee-adf0-04ad546d7bda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 NetworkManager[44949]: <info>  [1759848161.0755] manager: (tapbb059ee7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[57782a0d-d7ac-497e-83bf-50570beb8234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.108 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c74eaa4-f514-4c0b-ade8-b49f09855cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 NetworkManager[44949]: <info>  [1759848161.1315] device (tapbb059ee7-30): carrier: link connected
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.136 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aea42d31-9407-4e3b-a3d4-66e17df74f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.154 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbd8ebb-3893-46a3-9e8c-d8be1d6f894d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397450, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.170 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c854e420-85f2-465d-a4b6-9351d1e8af24]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:3bd9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874469, 'tstamp': 874469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397451, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.187 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12a41738-11d6-403d-8ca2-b966c24017b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397452, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.224 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93bdc264-e0b0-4e18-ae7f-b5f4355cffa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.283 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[390691e1-7075-4540-95e5-314d99144cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.286 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:41 compute-0 kernel: tapbb059ee7-30: entered promiscuous mode
Oct 07 14:42:41 compute-0 NetworkManager[44949]: <info>  [1759848161.3328] manager: (tapbb059ee7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.333 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:41 compute-0 ovn_controller[151684]: 2025-10-07T14:42:41Z|01408|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.350 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6920d02-f04b-49f6-a62a-77042591cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.352 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.353 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'env', 'PROCESS_TAG=haproxy-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb059ee7-3091-491e-8da2-c9bd1da0f922.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]: {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     "0": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "devices": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "/dev/loop3"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             ],
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_name": "ceph_lv0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_size": "21470642176",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "name": "ceph_lv0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "tags": {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_name": "ceph",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.crush_device_class": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.encrypted": "0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_id": "0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.vdo": "0"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             },
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "vg_name": "ceph_vg0"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         }
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     ],
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     "1": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "devices": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "/dev/loop4"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             ],
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_name": "ceph_lv1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_size": "21470642176",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "name": "ceph_lv1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "tags": {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_name": "ceph",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.crush_device_class": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.encrypted": "0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_id": "1",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.vdo": "0"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             },
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "vg_name": "ceph_vg1"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         }
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     ],
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     "2": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "devices": [
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "/dev/loop5"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             ],
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_name": "ceph_lv2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_size": "21470642176",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "name": "ceph_lv2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "tags": {
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.cluster_name": "ceph",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.crush_device_class": "",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.encrypted": "0",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osd_id": "2",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:                 "ceph.vdo": "0"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             },
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "type": "block",
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:             "vg_name": "ceph_vg2"
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:         }
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]:     ]
Oct 07 14:42:41 compute-0 gifted_meninsky[397358]: }
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.421 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port dfe40ca6-700f-4101-8729-3d1ee103c5ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.421 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:41 compute-0 systemd[1]: libpod-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope: Deactivated successfully.
Oct 07 14:42:41 compute-0 podman[397340]: 2025-10-07 14:42:41.45366263 +0000 UTC m=+1.087014154 container died 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.455 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3-merged.mount: Deactivated successfully.
Oct 07 14:42:41 compute-0 podman[397340]: 2025-10-07 14:42:41.507608824 +0000 UTC m=+1.140960348 container remove 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:42:41 compute-0 systemd[1]: libpod-conmon-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope: Deactivated successfully.
Oct 07 14:42:41 compute-0 sudo[397235]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:41 compute-0 sudo[397520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:41 compute-0 sudo[397520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:41 compute-0 sudo[397520]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:41 compute-0 sudo[397550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:42:41 compute-0 sudo[397550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:41 compute-0 sudo[397550]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:41 compute-0 sudo[397591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:41 compute-0 sudo[397591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:41 compute-0 sudo[397591]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:41 compute-0 podman[397590]: 2025-10-07 14:42:41.739422195 +0000 UTC m=+0.048380696 container create be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:42:41 compute-0 systemd[1]: Started libpod-conmon-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope.
Oct 07 14:42:41 compute-0 sudo[397628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:42:41 compute-0 sudo[397628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aade3835522dc3103f13522e66cfc9c234f96ccd29b80c7bd03443b1f8c3128d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:41 compute-0 podman[397590]: 2025-10-07 14:42:41.713118266 +0000 UTC m=+0.022076767 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:42:41 compute-0 podman[397590]: 2025-10-07 14:42:41.81372754 +0000 UTC m=+0.122686071 container init be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:42:41 compute-0 podman[397590]: 2025-10-07 14:42:41.819197886 +0000 UTC m=+0.128156387 container start be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:42:41 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : New worker (397662) forked
Oct 07 14:42:41 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : Loading success.
Oct 07 14:42:41 compute-0 ceph-mon[74295]: pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.872 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848161.872121, 77918bef-8f72-4152-ac55-f4d4c98477ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:41 compute-0 nova_compute[259550]: 2025-10-07 14:42:41.873 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Started (Lifecycle Event)
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.890 161536 INFO neutron.agent.ovn.metadata.agent [-] Port dfe40ca6-700f-4101-8729-3d1ee103c5ea in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.892 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7beac18-84d9-467c-8a48-960f970ae49b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.906 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape6e769bc-21 in ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.908 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape6e769bc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.908 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[414498b8-dcc4-45a5-a9cf-d33b5d75904b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3fb50d-5105-4989-a263-5d76460ce24b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.924 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b16feba9-5156-452b-8964-6de5f1b4ed24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4e81cf62-61a2-48f9-bf81-882a3006ed2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.964 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5d4c27-f837-484c-89db-d9498dbd75d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.970 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3566648b-e272-42bf-b864-73ef6d7b3dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:41 compute-0 NetworkManager[44949]: <info>  [1759848161.9718] manager: (tape6e769bc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/561)
Oct 07 14:42:41 compute-0 systemd-udevd[397437]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.001 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef4014a-c60f-498e-ad19-80c2885f4c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.004 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[578fb7f2-63e8-4235-9d55-ae678c362058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.018 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.022 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848161.8722425, 77918bef-8f72-4152-ac55-f4d4c98477ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.023 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Paused (Lifecycle Event)
Oct 07 14:42:42 compute-0 NetworkManager[44949]: <info>  [1759848162.0322] device (tape6e769bc-20): carrier: link connected
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.037 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5c33574e-b314-457c-9be4-23d87f063b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.055 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[174e3ffc-bd0e-4947-b75f-ba1263fc2a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397719, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.075 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[907bbd52-e615-40bd-bd60-932696f980d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:ce9b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874559, 'tstamp': 874559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397722, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.091 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c105ec6-f59b-420e-8a0b-788fa0b48450]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397725, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.121487391 +0000 UTC m=+0.036046550 container create 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb43d3c-877c-4383-a047-f22d4060f989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69f72715-d162-46c2-8b09-4617bd4491f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:42 compute-0 systemd[1]: Started libpod-conmon-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope.
Oct 07 14:42:42 compute-0 NetworkManager[44949]: <info>  [1759848162.1578] manager: (tape6e769bc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:42 compute-0 kernel: tape6e769bc-20: entered promiscuous mode
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.161 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:42 compute-0 ovn_controller[151684]: 2025-10-07T14:42:42Z|01409|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.164 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.165 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[068b878d-1f95-4878-a686-1329f0890c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.165 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:42:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.166 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'env', 'PROCESS_TAG=haproxy-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e6e769bc-2b33-4210-8062-fbc8d16f9127.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.106341708 +0000 UTC m=+0.020900867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.214907384 +0000 UTC m=+0.129466543 container init 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.223750499 +0000 UTC m=+0.138309638 container start 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.227217311 +0000 UTC m=+0.141776470 container attach 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:42:42 compute-0 keen_kepler[397744]: 167 167
Oct 07 14:42:42 compute-0 systemd[1]: libpod-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope: Deactivated successfully.
Oct 07 14:42:42 compute-0 conmon[397744]: conmon 822ed29485fcf00478ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope/container/memory.events
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.232023319 +0000 UTC m=+0.146582458 container died 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf47402c6d7a6b56f542f30045e449afb376d866fcd9ecd99a267203d8d19875-merged.mount: Deactivated successfully.
Oct 07 14:42:42 compute-0 podman[397723]: 2025-10-07 14:42:42.26819677 +0000 UTC m=+0.182755909 container remove 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:42:42 compute-0 systemd[1]: libpod-conmon-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope: Deactivated successfully.
Oct 07 14:42:42 compute-0 podman[397774]: 2025-10-07 14:42:42.470393864 +0000 UTC m=+0.053023610 container create 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:42:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 170 B/s wr, 2 op/s
Oct 07 14:42:42 compute-0 systemd[1]: Started libpod-conmon-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope.
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.523 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:42:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:42 compute-0 podman[397804]: 2025-10-07 14:42:42.538788923 +0000 UTC m=+0.054796568 container create 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:42 compute-0 podman[397774]: 2025-10-07 14:42:42.448193584 +0000 UTC m=+0.030823390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:42 compute-0 podman[397774]: 2025-10-07 14:42:42.565278687 +0000 UTC m=+0.147908453 container init 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:42:42 compute-0 podman[397774]: 2025-10-07 14:42:42.573420773 +0000 UTC m=+0.156050529 container start 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:42:42 compute-0 podman[397774]: 2025-10-07 14:42:42.576986958 +0000 UTC m=+0.159616734 container attach 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:42:42 compute-0 systemd[1]: Started libpod-conmon-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope.
Oct 07 14:42:42 compute-0 podman[397804]: 2025-10-07 14:42:42.509838093 +0000 UTC m=+0.025845768 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:42:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975fa376099815760edd3ac270db67c4e9de90d314e2f36412b98c8c8fb0e428/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:42 compute-0 podman[397804]: 2025-10-07 14:42:42.626917465 +0000 UTC m=+0.142925130 container init 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:42:42 compute-0 podman[397804]: 2025-10-07 14:42:42.632003741 +0000 UTC m=+0.148011386 container start 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:42:42 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : New worker (397833) forked
Oct 07 14:42:42 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : Loading success.
Oct 07 14:42:42 compute-0 nova_compute[259550]: 2025-10-07 14:42:42.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]: {
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_id": 2,
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "type": "bluestore"
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     },
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_id": 1,
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "type": "bluestore"
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     },
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_id": 0,
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:         "type": "bluestore"
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]:     }
Oct 07 14:42:43 compute-0 agitated_grothendieck[397820]: }
Oct 07 14:42:43 compute-0 systemd[1]: libpod-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope: Deactivated successfully.
Oct 07 14:42:43 compute-0 podman[397774]: 2025-10-07 14:42:43.526851706 +0000 UTC m=+1.109481452 container died 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac-merged.mount: Deactivated successfully.
Oct 07 14:42:43 compute-0 podman[397774]: 2025-10-07 14:42:43.589784149 +0000 UTC m=+1.172413895 container remove 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:42:43 compute-0 systemd[1]: libpod-conmon-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope: Deactivated successfully.
Oct 07 14:42:43 compute-0 sudo[397628]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:42:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:42:43 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d5d97e10-25a7-441a-991f-906f8f7bbff2 does not exist
Oct 07 14:42:43 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5651c734-f50e-4b2e-9418-2d87fc85e6ff does not exist
Oct 07 14:42:43 compute-0 sudo[397884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:42:43 compute-0 sudo[397884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:43 compute-0 sudo[397884]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:43 compute-0 sudo[397909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:42:43 compute-0 sudo[397909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:42:43 compute-0 sudo[397909]: pam_unix(sudo:session): session closed for user root
Oct 07 14:42:43 compute-0 ceph-mon[74295]: pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 170 B/s wr, 2 op/s
Oct 07 14:42:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:43 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.069 2 DEBUG nova.compute.manager [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG nova.compute.manager [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Processing event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.175 2 DEBUG nova.compute.manager [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.176 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG nova.compute.manager [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Processing event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.178 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.185 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848164.1854818, 77918bef-8f72-4152-ac55-f4d4c98477ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.186 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Resumed (Lifecycle Event)
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.189 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.193 2 INFO nova.virt.libvirt.driver [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance spawned successfully.
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.193 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.287 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.288 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.288 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.294 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.297 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.449 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:42:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.835 2 INFO nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 19.96 seconds to spawn the instance on the hypervisor.
Oct 07 14:42:44 compute-0 nova_compute[259550]: 2025-10-07 14:42:44.835 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:45 compute-0 nova_compute[259550]: 2025-10-07 14:42:45.282 2 INFO nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 23.68 seconds to build instance.
Oct 07 14:42:45 compute-0 ceph-mon[74295]: pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 07 14:42:45 compute-0 nova_compute[259550]: 2025-10-07 14:42:45.912 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.366 2 DEBUG nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.369 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.370 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.371 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.371 2 DEBUG nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.372 2 WARNING nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with vm_state active and task_state None.
Oct 07 14:42:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.604 2 DEBUG nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.606 2 DEBUG nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:46 compute-0 nova_compute[259550]: 2025-10-07 14:42:46.606 2 WARNING nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with vm_state active and task_state None.
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.533 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.534 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.588 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.785 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.785 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.797 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:42:47 compute-0 nova_compute[259550]: 2025-10-07 14:42:47.797 2 INFO nova.compute.claims [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:42:47 compute-0 ceph-mon[74295]: pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.022 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:42:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089006158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.481 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.492 2 DEBUG nova.compute.provider_tree [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:42:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.547 2 DEBUG nova.scheduler.client.report [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.610 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.611 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.750 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.751 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:42:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2089006158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:42:48 compute-0 nova_compute[259550]: 2025-10-07 14:42:48.945 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.146 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:42:49 compute-0 ceph-mon[74295]: pgmap v2411: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.914 2 DEBUG nova.policy [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.971 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.975 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:42:49 compute-0 nova_compute[259550]: 2025-10-07 14:42:49.976 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating image(s)
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.022 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:50 compute-0 podman[397957]: 2025-10-07 14:42:50.078991633 +0000 UTC m=+0.064675970 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.081 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.117 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.122 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:50 compute-0 podman[397971]: 2025-10-07 14:42:50.128116519 +0000 UTC m=+0.109972683 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.206 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.208 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.209 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.209 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.234 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.238 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4954e98d-461f-46e8-9f41-c81930c02cff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.575 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4954e98d-461f-46e8-9f41-c81930c02cff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.645 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.739 2 DEBUG nova.objects.instance [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.760 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Ensure instance console log exists: /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.762 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:50 compute-0 NetworkManager[44949]: <info>  [1759848170.7719] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct 07 14:42:50 compute-0 NetworkManager[44949]: <info>  [1759848170.7730] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:50 compute-0 ovn_controller[151684]: 2025-10-07T14:42:50Z|01410|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct 07 14:42:50 compute-0 ovn_controller[151684]: 2025-10-07T14:42:50Z|01411|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct 07 14:42:50 compute-0 nova_compute[259550]: 2025-10-07 14:42:50.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:51 compute-0 ceph-mon[74295]: pgmap v2412: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:42:51 compute-0 nova_compute[259550]: 2025-10-07 14:42:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.173 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Successfully updated port: 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.192 2 DEBUG nova.compute.manager [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG nova.compute.manager [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.257 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.257 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.258 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:42:52 compute-0 nova_compute[259550]: 2025-10-07 14:42:52.440 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 110 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 75 op/s
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:42:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:42:53 compute-0 nova_compute[259550]: 2025-10-07 14:42:53.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:53 compute-0 ceph-mon[74295]: pgmap v2413: 305 pgs: 305 active+clean; 110 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 75 op/s
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.278 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.301 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.302 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance network_info: |[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start _get_guest_xml network_info=[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.309 2 WARNING nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.316 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.317 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.321 2 DEBUG nova.compute.manager [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG nova.compute.manager [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.323 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.324 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.328 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.328 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.331 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 134 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 07 14:42:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:42:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255972073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.841 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.862 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.866 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.908 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.909 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2255972073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:54 compute-0 nova_compute[259550]: 2025-10-07 14:42:54.939 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:42:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338211284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.396 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.397 2 DEBUG nova.virt.libvirt.vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:49Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.398 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.398 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.399 2 DEBUG nova.objects.instance [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.458 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <uuid>4954e98d-461f-46e8-9f41-c81930c02cff</uuid>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <name>instance-00000080</name>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-20568441</nova:name>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:42:54</nova:creationTime>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <nova:port uuid="8e464ebe-1be0-4a3a-b8df-c088ec663aa2">
Oct 07 14:42:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="serial">4954e98d-461f-46e8-9f41-c81930c02cff</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="uuid">4954e98d-461f-46e8-9f41-c81930c02cff</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4954e98d-461f-46e8-9f41-c81930c02cff_disk">
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/4954e98d-461f-46e8-9f41-c81930c02cff_disk.config">
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:42:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b7:10:26"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <target dev="tap8e464ebe-1b"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/console.log" append="off"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:42:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:42:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:42:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:42:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:42:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.459 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Preparing to wait for external event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.459 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.460 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.460 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.virt.libvirt.vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:49Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.462 2 DEBUG os_vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e464ebe-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e464ebe-1b, col_values=(('external_ids', {'iface-id': '8e464ebe-1be0-4a3a-b8df-c088ec663aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:10:26', 'vm-uuid': '4954e98d-461f-46e8-9f41-c81930c02cff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:55 compute-0 NetworkManager[44949]: <info>  [1759848175.4704] manager: (tap8e464ebe-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.482 2 INFO os_vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:b7:10:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.551 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Using config drive
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.576 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:55 compute-0 ceph-mon[74295]: pgmap v2414: 305 pgs: 305 active+clean; 134 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 07 14:42:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/338211284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:55 compute-0 nova_compute[259550]: 2025-10-07 14:42:55.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.199 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating config drive at /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.205 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtw0xk2z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.300 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.301 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.346 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtw0xk2z" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.373 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.376 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.414 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:42:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.569 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.570 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deleting local config drive /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config because it was imported into RBD.
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:99:4d 10.100.0.4
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:99:4d 10.100.0.4
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.6332] manager: (tap8e464ebe-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Oct 07 14:42:56 compute-0 kernel: tap8e464ebe-1b: entered promiscuous mode
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|01412|binding|INFO|Claiming lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for this chassis.
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|01413|binding|INFO|8e464ebe-1be0-4a3a-b8df-c088ec663aa2: Claiming fa:16:3e:b7:10:26 10.100.0.12
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|01414|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 ovn-installed in OVS
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|01415|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 up in Southbound
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.656 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4954e98d-461f-46e8-9f41-c81930c02cff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.658 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 bound to our chassis
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.659 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:42:56 compute-0 systemd-udevd[398302]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f80f9fac-7564-49c4-9a69-30c3067820c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.676 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aea4318-41 in ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.679 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aea4318-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6a49a3-1f1a-4358-a6e8-ba42dcaa101a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff08458-b7dd-4e22-9e7e-1867e7694f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.6909] device (tap8e464ebe-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.6921] device (tap8e464ebe-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:42:56 compute-0 systemd-machined[214580]: New machine qemu-161-instance-00000080.
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.695 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd73192-d170-4295-aff5-22d50cfa9d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.721 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cab42816-0ad3-42a4-a52f-b73c6b68811f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.754 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[73311364-256b-4cbb-af64-833bd0b2b3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.7604] manager: (tap7aea4318-40): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb150253-42db-48c7-8c50-9bf55a2de0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3ecac053-6828-41d5-9d48-6741c9ee18c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3b9af2-bc65-4903-9e62-996fb66c831e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.8292] device (tap7aea4318-40): carrier: link connected
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[242185cc-6a53-4ba0-940e-914ecbdddcb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47b66eaa-4033-40d2-9b68-8b2a9db10f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876039, 'reachable_time': 27562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398336, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb3268c-af1d-4868-884d-9e06c04f4870]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:73fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 876039, 'tstamp': 876039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398337, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.885 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89fda78e-02e0-4c7f-be68-86b003959428]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876039, 'reachable_time': 27562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398338, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.917 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75d10307-1d77-4b65-9754-0ceb2c3ed8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.985 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ca810a-de01-4683-930d-6efb1c388c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aea4318-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 NetworkManager[44949]: <info>  [1759848176.9898] manager: (tap7aea4318-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 07 14:42:56 compute-0 kernel: tap7aea4318-40: entered promiscuous mode
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.995 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aea4318-40, col_values=(('external_ids', {'iface-id': '51e9d814-5370-44c9-809c-78ed73cc4b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:42:56 compute-0 nova_compute[259550]: 2025-10-07 14:42:56.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:56 compute-0 ovn_controller[151684]: 2025-10-07T14:42:56Z|01416|binding|INFO|Releasing lport 51e9d814-5370-44c9-809c-78ed73cc4b32 from this chassis (sb_readonly=0)
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.011 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56fb61c2-4bf9-4300-8669-b9641d57df9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.013 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:42:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.015 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'env', 'PROCESS_TAG=haproxy-7aea4318-48a4-451b-b36b-e364946c1859', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aea4318-48a4-451b-b36b-e364946c1859.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG nova.compute.manager [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.094 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.094 2 DEBUG nova.compute.manager [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Processing event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:42:57 compute-0 podman[398412]: 2025-10-07 14:42:57.391100542 +0000 UTC m=+0.047490493 container create 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:42:57 compute-0 systemd[1]: Started libpod-conmon-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope.
Oct 07 14:42:57 compute-0 podman[398412]: 2025-10-07 14:42:57.368044519 +0000 UTC m=+0.024434480 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:42:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26fbd25cec77d0b294d26659ee0c2598b49ebe94e1d14db13af23b421b9aafa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:42:57 compute-0 podman[398412]: 2025-10-07 14:42:57.50239517 +0000 UTC m=+0.158785141 container init 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:42:57 compute-0 podman[398412]: 2025-10-07 14:42:57.508272586 +0000 UTC m=+0.164662527 container start 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:42:57 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : New worker (398433) forked
Oct 07 14:42:57 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : Loading success.
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.757 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7567456, 4954e98d-461f-46e8-9f41-c81930c02cff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.757 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Started (Lifecycle Event)
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.759 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.763 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.766 2 INFO nova.virt.libvirt.driver [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance spawned successfully.
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.766 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.787 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.845 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.845 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7575202, 4954e98d-461f-46e8-9f41-c81930c02cff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.845 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Paused (Lifecycle Event)
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.899 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.902 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7619922, 4954e98d-461f-46e8-9f41-c81930c02cff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.902 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Resumed (Lifecycle Event)
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.907 2 INFO nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 7.94 seconds to spawn the instance on the hypervisor.
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.907 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:57 compute-0 ceph-mon[74295]: pgmap v2415: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.965 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.968 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:57 compute-0 nova_compute[259550]: 2025-10-07 14:42:57.990 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:42:58 compute-0 nova_compute[259550]: 2025-10-07 14:42:58.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:42:58 compute-0 nova_compute[259550]: 2025-10-07 14:42:58.010 2 INFO nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 10.25 seconds to build instance.
Oct 07 14:42:58 compute-0 nova_compute[259550]: 2025-10-07 14:42:58.029 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:42:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 07 14:42:58 compute-0 nova_compute[259550]: 2025-10-07 14:42:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.171 2 DEBUG nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.173 2 DEBUG nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:42:59 compute-0 nova_compute[259550]: 2025-10-07 14:42:59.173 2 WARNING nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state None.
Oct 07 14:42:59 compute-0 ceph-mon[74295]: pgmap v2416: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 07 14:43:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:00 compute-0 nova_compute[259550]: 2025-10-07 14:43:00.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 166 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.400 2 DEBUG nova.compute.manager [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG nova.compute.manager [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.402 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.574 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.576 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.577 2 INFO nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Terminating instance
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.578 2 DEBUG nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:43:01 compute-0 kernel: tap8e464ebe-1b (unregistering): left promiscuous mode
Oct 07 14:43:01 compute-0 NetworkManager[44949]: <info>  [1759848181.6470] device (tap8e464ebe-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:43:01 compute-0 ovn_controller[151684]: 2025-10-07T14:43:01Z|01417|binding|INFO|Releasing lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 from this chassis (sb_readonly=0)
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:01 compute-0 ovn_controller[151684]: 2025-10-07T14:43:01Z|01418|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 down in Southbound
Oct 07 14:43:01 compute-0 ovn_controller[151684]: 2025-10-07T14:43:01Z|01419|binding|INFO|Removing iface tap8e464ebe-1b ovn-installed in OVS
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:01 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 07 14:43:01 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 4.911s CPU time.
Oct 07 14:43:01 compute-0 systemd-machined[214580]: Machine qemu-161-instance-00000080 terminated.
Oct 07 14:43:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.729 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4954e98d-461f-46e8-9f41-c81930c02cff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.730 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 unbound from our chassis
Oct 07 14:43:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.732 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aea4318-48a4-451b-b36b-e364946c1859, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:43:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a4e504-8d0e-4d7d-be64-f2b660b0aeb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace which is not needed anymore
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.816 2 INFO nova.virt.libvirt.driver [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance destroyed successfully.
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.817 2 DEBUG nova.objects.instance [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : haproxy version is 2.8.14-c23fe91
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : path to executable is /usr/sbin/haproxy
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : Exiting Master process...
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : Exiting Master process...
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [ALERT]    (398431) : Current worker (398433) exited with code 143 (Terminated)
Oct 07 14:43:01 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : All workers exited. Exiting... (0)
Oct 07 14:43:01 compute-0 systemd[1]: libpod-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope: Deactivated successfully.
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.868 2 DEBUG nova.virt.libvirt.vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:57Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.869 2 DEBUG nova.network.os_vif_util [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.870 2 DEBUG nova.network.os_vif_util [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.870 2 DEBUG os_vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e464ebe-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:01 compute-0 podman[398474]: 2025-10-07 14:43:01.87440322 +0000 UTC m=+0.045223044 container died 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:01 compute-0 nova_compute[259550]: 2025-10-07 14:43:01.879 2 INFO os_vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')
Oct 07 14:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d-userdata-shm.mount: Deactivated successfully.
Oct 07 14:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-26fbd25cec77d0b294d26659ee0c2598b49ebe94e1d14db13af23b421b9aafa8-merged.mount: Deactivated successfully.
Oct 07 14:43:01 compute-0 podman[398474]: 2025-10-07 14:43:01.927374268 +0000 UTC m=+0.098194102 container cleanup 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:43:01 compute-0 systemd[1]: libpod-conmon-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope: Deactivated successfully.
Oct 07 14:43:01 compute-0 ceph-mon[74295]: pgmap v2417: 305 pgs: 305 active+clean; 166 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct 07 14:43:01 compute-0 podman[398523]: 2025-10-07 14:43:01.992243082 +0000 UTC m=+0.042534791 container remove 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2765b280-13c8-4146-8cdf-564e5ebb4cd9]: (4, ('Tue Oct  7 02:43:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d)\n5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d\nTue Oct  7 02:43:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d)\n5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3da848b-7ecf-4584-bb58-2eea4086b907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.004 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:02 compute-0 kernel: tap7aea4318-40: left promiscuous mode
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3da36a0e-9bd3-4587-8d0e-27881f41cdf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.056 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cae0ce83-f3d1-4e4f-b61a-f387b325f660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f92f4a-26bd-4f13-bca5-eedcec55ba12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.073 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c538e9a1-138e-458a-9111-6842044c7459]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876031, 'reachable_time': 29509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398540, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d7aea4318\x2d48a4\x2d451b\x2db36b\x2de364946c1859.mount: Deactivated successfully.
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.077 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:43:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.077 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[45d33ae1-263c-4d17-9557-9a53481a7221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.465 2 INFO nova.virt.libvirt.driver [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deleting instance files /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff_del
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.466 2 INFO nova.virt.libvirt.driver [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deletion of /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff_del complete
Oct 07 14:43:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 167 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.513 2 INFO nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.514 2 DEBUG oslo.service.loopingcall [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.514 2 DEBUG nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:43:02 compute-0 nova_compute[259550]: 2025-10-07 14:43:02.515 2 DEBUG nova.network.neutron [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.551 2 WARNING nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state deleting.
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.662 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.663 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.698 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:43:03 compute-0 nova_compute[259550]: 2025-10-07 14:43:03.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:43:04 compute-0 ceph-mon[74295]: pgmap v2418: 305 pgs: 305 active+clean; 167 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.210 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.210 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.479 2 DEBUG nova.network.neutron [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.495 2 INFO nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 1.98 seconds to deallocate network for instance.
Oct 07 14:43:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 143 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.539 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.539 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:04 compute-0 nova_compute[259550]: 2025-10-07 14:43:04.607 2 DEBUG oslo_concurrency.processutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857675004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.049 2 DEBUG oslo_concurrency.processutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.056 2 DEBUG nova.compute.provider_tree [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.074 2 DEBUG nova.scheduler.client.report [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.097 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/857675004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.126 2 INFO nova.scheduler.client.report [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 4954e98d-461f-46e8-9f41-c81930c02cff
Oct 07 14:43:05 compute-0 nova_compute[259550]: 2025-10-07 14:43:05.188 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:06 compute-0 nova_compute[259550]: 2025-10-07 14:43:06.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:06 compute-0 podman[398564]: 2025-10-07 14:43:06.069256471 +0000 UTC m=+0.052953509 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:43:06 compute-0 podman[398565]: 2025-10-07 14:43:06.08014381 +0000 UTC m=+0.061390162 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:43:06 compute-0 ceph-mon[74295]: pgmap v2419: 305 pgs: 305 active+clean; 143 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct 07 14:43:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct 07 14:43:06 compute-0 nova_compute[259550]: 2025-10-07 14:43:06.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.071 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.098 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.098 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.099 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.128 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639504587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.633 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.756 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.756 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.963 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.965 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.94275665283203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.965 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:07 compute-0 nova_compute[259550]: 2025-10-07 14:43:07.966 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.058 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 77918bef-8f72-4152-ac55-f4d4c98477ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.102 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:08 compute-0 ceph-mon[74295]: pgmap v2420: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct 07 14:43:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/639504587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Oct 07 14:43:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491770374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.574 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.579 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.597 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.619 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:43:08 compute-0 nova_compute[259550]: 2025-10-07 14:43:08.620 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2491770374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:10 compute-0 ceph-mon[74295]: pgmap v2421: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Oct 07 14:43:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 145 op/s
Oct 07 14:43:11 compute-0 nova_compute[259550]: 2025-10-07 14:43:11.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:12 compute-0 ceph-mon[74295]: pgmap v2422: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 145 op/s
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.506 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.506 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 844 KiB/s rd, 47 KiB/s wr, 71 op/s
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.560 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.667 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.667 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.676 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:43:12 compute-0 nova_compute[259550]: 2025-10-07 14:43:12.677 2 INFO nova.compute.claims [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.057 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.504 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840452907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.599 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.604 2 DEBUG nova.compute.provider_tree [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.648 2 DEBUG nova.scheduler.client.report [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.687 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.688 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.750 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.751 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.781 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.842 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.963 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.964 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.965 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating image(s)
Oct 07 14:43:13 compute-0 nova_compute[259550]: 2025-10-07 14:43:13.985 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.012 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.040 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.045 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.099 2 DEBUG nova.policy [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.145 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.147 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.148 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.148 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.175 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.180 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1f692a08-811a-41fb-a8a2-aa936481a256_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:14 compute-0 ceph-mon[74295]: pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 844 KiB/s rd, 47 KiB/s wr, 71 op/s
Oct 07 14:43:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/840452907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:14 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 07 14:43:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 27 op/s
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.669 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1f692a08-811a-41fb-a8a2-aa936481a256_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.727 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.773 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.773 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.822 2 DEBUG nova.objects.instance [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.851 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.861 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.861 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Ensure instance console log exists: /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.961 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.962 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.968 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:43:14 compute-0 nova_compute[259550]: 2025-10-07 14:43:14.969 2 INFO nova.compute.claims [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.140 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/593319723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.625 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.632 2 DEBUG nova.compute.provider_tree [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.660 2 DEBUG nova.scheduler.client.report [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.731 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.732 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.796 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.797 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.848 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:43:15 compute-0 nova_compute[259550]: 2025-10-07 14:43:15.898 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.132 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.134 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.135 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating image(s)
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.160 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.190 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.215 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.219 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:16 compute-0 ceph-mon[74295]: pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 27 op/s
Oct 07 14:43:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/593319723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.294 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.295 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.296 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.296 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.320 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.323 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 573 KiB/s wr, 24 op/s
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.604 2 DEBUG nova.policy [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.687 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.781 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.816 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848181.8144217, 4954e98d-461f-46e8-9f41-c81930c02cff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.817 2 INFO nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Stopped (Lifecycle Event)
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.884 2 DEBUG nova.compute.manager [None req-879938de-38e4-4ab2-8ae4-9d8ae4c5f2ab - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:16 compute-0 nova_compute[259550]: 2025-10-07 14:43:16.934 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully created port: b3a7ccba-7b5f-4e87-af92-723dd36cc703 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.078 2 DEBUG nova.objects.instance [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.150 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.151 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Ensure instance console log exists: /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:17 compute-0 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:18 compute-0 nova_compute[259550]: 2025-10-07 14:43:18.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:18 compute-0 ceph-mon[74295]: pgmap v2425: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 573 KiB/s wr, 24 op/s
Oct 07 14:43:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 572 KiB/s wr, 12 op/s
Oct 07 14:43:19 compute-0 nova_compute[259550]: 2025-10-07 14:43:19.119 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully created port: 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.169 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Successfully updated port: 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:43:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:20.220 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:20 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:20.221 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:43:20 compute-0 ceph-mon[74295]: pgmap v2426: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 572 KiB/s wr, 12 op/s
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.355 2 DEBUG nova.compute.manager [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.359 2 DEBUG nova.compute.manager [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:20 compute-0 nova_compute[259550]: 2025-10-07 14:43:20.359 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 192 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 07 14:43:21 compute-0 podman[399022]: 2025-10-07 14:43:21.060301438 +0000 UTC m=+0.047772112 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:43:21 compute-0 podman[399023]: 2025-10-07 14:43:21.103783573 +0000 UTC m=+0.084370944 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 07 14:43:21 compute-0 nova_compute[259550]: 2025-10-07 14:43:21.504 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:43:21 compute-0 nova_compute[259550]: 2025-10-07 14:43:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:22 compute-0 ceph-mon[74295]: pgmap v2427: 305 pgs: 305 active+clean; 192 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:43:22
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control']
Oct 07 14:43:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.756 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully updated port: b3a7ccba-7b5f-4e87-af92-723dd36cc703 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.845 2 DEBUG nova.compute.manager [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG nova.compute.manager [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:22 compute-0 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:43:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:43:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.122 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.216 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.216 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance network_info: |[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.217 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.217 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.221 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start _get_guest_xml network_info=[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.222 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.229 2 WARNING nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.235 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.236 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.240 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.240 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.246 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653239994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.685 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.705 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.710 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.827 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:23 compute-0 nova_compute[259550]: 2025-10-07 14:43:23.871 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249436716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.132 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.134 2 DEBUG nova.virt.libvirt.vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:15Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.134 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.135 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.137 2 DEBUG nova.objects.instance [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.211 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <uuid>998e2894-41dd-4eb6-9b5c-08a2e5da0acf</uuid>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <name>instance-00000082</name>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-1037921195</nova:name>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:43:23</nova:creationTime>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <nova:port uuid="8e464ebe-1be0-4a3a-b8df-c088ec663aa2">
Oct 07 14:43:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="serial">998e2894-41dd-4eb6-9b5c-08a2e5da0acf</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="uuid">998e2894-41dd-4eb6-9b5c-08a2e5da0acf</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk">
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config">
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b7:10:26"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <target dev="tap8e464ebe-1b"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/console.log" append="off"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:43:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:43:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:43:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:43:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:43:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.212 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Preparing to wait for external event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.212 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.213 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.213 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.214 2 DEBUG nova.virt.libvirt.vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:15Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.214 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.215 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.215 2 DEBUG os_vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e464ebe-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e464ebe-1b, col_values=(('external_ids', {'iface-id': '8e464ebe-1be0-4a3a-b8df-c088ec663aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:10:26', 'vm-uuid': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:24 compute-0 NetworkManager[44949]: <info>  [1759848204.2233] manager: (tap8e464ebe-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.232 2 INFO os_vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')
Oct 07 14:43:24 compute-0 ceph-mon[74295]: pgmap v2428: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:43:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3653239994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4249436716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.564 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.565 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.565 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:b7:10:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.566 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Using config drive
Oct 07 14:43:24 compute-0 nova_compute[259550]: 2025-10-07 14:43:24.590 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.062 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully updated port: 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.193 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating config drive at /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.197 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3v9vf1l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.297 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.298 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.298 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.349 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3v9vf1l" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.374 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.379 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.459 2 DEBUG nova.compute.manager [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.460 2 DEBUG nova.compute.manager [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-5fb4e372-50c4-49a3-a717-ddc2c99673c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.461 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.464 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.465 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:25 compute-0 ceph-mon[74295]: pgmap v2429: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.522 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.618 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.620 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deleting local config drive /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config because it was imported into RBD.
Oct 07 14:43:25 compute-0 kernel: tap8e464ebe-1b: entered promiscuous mode
Oct 07 14:43:25 compute-0 NetworkManager[44949]: <info>  [1759848205.6921] manager: (tap8e464ebe-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Oct 07 14:43:25 compute-0 ovn_controller[151684]: 2025-10-07T14:43:25Z|01420|binding|INFO|Claiming lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for this chassis.
Oct 07 14:43:25 compute-0 ovn_controller[151684]: 2025-10-07T14:43:25Z|01421|binding|INFO|8e464ebe-1be0-4a3a-b8df-c088ec663aa2: Claiming fa:16:3e:b7:10:26 10.100.0.12
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:25 compute-0 ovn_controller[151684]: 2025-10-07T14:43:25Z|01422|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 ovn-installed in OVS
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:25 compute-0 ovn_controller[151684]: 2025-10-07T14:43:25Z|01423|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 up in Southbound
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.716 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '7', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 bound to our chassis
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.719 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:43:25 compute-0 systemd-udevd[399199]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0aff0d62-cc95-4cb5-84d0-3752fcd8ff61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.732 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aea4318-41 in ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.734 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aea4318-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1c3f0b-cd9e-4b74-8c71-fb592c868a0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8ae5f9-a51b-4b64-8c40-47ac24b99876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 NetworkManager[44949]: <info>  [1759848205.7406] device (tap8e464ebe-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:43:25 compute-0 NetworkManager[44949]: <info>  [1759848205.7416] device (tap8e464ebe-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:43:25 compute-0 systemd-machined[214580]: New machine qemu-162-instance-00000082.
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.749 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[73919b5d-0e68-4f9c-a100-05a094e2a89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.766 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9328a19-ca09-46e4-a8bd-7af469c0d18b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a705c-9860-4e14-9566-7ba8b393f67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aa84e8-5569-4fa7-96f2-1c50288c8189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 NetworkManager[44949]: <info>  [1759848205.8048] manager: (tap7aea4318-40): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Oct 07 14:43:25 compute-0 systemd-udevd[399204]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.837 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba26be4-57c0-4e14-85fb-ea830c980e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.840 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f44338c4-a9b3-41fb-bf6b-afc46361a96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 NetworkManager[44949]: <info>  [1759848205.8655] device (tap7aea4318-40): carrier: link connected
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.872 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e76c14-534f-4dbf-8bc2-b19a0b76952c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 nova_compute[259550]: 2025-10-07 14:43:25.872 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.889 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63a42d49-8a6f-4452-a476-1080f948a797]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878943, 'reachable_time': 40799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399233, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.907 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e08437e8-af4e-4970-825a-653053b33c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:73fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878943, 'tstamp': 878943}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399234, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.924 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7aec9a-d448-485f-852d-825a0ba9149c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878943, 'reachable_time': 40799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399235, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d01440-a950-40c0-aac8-da8d930b7c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0d6912-a0e3-43eb-ba90-56c73eed9ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.024 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aea4318-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:26 compute-0 kernel: tap7aea4318-40: entered promiscuous mode
Oct 07 14:43:26 compute-0 NetworkManager[44949]: <info>  [1759848206.0281] manager: (tap7aea4318-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.036 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aea4318-40, col_values=(('external_ids', {'iface-id': '51e9d814-5370-44c9-809c-78ed73cc4b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:26 compute-0 ovn_controller[151684]: 2025-10-07T14:43:26Z|01424|binding|INFO|Releasing lport 51e9d814-5370-44c9-809c-78ed73cc4b32 from this chassis (sb_readonly=0)
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.052 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.052 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8a27d2-7390-499e-af0e-436bfea7a677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.054 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7aea4318-48a4-451b-b36b-e364946c1859
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:43:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.054 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'env', 'PROCESS_TAG=haproxy-7aea4318-48a4-451b-b36b-e364946c1859', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aea4318-48a4-451b-b36b-e364946c1859.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:43:26 compute-0 podman[399267]: 2025-10-07 14:43:26.463695312 +0000 UTC m=+0.089330046 container create 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:43:26 compute-0 podman[399267]: 2025-10-07 14:43:26.397978075 +0000 UTC m=+0.023612829 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:43:26 compute-0 systemd[1]: Started libpod-conmon-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope.
Oct 07 14:43:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Oct 07 14:43:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e3c38f58495ce9e1ded90bc2192e137de291ef67b1de2c7c83ee6a5d2171d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:26 compute-0 podman[399267]: 2025-10-07 14:43:26.562469967 +0000 UTC m=+0.188104711 container init 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:43:26 compute-0 podman[399267]: 2025-10-07 14:43:26.572495193 +0000 UTC m=+0.198129927 container start 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:43:26 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : New worker (399330) forked
Oct 07 14:43:26 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : Loading success.
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.988 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848206.9881957, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:26 compute-0 nova_compute[259550]: 2025-10-07 14:43:26.989 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Started (Lifecycle Event)
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.041 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.046 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848206.9884768, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.046 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Paused (Lifecycle Event)
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.118 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.122 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.164 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:27.223 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:27 compute-0 ceph-mon[74295]: pgmap v2430: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.607 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Processing event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.610 2 WARNING nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state building and task_state spawning.
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.610 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.613 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848207.6135912, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.614 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Resumed (Lifecycle Event)
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.615 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.618 2 INFO nova.virt.libvirt.driver [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance spawned successfully.
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.618 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.695 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.700 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.700 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.701 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.701 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.702 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.702 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:27 compute-0 nova_compute[259550]: 2025-10-07 14:43:27.706 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.025 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.112 2 INFO nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 11.98 seconds to spawn the instance on the hypervisor.
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.112 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.192 2 INFO nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 13.25 seconds to build instance.
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.212 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.223 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.249 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.250 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance network_info: |[{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.250 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.251 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.254 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start _get_guest_xml network_info=[{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.259 2 WARNING nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.265 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.265 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.271 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.278 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Oct 07 14:43:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798614902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.802 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.826 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:28 compute-0 nova_compute[259550]: 2025-10-07 14:43:28.829 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707179847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.296 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.298 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.299 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.300 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.301 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.301 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.302 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.303 2 DEBUG nova.objects.instance [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.330 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <uuid>1f692a08-811a-41fb-a8a2-aa936481a256</uuid>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <name>instance-00000081</name>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-589430566</nova:name>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:43:28</nova:creationTime>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:port uuid="b3a7ccba-7b5f-4e87-af92-723dd36cc703">
Oct 07 14:43:29 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <nova:port uuid="5fb4e372-50c4-49a3-a717-ddc2c99673c7">
Oct 07 14:43:29 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:bb5e" ipVersion="6"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <system>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="serial">1f692a08-811a-41fb-a8a2-aa936481a256</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="uuid">1f692a08-811a-41fb-a8a2-aa936481a256</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </system>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <os>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </os>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <features>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </features>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1f692a08-811a-41fb-a8a2-aa936481a256_disk">
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1f692a08-811a-41fb-a8a2-aa936481a256_disk.config">
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:29 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e6:0b:88"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <target dev="tapb3a7ccba-7b"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:5c:bb:5e"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <target dev="tap5fb4e372-50"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/console.log" append="off"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <video>
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </video>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:43:29 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:43:29 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:43:29 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:43:29 compute-0 nova_compute[259550]: </domain>
Oct 07 14:43:29 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.331 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Preparing to wait for external event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Preparing to wait for external event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.334 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.334 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.335 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.336 2 DEBUG os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3a7ccba-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3a7ccba-7b, col_values=(('external_ids', {'iface-id': 'b3a7ccba-7b5f-4e87-af92-723dd36cc703', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0b:88', 'vm-uuid': '1f692a08-811a-41fb-a8a2-aa936481a256'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 NetworkManager[44949]: <info>  [1759848209.3433] manager: (tapb3a7ccba-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.350 2 INFO os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b')
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.351 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.351 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.352 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.352 2 DEBUG os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb4e372-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb4e372-50, col_values=(('external_ids', {'iface-id': '5fb4e372-50c4-49a3-a717-ddc2c99673c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:bb:5e', 'vm-uuid': '1f692a08-811a-41fb-a8a2-aa936481a256'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 NetworkManager[44949]: <info>  [1759848209.3587] manager: (tap5fb4e372-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.366 2 INFO os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50')
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.452 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.452 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.453 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e6:0b:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.453 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:5c:bb:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.453 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Using config drive
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.483 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.513 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port 5fb4e372-50c4-49a3-a717-ddc2c99673c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.515 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:29 compute-0 ceph-mon[74295]: pgmap v2431: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Oct 07 14:43:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3798614902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1707179847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:29 compute-0 nova_compute[259550]: 2025-10-07 14:43:29.690 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.046 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating config drive at /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.052 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdk21nyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.221 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdk21nyc" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.258 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.263 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.465 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.467 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deleting local config drive /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config because it was imported into RBD.
Oct 07 14:43:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct 07 14:43:30 compute-0 kernel: tapb3a7ccba-7b: entered promiscuous mode
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5246] manager: (tapb3a7ccba-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01425|binding|INFO|Claiming lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 for this chassis.
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01426|binding|INFO|b3a7ccba-7b5f-4e87-af92-723dd36cc703: Claiming fa:16:3e:e6:0b:88 10.100.0.7
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5442] manager: (tap5fb4e372-50): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Oct 07 14:43:30 compute-0 kernel: tap5fb4e372-50: entered promiscuous mode
Oct 07 14:43:30 compute-0 systemd-udevd[399478]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:43:30 compute-0 systemd-udevd[399479]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01427|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 ovn-installed in OVS
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01428|if_status|INFO|Dropped 8 log messages in last 165 seconds (most recently, 165 seconds ago) due to excessive rate
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01429|if_status|INFO|Not updating pb chassis for 5fb4e372-50c4-49a3-a717-ddc2c99673c7 now as sb is readonly
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01430|binding|INFO|Claiming lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 for this chassis.
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01431|binding|INFO|5fb4e372-50c4-49a3-a717-ddc2c99673c7: Claiming fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01432|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 up in Southbound
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.576 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0b:88 10.100.0.7'], port_security=['fa:16:3e:e6:0b:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3a7ccba-7b5f-4e87-af92-723dd36cc703) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5773] device (tap5fb4e372-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.577 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3a7ccba-7b5f-4e87-af92-723dd36cc703 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 bound to our chassis
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5781] device (tapb3a7ccba-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5787] device (tap5fb4e372-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:43:30 compute-0 NetworkManager[44949]: <info>  [1759848210.5790] device (tapb3a7ccba-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.579 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01433|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 ovn-installed in OVS
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 systemd-machined[214580]: New machine qemu-163-instance-00000081.
Oct 07 14:43:30 compute-0 ovn_controller[151684]: 2025-10-07T14:43:30Z|01434|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 up in Southbound
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b95b19e-0f31-42ed-83f6-c58da9f705d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.603 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], port_security=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5c:bb5e/64', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb4e372-50c4-49a3-a717-ddc2c99673c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:30 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000081.
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.636 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd6c16b-195c-44e8-9cc7-a8b42fd945bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.639 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[59723376-4700-4f74-ae9d-e12b80779074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.666 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4805cf07-e0ce-42c4-9cb6-7770aee1dc47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb512b1-95a8-492b-bc9d-3e16c4fa9037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399496, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.714 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[86891c32-5720-4a0d-bfe3-99b8ef15184d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874481, 'tstamp': 874481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399498, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874484, 'tstamp': 874484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399498, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.722 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.723 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.724 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.744 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56c878ff-2136-440f-aae4-1dda040b8918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.793 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4f208c-3528-4d05-b60d-b50c664507b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fb622c-7cc2-4efc-81ea-e6f56c9e1444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.829 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c214d935-c7b1-439b-9c7c-abddb9b7025f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42d509ee-5500-4076-ad4e-88449ebf75fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399504, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.865 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fca998e-0bef-4c57-92b6-9d9ba875ba98]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape6e769bc-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874571, 'tstamp': 874571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399505, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.867 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 nova_compute[259550]: 2025-10-07 14:43:30.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.870 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.329 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.329 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.330 2 INFO nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Terminating instance
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.331 2 DEBUG nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:43:31 compute-0 kernel: tap8e464ebe-1b (unregistering): left promiscuous mode
Oct 07 14:43:31 compute-0 NetworkManager[44949]: <info>  [1759848211.3722] device (tap8e464ebe-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 ovn_controller[151684]: 2025-10-07T14:43:31Z|01435|binding|INFO|Releasing lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 from this chassis (sb_readonly=0)
Oct 07 14:43:31 compute-0 ovn_controller[151684]: 2025-10-07T14:43:31Z|01436|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 down in Southbound
Oct 07 14:43:31 compute-0 ovn_controller[151684]: 2025-10-07T14:43:31Z|01437|binding|INFO|Removing iface tap8e464ebe-1b ovn-installed in OVS
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.395 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '9', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.396 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 unbound from our chassis
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.398 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aea4318-48a4-451b-b36b-e364946c1859, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.398 2 DEBUG nova.compute.manager [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.399 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.399 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG nova.compute.manager [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Processing event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9fe960-e07a-436d-a7dc-1254273e8a16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.403 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace which is not needed anymore
Oct 07 14:43:31 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 07 14:43:31 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 4.899s CPU time.
Oct 07 14:43:31 compute-0 systemd-machined[214580]: Machine qemu-162-instance-00000082 terminated.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.536 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848211.5357327, 1f692a08-811a-41fb-a8a2-aa936481a256 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.536 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Started (Lifecycle Event)
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.553 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : haproxy version is 2.8.14-c23fe91
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : path to executable is /usr/sbin/haproxy
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : Exiting Master process...
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : Exiting Master process...
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [ALERT]    (399328) : Current worker (399330) exited with code 143 (Terminated)
Oct 07 14:43:31 compute-0 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : All workers exited. Exiting... (0)
Oct 07 14:43:31 compute-0 systemd[1]: libpod-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope: Deactivated successfully.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.561 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848211.5359128, 1f692a08-811a-41fb-a8a2-aa936481a256 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.561 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Paused (Lifecycle Event)
Oct 07 14:43:31 compute-0 podman[399569]: 2025-10-07 14:43:31.56629604 +0000 UTC m=+0.055912677 container died 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.570 2 INFO nova.virt.libvirt.driver [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance destroyed successfully.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.570 2 DEBUG nova.objects.instance [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.587 2 DEBUG nova.virt.libvirt.vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:28Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.588 2 DEBUG nova.network.os_vif_util [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.588 2 DEBUG nova.network.os_vif_util [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.589 2 DEBUG os_vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e464ebe-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb-userdata-shm.mount: Deactivated successfully.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc5e3c38f58495ce9e1ded90bc2192e137de291ef67b1de2c7c83ee6a5d2171d-merged.mount: Deactivated successfully.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.608 2 INFO os_vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')
Oct 07 14:43:31 compute-0 ceph-mon[74295]: pgmap v2432: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct 07 14:43:31 compute-0 podman[399569]: 2025-10-07 14:43:31.622812372 +0000 UTC m=+0.112429029 container cleanup 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:43:31 compute-0 systemd[1]: libpod-conmon-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope: Deactivated successfully.
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.660 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:31 compute-0 podman[399621]: 2025-10-07 14:43:31.689203058 +0000 UTC m=+0.043943500 container remove 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10a5a678-f295-4a07-8579-7a14bfb5c51e]: (4, ('Tue Oct  7 02:43:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb)\n8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb\nTue Oct  7 02:43:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb)\n8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b41ad7d-5641-423d-b6e9-002f0a94041b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.698 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 kernel: tap7aea4318-40: left promiscuous mode
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.704 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa88831-1303-4ae3-bfc9-ff1204569016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 nova_compute[259550]: 2025-10-07 14:43:31.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6333b3-d497-40b5-a7ad-6323ddd06a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0805193-3a4f-4b34-bfc5-e1fd228a0776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e33cff6-4118-4850-9207-2b058e0461c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878935, 'reachable_time': 19767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399637, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.752 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:43:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.752 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e204ee96-8e04-46e5-bb0b-85c06fee4fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d7aea4318\x2d48a4\x2d451b\x2db36b\x2de364946c1859.mount: Deactivated successfully.
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.039 2 INFO nova.virt.libvirt.driver [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deleting instance files /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_del
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.040 2 INFO nova.virt.libvirt.driver [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deletion of /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_del complete
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.096 2 INFO nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.096 2 DEBUG oslo.service.loopingcall [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.097 2 DEBUG nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.097 2 DEBUG nova.network.neutron [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.282 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.282 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.299 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.374 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.374 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.384 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.384 2 INFO nova.compute.claims [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 953 KiB/s wr, 113 op/s
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.520 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014538743529009571 of space, bias 1.0, pg target 0.43616230587028715 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:43:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:43:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:43:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:43:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523965864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.955 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.962 2 DEBUG nova.compute.provider_tree [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:32 compute-0 nova_compute[259550]: 2025-10-07 14:43:32.979 2 DEBUG nova.scheduler.client.report [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.003 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.004 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.052 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.052 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.078 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.101 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:43:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.202 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.204 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.205 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating image(s)
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.228 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.256 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.284 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.288 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.326 2 DEBUG nova.policy [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '52bb2c10051444f181ee0572525fbe9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ac40ef14492f40768b3852a40da26621', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.343 2 DEBUG nova.network.neutron [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.367 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.368 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.368 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.369 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.393 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.398 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.435 2 INFO nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 1.34 seconds to deallocate network for instance.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.489 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.489 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No event matching network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 in dict_keys([('network-vif-plugged', '5fb4e372-50c4-49a3-a717-ddc2c99673c7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.491 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with vm_state building and task_state spawning.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Processing event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.493 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with vm_state building and task_state spawning.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.496 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state deleting.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.497 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.503 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848213.502678, 1f692a08-811a-41fb-a8a2-aa936481a256 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.504 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Resumed (Lifecycle Event)
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.509 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.520 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.521 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.523 2 INFO nova.virt.libvirt.driver [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance spawned successfully.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.524 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.536 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.539 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.571 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.572 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.572 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.573 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.573 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.574 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.614 2 DEBUG oslo_concurrency.processutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.681 2 INFO nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 19.72 seconds to spawn the instance on the hypervisor.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.682 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:33 compute-0 ceph-mon[74295]: pgmap v2433: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 953 KiB/s wr, 113 op/s
Oct 07 14:43:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:43:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:43:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/523965864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.737 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.817 2 INFO nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 21.17 seconds to build instance.
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.826 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] resizing rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.868 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.942 2 DEBUG nova.objects.instance [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'migration_context' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.954 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.954 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ensure instance console log exists: /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.955 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.955 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:33 compute-0 nova_compute[259550]: 2025-10-07 14:43:33.956 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:43:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135145531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.091 2 DEBUG oslo_concurrency.processutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.096 2 DEBUG nova.compute.provider_tree [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.110 2 DEBUG nova.scheduler.client.report [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.131 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.154 2 INFO nova.scheduler.client.report [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 998e2894-41dd-4eb6-9b5c-08a2e5da0acf
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.229 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:34 compute-0 nova_compute[259550]: 2025-10-07 14:43:34.337 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Successfully created port: 3ac63514-77e7-4d94-a67c-94806ca3b58b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:43:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 211 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 164 op/s
Oct 07 14:43:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1135145531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:43:35 compute-0 ceph-mon[74295]: pgmap v2434: 305 pgs: 305 active+clean; 211 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 164 op/s
Oct 07 14:43:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.550 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Successfully updated port: 3ac63514-77e7-4d94-a67c-94806ca3b58b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.683 2 DEBUG nova.compute.manager [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.683 2 DEBUG nova.compute.manager [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.684 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:36 compute-0 nova_compute[259550]: 2025-10-07 14:43:36.712 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:43:37 compute-0 podman[399850]: 2025-10-07 14:43:37.07297672 +0000 UTC m=+0.054544971 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:43:37 compute-0 podman[399851]: 2025-10-07 14:43:37.078729253 +0000 UTC m=+0.060413087 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:43:37 compute-0 ceph-mon[74295]: pgmap v2435: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct 07 14:43:38 compute-0 nova_compute[259550]: 2025-10-07 14:43:38.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 07 14:43:39 compute-0 ceph-mon[74295]: pgmap v2436: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 07 14:43:39 compute-0 nova_compute[259550]: 2025-10-07 14:43:39.897 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.124 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance network_info: |[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.128 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start _get_guest_xml network_info=[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.132 2 WARNING nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.137 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.137 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.140 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.141 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.143 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.143 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.144 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.145 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.145 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.148 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.152 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Oct 07 14:43:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647233733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.594 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.615 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:40 compute-0 nova_compute[259550]: 2025-10-07 14:43:40.620 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3647233733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:43:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271776861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.055 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.057 2 DEBUG nova.virt.libvirt.vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.057 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.058 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.059 2 DEBUG nova.objects.instance [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.214 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <uuid>30241223-64c5-4a88-8ba2-ee340fe6cbd3</uuid>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <name>instance-00000083</name>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:name>tempest-TestShelveInstance-server-308957706</nova:name>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:43:40</nova:creationTime>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:user uuid="52bb2c10051444f181ee0572525fbe9d">tempest-TestShelveInstance-703128978-project-member</nova:user>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:project uuid="ac40ef14492f40768b3852a40da26621">tempest-TestShelveInstance-703128978</nova:project>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <nova:port uuid="3ac63514-77e7-4d94-a67c-94806ca3b58b">
Oct 07 14:43:41 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <system>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="serial">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="uuid">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </system>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <os>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </os>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <features>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </features>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk">
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config">
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </source>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:43:41 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:32:1b:50"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <target dev="tap3ac63514-77"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log" append="off"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <video>
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </video>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:43:41 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:43:41 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:43:41 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:43:41 compute-0 nova_compute[259550]: </domain>
Oct 07 14:43:41 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Preparing to wait for external event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.217 2 DEBUG nova.virt.libvirt.vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.217 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.218 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.218 2 DEBUG os_vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ac63514-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.224 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ac63514-77, col_values=(('external_ids', {'iface-id': '3ac63514-77e7-4d94-a67c-94806ca3b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:1b:50', 'vm-uuid': '30241223-64c5-4a88-8ba2-ee340fe6cbd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:43:41 compute-0 NetworkManager[44949]: <info>  [1759848221.2281] manager: (tap3ac63514-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.233 2 INFO os_vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.483 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.483 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.484 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No VIF found with MAC fa:16:3e:32:1b:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.484 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Using config drive
Oct 07 14:43:41 compute-0 nova_compute[259550]: 2025-10-07 14:43:41.505 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:41 compute-0 ceph-mon[74295]: pgmap v2437: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Oct 07 14:43:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3271776861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.496 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating config drive at /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.501 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph08stra8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.643 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph08stra8" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.674 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.678 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.714 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.715 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.732 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.858 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.859 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting local config drive /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config because it was imported into RBD.
Oct 07 14:43:42 compute-0 kernel: tap3ac63514-77: entered promiscuous mode
Oct 07 14:43:42 compute-0 NetworkManager[44949]: <info>  [1759848222.9116] manager: (tap3ac63514-77): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Oct 07 14:43:42 compute-0 ovn_controller[151684]: 2025-10-07T14:43:42Z|01438|binding|INFO|Claiming lport 3ac63514-77e7-4d94-a67c-94806ca3b58b for this chassis.
Oct 07 14:43:42 compute-0 ovn_controller[151684]: 2025-10-07T14:43:42Z|01439|binding|INFO|3ac63514-77e7-4d94-a67c-94806ca3b58b: Claiming fa:16:3e:32:1b:50 10.100.0.11
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.920 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.921 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 bound to our chassis
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.922 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e68a6ef-7a67-441b-b861-5b0d7864b3ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.938 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c054d6f-61 in ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.941 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c054d6f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c839e750-94bc-4663-9122-0918ea0a6949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6eec26-3071-46b5-9231-ecc3b5f4adf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:42 compute-0 ovn_controller[151684]: 2025-10-07T14:43:42Z|01440|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b ovn-installed in OVS
Oct 07 14:43:42 compute-0 ovn_controller[151684]: 2025-10-07T14:43:42Z|01441|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b up in Southbound
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.953 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a4be15e7-51da-4180-a863-743ad1d8a39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:42 compute-0 systemd-udevd[400028]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:43:42 compute-0 systemd-machined[214580]: New machine qemu-164-instance-00000083.
Oct 07 14:43:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.967 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91ac9f56-8053-435b-9bd9-6e7433371011]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:42 compute-0 NetworkManager[44949]: <info>  [1759848222.9732] device (tap3ac63514-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:43:42 compute-0 NetworkManager[44949]: <info>  [1759848222.9746] device (tap3ac63514-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:43:42 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG nova.compute.manager [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG nova.compute.manager [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.985 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:42 compute-0 nova_compute[259550]: 2025-10-07 14:43:42.985 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.006 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0742acfe-1e55-488a-b27f-60b63d8cb65b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.011 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[962ec913-5cb5-44d2-8067-0b6f83f1b59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 NetworkManager[44949]: <info>  [1759848223.0129] manager: (tap7c054d6f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/579)
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.046 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b826749c-122b-4c5d-b235-9178e44e80d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.049 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6380e460-f3be-4e10-b4d9-4995d3d50adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 NetworkManager[44949]: <info>  [1759848223.0718] device (tap7c054d6f-60): carrier: link connected
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.077 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15359c49-2705-4f5c-9e64-a0e4199428d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5aca605-a496-4857-85b7-5ab50bf9cae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880663, 'reachable_time': 28101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400059, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1243b3-c883-4e04-ad7f-b25bf5e63790]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:e109'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880663, 'tstamp': 880663}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400060, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.126 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1edbfd2f-5c2d-4722-b4b0-e68d84e87d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880663, 'reachable_time': 28101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400061, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.163 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b147962f-7a4c-49ce-8f0b-99455d7c8b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.228 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2c506245-5dbf-4e72-9c01-3ac2a45aae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.230 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.230 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.231 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c054d6f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:43 compute-0 NetworkManager[44949]: <info>  [1759848223.2336] manager: (tap7c054d6f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct 07 14:43:43 compute-0 kernel: tap7c054d6f-60: entered promiscuous mode
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.235 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c054d6f-60, col_values=(('external_ids', {'iface-id': 'bb603baf-bbde-4821-a893-8713cfab0527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:43:43 compute-0 ovn_controller[151684]: 2025-10-07T14:43:43Z|01442|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.253 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32368f3b-514f-4ccf-bc75-ea46a04ef455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.256 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:43:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.258 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'env', 'PROCESS_TAG=haproxy-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c054d6f-68ec-4f0b-9362-221001cc6b67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:43:43 compute-0 podman[400133]: 2025-10-07 14:43:43.645049218 +0000 UTC m=+0.066413687 container create d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:43:43 compute-0 systemd[1]: Started libpod-conmon-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope.
Oct 07 14:43:43 compute-0 podman[400133]: 2025-10-07 14:43:43.609362619 +0000 UTC m=+0.030727118 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6983957f31550d23f88388bbfaa03a6c047e05d35d4dc2cf9e78281c838558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:43 compute-0 ceph-mon[74295]: pgmap v2438: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 07 14:43:43 compute-0 podman[400133]: 2025-10-07 14:43:43.745445577 +0000 UTC m=+0.166810066 container init d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:43:43 compute-0 podman[400133]: 2025-10-07 14:43:43.751909798 +0000 UTC m=+0.173274267 container start d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:43:43 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : New worker (400160) forked
Oct 07 14:43:43 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : Loading success.
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.834 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848223.832886, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.835 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Started (Lifecycle Event)
Oct 07 14:43:43 compute-0 sudo[400153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:43 compute-0 sudo[400153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:43 compute-0 sudo[400153]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.868 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.877 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848223.8333333, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.878 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Paused (Lifecycle Event)
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:43 compute-0 sudo[400188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:43:43 compute-0 sudo[400188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:43 compute-0 sudo[400188]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:43 compute-0 nova_compute[259550]: 2025-10-07 14:43:43.926 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:43 compute-0 sudo[400213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:43 compute-0 sudo[400213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:43 compute-0 sudo[400213]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:44 compute-0 sudo[400238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:43:44 compute-0 sudo[400238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:44 compute-0 ovn_controller[151684]: 2025-10-07T14:43:44Z|01443|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct 07 14:43:44 compute-0 ovn_controller[151684]: 2025-10-07T14:43:44Z|01444|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct 07 14:43:44 compute-0 ovn_controller[151684]: 2025-10-07T14:43:44Z|01445|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct 07 14:43:44 compute-0 nova_compute[259550]: 2025-10-07 14:43:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 214 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 07 14:43:44 compute-0 sudo[400238]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:44 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0c003f2b-afef-4910-973c-4a6210c7053e does not exist
Oct 07 14:43:44 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9b3d8ee3-158c-48c2-809b-ecf993fba6ee does not exist
Oct 07 14:43:44 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ecce9d25-5ee6-4081-8ffa-928d7117b828 does not exist
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:43:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:43:44 compute-0 sudo[400294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:44 compute-0 sudo[400294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:44 compute-0 sudo[400294]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:43:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:43:44 compute-0 sudo[400319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:43:44 compute-0 sudo[400319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:44 compute-0 sudo[400319]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:44 compute-0 sudo[400344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:44 compute-0 sudo[400344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:44 compute-0 sudo[400344]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:44 compute-0 sudo[400369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:43:44 compute-0 sudo[400369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.015 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.015 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.047 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.076 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Processing event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.079 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.079 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.079 2 WARNING nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state building and task_state spawning.
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.080 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.085 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848225.0850458, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.085 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Resumed (Lifecycle Event)
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.089 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.097 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance spawned successfully.
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.098 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.105 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.109 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.120 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.121 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.123 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.131 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.189 2 INFO nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 11.99 seconds to spawn the instance on the hypervisor.
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.189 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.259 2 INFO nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 12.91 seconds to build instance.
Oct 07 14:43:45 compute-0 nova_compute[259550]: 2025-10-07 14:43:45.277 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.327989671 +0000 UTC m=+0.047928404 container create f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:43:45 compute-0 systemd[1]: Started libpod-conmon-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope.
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.308265537 +0000 UTC m=+0.028204290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:45 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.44529547 +0000 UTC m=+0.165234223 container init f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.453687613 +0000 UTC m=+0.173626336 container start f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.457965606 +0000 UTC m=+0.177904339 container attach f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:43:45 compute-0 romantic_euclid[400447]: 167 167
Oct 07 14:43:45 compute-0 systemd[1]: libpod-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope: Deactivated successfully.
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.462327682 +0000 UTC m=+0.182266415 container died f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecf1083d65766b1319589dcab4cb62ad6a105d78fbe5fa4ec90070a361ccd109-merged.mount: Deactivated successfully.
Oct 07 14:43:45 compute-0 podman[400432]: 2025-10-07 14:43:45.514745375 +0000 UTC m=+0.234684108 container remove f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 07 14:43:45 compute-0 systemd[1]: libpod-conmon-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope: Deactivated successfully.
Oct 07 14:43:45 compute-0 podman[400471]: 2025-10-07 14:43:45.747735509 +0000 UTC m=+0.070240129 container create 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:43:45 compute-0 ceph-mon[74295]: pgmap v2439: 305 pgs: 305 active+clean; 214 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 07 14:43:45 compute-0 systemd[1]: Started libpod-conmon-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope.
Oct 07 14:43:45 compute-0 podman[400471]: 2025-10-07 14:43:45.715054759 +0000 UTC m=+0.037559399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:45 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:45 compute-0 podman[400471]: 2025-10-07 14:43:45.86370151 +0000 UTC m=+0.186206140 container init 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:43:45 compute-0 podman[400471]: 2025-10-07 14:43:45.872875574 +0000 UTC m=+0.195380184 container start 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:43:45 compute-0 podman[400471]: 2025-10-07 14:43:45.876962513 +0000 UTC m=+0.199467153 container attach 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:43:46 compute-0 ovn_controller[151684]: 2025-10-07T14:43:46Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0b:88 10.100.0.7
Oct 07 14:43:46 compute-0 ovn_controller[151684]: 2025-10-07T14:43:46Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0b:88 10.100.0.7
Oct 07 14:43:46 compute-0 nova_compute[259550]: 2025-10-07 14:43:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 07 14:43:46 compute-0 nova_compute[259550]: 2025-10-07 14:43:46.661 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848211.5633817, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:43:46 compute-0 nova_compute[259550]: 2025-10-07 14:43:46.662 2 INFO nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Stopped (Lifecycle Event)
Oct 07 14:43:46 compute-0 nova_compute[259550]: 2025-10-07 14:43:46.684 2 DEBUG nova.compute.manager [None req-27780ea9-04bd-4c01-9797-bec878a8ee5e - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:43:47 compute-0 funny_lewin[400487]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:43:47 compute-0 funny_lewin[400487]: --> relative data size: 1.0
Oct 07 14:43:47 compute-0 funny_lewin[400487]: --> All data devices are unavailable
Oct 07 14:43:47 compute-0 systemd[1]: libpod-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Deactivated successfully.
Oct 07 14:43:47 compute-0 systemd[1]: libpod-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Consumed 1.070s CPU time.
Oct 07 14:43:47 compute-0 podman[400471]: 2025-10-07 14:43:47.043955032 +0000 UTC m=+1.366459642 container died 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617-merged.mount: Deactivated successfully.
Oct 07 14:43:47 compute-0 podman[400471]: 2025-10-07 14:43:47.098521682 +0000 UTC m=+1.421026292 container remove 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:43:47 compute-0 systemd[1]: libpod-conmon-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Deactivated successfully.
Oct 07 14:43:47 compute-0 sudo[400369]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:47 compute-0 sudo[400529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:47 compute-0 sudo[400529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:47 compute-0 sudo[400529]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:47 compute-0 sudo[400554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:43:47 compute-0 sudo[400554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:47 compute-0 sudo[400554]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:47 compute-0 sudo[400579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:47 compute-0 sudo[400579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:47 compute-0 sudo[400579]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:47 compute-0 sudo[400604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:43:47 compute-0 sudo[400604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.70451498 +0000 UTC m=+0.039817769 container create aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:43:47 compute-0 systemd[1]: Started libpod-conmon-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope.
Oct 07 14:43:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.686844941 +0000 UTC m=+0.022147750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:47 compute-0 ceph-mon[74295]: pgmap v2440: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.897094699 +0000 UTC m=+0.232397498 container init aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.904592858 +0000 UTC m=+0.239895647 container start aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.90841623 +0000 UTC m=+0.243719029 container attach aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:43:47 compute-0 stoic_booth[400687]: 167 167
Oct 07 14:43:47 compute-0 systemd[1]: libpod-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope: Deactivated successfully.
Oct 07 14:43:47 compute-0 conmon[400687]: conmon aafd776df87ef4e29ae9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope/container/memory.events
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.91143631 +0000 UTC m=+0.246739099 container died aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-df57f6cb54f4c95ec9f82aef2177fdd617b059a8d229b66890a9357625cf6232-merged.mount: Deactivated successfully.
Oct 07 14:43:47 compute-0 podman[400671]: 2025-10-07 14:43:47.954057183 +0000 UTC m=+0.289359972 container remove aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:43:47 compute-0 systemd[1]: libpod-conmon-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope: Deactivated successfully.
Oct 07 14:43:48 compute-0 nova_compute[259550]: 2025-10-07 14:43:48.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:48 compute-0 podman[400711]: 2025-10-07 14:43:48.189110801 +0000 UTC m=+0.042697326 container create 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:43:48 compute-0 systemd[1]: Started libpod-conmon-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope.
Oct 07 14:43:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:48 compute-0 podman[400711]: 2025-10-07 14:43:48.168656317 +0000 UTC m=+0.022242862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:48 compute-0 podman[400711]: 2025-10-07 14:43:48.270584977 +0000 UTC m=+0.124171522 container init 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:43:48 compute-0 podman[400711]: 2025-10-07 14:43:48.276604357 +0000 UTC m=+0.130190872 container start 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:43:48 compute-0 podman[400711]: 2025-10-07 14:43:48.280607433 +0000 UTC m=+0.134193978 container attach 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:43:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Oct 07 14:43:49 compute-0 keen_dewdney[400728]: {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     "0": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "devices": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "/dev/loop3"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             ],
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_name": "ceph_lv0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_size": "21470642176",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "name": "ceph_lv0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "tags": {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_name": "ceph",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.crush_device_class": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.encrypted": "0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_id": "0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.vdo": "0"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             },
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "vg_name": "ceph_vg0"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         }
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     ],
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     "1": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "devices": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "/dev/loop4"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             ],
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_name": "ceph_lv1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_size": "21470642176",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "name": "ceph_lv1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "tags": {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_name": "ceph",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.crush_device_class": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.encrypted": "0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_id": "1",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.vdo": "0"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             },
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "vg_name": "ceph_vg1"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         }
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     ],
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     "2": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "devices": [
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "/dev/loop5"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             ],
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_name": "ceph_lv2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_size": "21470642176",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "name": "ceph_lv2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "tags": {
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.cluster_name": "ceph",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.crush_device_class": "",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.encrypted": "0",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osd_id": "2",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:                 "ceph.vdo": "0"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             },
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "type": "block",
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:             "vg_name": "ceph_vg2"
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:         }
Oct 07 14:43:49 compute-0 keen_dewdney[400728]:     ]
Oct 07 14:43:49 compute-0 keen_dewdney[400728]: }
Oct 07 14:43:49 compute-0 systemd[1]: libpod-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope: Deactivated successfully.
Oct 07 14:43:49 compute-0 nova_compute[259550]: 2025-10-07 14:43:49.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:49 compute-0 podman[400737]: 2025-10-07 14:43:49.134009076 +0000 UTC m=+0.036407518 container died 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713-merged.mount: Deactivated successfully.
Oct 07 14:43:49 compute-0 podman[400737]: 2025-10-07 14:43:49.251556841 +0000 UTC m=+0.153955263 container remove 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:43:49 compute-0 systemd[1]: libpod-conmon-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope: Deactivated successfully.
Oct 07 14:43:49 compute-0 sudo[400604]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:49 compute-0 sudo[400752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:49 compute-0 sudo[400752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:49 compute-0 sudo[400752]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:49 compute-0 sudo[400777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:43:49 compute-0 sudo[400777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:49 compute-0 sudo[400777]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:49 compute-0 sudo[400802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:49 compute-0 sudo[400802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:49 compute-0 sudo[400802]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:49 compute-0 sudo[400827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:43:49 compute-0 sudo[400827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:49 compute-0 podman[400892]: 2025-10-07 14:43:49.904393313 +0000 UTC m=+0.063241872 container create 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:43:49 compute-0 ceph-mon[74295]: pgmap v2441: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Oct 07 14:43:49 compute-0 systemd[1]: Started libpod-conmon-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope.
Oct 07 14:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:49 compute-0 podman[400892]: 2025-10-07 14:43:49.883720175 +0000 UTC m=+0.042568754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:49 compute-0 podman[400892]: 2025-10-07 14:43:49.992331871 +0000 UTC m=+0.151180450 container init 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 14:43:50 compute-0 podman[400892]: 2025-10-07 14:43:50.001383052 +0000 UTC m=+0.160231611 container start 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:43:50 compute-0 confident_maxwell[400909]: 167 167
Oct 07 14:43:50 compute-0 podman[400892]: 2025-10-07 14:43:50.00582362 +0000 UTC m=+0.164672209 container attach 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:43:50 compute-0 systemd[1]: libpod-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope: Deactivated successfully.
Oct 07 14:43:50 compute-0 podman[400892]: 2025-10-07 14:43:50.007771411 +0000 UTC m=+0.166619980 container died 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee5d97b3679c563faff53f35ac8ee773e4ee62e5f7e47125ee8f30f25d5e979-merged.mount: Deactivated successfully.
Oct 07 14:43:50 compute-0 podman[400892]: 2025-10-07 14:43:50.047053085 +0000 UTC m=+0.205901644 container remove 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:43:50 compute-0 systemd[1]: libpod-conmon-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope: Deactivated successfully.
Oct 07 14:43:50 compute-0 podman[400933]: 2025-10-07 14:43:50.246432956 +0000 UTC m=+0.046196249 container create ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:43:50 compute-0 systemd[1]: Started libpod-conmon-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope.
Oct 07 14:43:50 compute-0 podman[400933]: 2025-10-07 14:43:50.226871866 +0000 UTC m=+0.026635169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:43:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:43:50 compute-0 podman[400933]: 2025-10-07 14:43:50.354634561 +0000 UTC m=+0.154397874 container init ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:43:50 compute-0 podman[400933]: 2025-10-07 14:43:50.361473993 +0000 UTC m=+0.161237296 container start ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:43:50 compute-0 podman[400933]: 2025-10-07 14:43:50.366967759 +0000 UTC m=+0.166731072 container attach ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:43:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 241 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 07 14:43:51 compute-0 nova_compute[259550]: 2025-10-07 14:43:51.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:51 compute-0 jovial_williams[400950]: {
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_id": 2,
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "type": "bluestore"
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     },
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_id": 1,
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "type": "bluestore"
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     },
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_id": 0,
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:43:51 compute-0 jovial_williams[400950]:         "type": "bluestore"
Oct 07 14:43:51 compute-0 jovial_williams[400950]:     }
Oct 07 14:43:51 compute-0 jovial_williams[400950]: }
Oct 07 14:43:51 compute-0 podman[400933]: 2025-10-07 14:43:51.390865465 +0000 UTC m=+1.190628758 container died ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:43:51 compute-0 systemd[1]: libpod-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Deactivated successfully.
Oct 07 14:43:51 compute-0 systemd[1]: libpod-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Consumed 1.036s CPU time.
Oct 07 14:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b-merged.mount: Deactivated successfully.
Oct 07 14:43:51 compute-0 podman[400933]: 2025-10-07 14:43:51.457365233 +0000 UTC m=+1.257128526 container remove ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:43:51 compute-0 systemd[1]: libpod-conmon-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Deactivated successfully.
Oct 07 14:43:51 compute-0 sudo[400827]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:43:51 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:51 compute-0 podman[400984]: 2025-10-07 14:43:51.52577356 +0000 UTC m=+0.096959968 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 07 14:43:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:43:51 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:51 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 91abf953-6e1c-4dd5-a0cc-f90a1c672c20 does not exist
Oct 07 14:43:51 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7eae22cf-15f7-4aa3-aad6-7c7d3fce76da does not exist
Oct 07 14:43:51 compute-0 podman[400992]: 2025-10-07 14:43:51.570423817 +0000 UTC m=+0.143577547 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct 07 14:43:51 compute-0 sudo[401034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:43:51 compute-0 sudo[401034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:51 compute-0 sudo[401034]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:51 compute-0 sudo[401064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:43:51 compute-0 sudo[401064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:43:51 compute-0 sudo[401064]: pam_unix(sudo:session): session closed for user root
Oct 07 14:43:51 compute-0 ceph-mon[74295]: pgmap v2442: 305 pgs: 305 active+clean; 241 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 07 14:43:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:51 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:43:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:43:52 compute-0 nova_compute[259550]: 2025-10-07 14:43:52.837 2 DEBUG nova.compute.manager [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:43:52 compute-0 nova_compute[259550]: 2025-10-07 14:43:52.838 2 DEBUG nova.compute.manager [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:43:52 compute-0 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:43:52 compute-0 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:43:52 compute-0 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:43:53 compute-0 nova_compute[259550]: 2025-10-07 14:43:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:53 compute-0 ceph-mon[74295]: pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 07 14:43:53 compute-0 nova_compute[259550]: 2025-10-07 14:43:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:43:54 compute-0 nova_compute[259550]: 2025-10-07 14:43:54.614 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:43:54 compute-0 nova_compute[259550]: 2025-10-07 14:43:54.615 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:43:54 compute-0 nova_compute[259550]: 2025-10-07 14:43:54.695 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:43:55 compute-0 nova_compute[259550]: 2025-10-07 14:43:55.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:55 compute-0 ceph-mon[74295]: pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct 07 14:43:55 compute-0 nova_compute[259550]: 2025-10-07 14:43:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:55 compute-0 nova_compute[259550]: 2025-10-07 14:43:55.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:43:56 compute-0 nova_compute[259550]: 2025-10-07 14:43:56.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 07 14:43:56 compute-0 nova_compute[259550]: 2025-10-07 14:43:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:57 compute-0 ovn_controller[151684]: 2025-10-07T14:43:57Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:1b:50 10.100.0.11
Oct 07 14:43:57 compute-0 ovn_controller[151684]: 2025-10-07T14:43:57Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:1b:50 10.100.0.11
Oct 07 14:43:57 compute-0 ceph-mon[74295]: pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 07 14:43:57 compute-0 nova_compute[259550]: 2025-10-07 14:43:57.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:58 compute-0 nova_compute[259550]: 2025-10-07 14:43:58.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:43:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:43:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 114 op/s
Oct 07 14:43:58 compute-0 nova_compute[259550]: 2025-10-07 14:43:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:58 compute-0 nova_compute[259550]: 2025-10-07 14:43:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:43:59 compute-0 ceph-mon[74295]: pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 114 op/s
Oct 07 14:44:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 146 op/s
Oct 07 14:44:01 compute-0 nova_compute[259550]: 2025-10-07 14:44:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:01 compute-0 ceph-mon[74295]: pgmap v2447: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 146 op/s
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.326 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.326 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.327 2 INFO nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Terminating instance
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.328 2 DEBUG nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.364 2 DEBUG nova.compute.manager [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG nova.compute.manager [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.366 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:02 compute-0 kernel: tapb3a7ccba-7b (unregistering): left promiscuous mode
Oct 07 14:44:02 compute-0 NetworkManager[44949]: <info>  [1759848242.3831] device (tapb3a7ccba-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01446|binding|INFO|Releasing lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 from this chassis (sb_readonly=0)
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01447|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 down in Southbound
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01448|binding|INFO|Removing iface tapb3a7ccba-7b ovn-installed in OVS
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 kernel: tap5fb4e372-50 (unregistering): left promiscuous mode
Oct 07 14:44:02 compute-0 NetworkManager[44949]: <info>  [1759848242.4266] device (tap5fb4e372-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.431 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0b:88 10.100.0.7'], port_security=['fa:16:3e:e6:0b:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3a7ccba-7b5f-4e87-af92-723dd36cc703) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.432 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3a7ccba-7b5f-4e87-af92-723dd36cc703 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 unbound from our chassis
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.434 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01449|binding|INFO|Releasing lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 from this chassis (sb_readonly=0)
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01450|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 down in Southbound
Oct 07 14:44:02 compute-0 ovn_controller[151684]: 2025-10-07T14:44:02Z|01451|binding|INFO|Removing iface tap5fb4e372-50 ovn-installed in OVS
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f94fcbea-6379-4cbf-82e6-757b4fc5a223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 07 14:44:02 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Consumed 14.464s CPU time.
Oct 07 14:44:02 compute-0 systemd-machined[214580]: Machine qemu-163-instance-00000081 terminated.
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.494 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], port_security=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5c:bb5e/64', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb4e372-50c4-49a3-a717-ddc2c99673c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85ec7141-7f8d-4349-a242-9520a23484ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.499 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c820d1e0-3d47-4e21-a210-042c1b11d639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.523 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2f535674-d0e9-4dd3-ae0a-34731e06c90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab606dd-f8fb-41f4-879e-47e85cc4a37f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 36531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401106, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 NetworkManager[44949]: <info>  [1759848242.5578] manager: (tap5fb4e372-50): new Tun device (/org/freedesktop/NetworkManager/Devices/581)
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a94dfaa-8d15-471a-9b91-2eceaa2fae5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874481, 'tstamp': 874481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401111, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874484, 'tstamp': 874484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401111, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.565 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.577 2 INFO nova.virt.libvirt.driver [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance destroyed successfully.
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.578 2 DEBUG nova.objects.instance [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.592 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.593 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.595 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.604 2 DEBUG nova.virt.libvirt.vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.605 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.606 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.606 2 DEBUG os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3a7ccba-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.615 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb2399b-7059-430f-97f7-17b379601fa0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.618 2 INFO os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b')
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.619 2 DEBUG nova.virt.libvirt.vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.619 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.620 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.620 2 DEBUG os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb4e372-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.625 2 INFO os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50')
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b52962a6-3248-45f5-a9f6-186df33c6c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.649 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9c7e09-1b92-4917-b7c3-d889da4f1a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.685 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72a90c09-2462-4aa0-b429-ba85e9062769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ffad0b-08a8-474b-9517-4cce7e83b1b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 39280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401156, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81238723-49a7-4e74-bc15-1fe11411a067]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape6e769bc-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874571, 'tstamp': 874571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401157, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.725 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 nova_compute[259550]: 2025-10-07 14:44:02.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.729 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.039 2 INFO nova.virt.libvirt.driver [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deleting instance files /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256_del
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.040 2 INFO nova.virt.libvirt.driver [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deletion of /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256_del complete
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.177 2 INFO nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG oslo.service.loopingcall [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG nova.network.neutron [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.532 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.532 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.644 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.849 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.849 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.921 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.922 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.932 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:44:03 compute-0 nova_compute[259550]: 2025-10-07 14:44:03.933 2 INFO nova.compute.claims [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:44:04 compute-0 ceph-mon[74295]: pgmap v2448: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.046 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.207 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.226 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.227 2 DEBUG nova.compute.provider_tree [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.251 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.277 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.368 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 230 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct 07 14:44:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471044954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.818 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.828 2 DEBUG nova.compute.provider_tree [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.916 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.996 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.996 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.997 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.997 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:04 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.000 2 WARNING nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with vm_state active and task_state deleting.
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.005 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.005 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:44:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1471044954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.098 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.099 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.147 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.186 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.343 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.345 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.345 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating image(s)
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.369 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.398 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.430 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.434 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.506 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.507 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.508 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.508 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.532 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.536 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.901 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.930 2 DEBUG nova.policy [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.964 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:44:05 compute-0 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:44:06 compute-0 ceph-mon[74295]: pgmap v2449: 305 pgs: 305 active+clean; 230 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.064 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.064 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.079 2 DEBUG nova.objects.instance [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.114 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Ensure instance console log exists: /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.616 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:44:06 compute-0 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.216 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.217 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.217 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-deleted-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 INFO nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Neutron deleted interface b3a7ccba-7b5f-4e87-af92-723dd36cc703; detaching it from the instance and deleting it from the info cache
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.network.neutron [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Detach interface failed, port_id=b3a7ccba-7b5f-4e87-af92-723dd36cc703, reason: Instance 1f692a08-811a-41fb-a8a2-aa936481a256 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.382 2 WARNING nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with vm_state active and task_state deleting.
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.427 2 DEBUG nova.network.neutron [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.448 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Successfully created port: 677a2058-6a17-4388-9011-69553854c197 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.561 2 INFO nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 4.38 seconds to deallocate network for instance.
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.755 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.756 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:07 compute-0 nova_compute[259550]: 2025-10-07 14:44:07.914 2 DEBUG oslo_concurrency.processutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:08 compute-0 ceph-mon[74295]: pgmap v2450: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:08 compute-0 podman[401348]: 2025-10-07 14:44:08.100072181 +0000 UTC m=+0.086488650 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:44:08 compute-0 podman[401349]: 2025-10-07 14:44:08.116892858 +0000 UTC m=+0.103355298 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 07 14:44:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675125889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.410 2 DEBUG oslo_concurrency.processutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.417 2 DEBUG nova.compute.provider_tree [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.459 2 DEBUG nova.scheduler.client.report [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.614 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.797 2 INFO nova.scheduler.client.report [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 1f692a08-811a-41fb-a8a2-aa936481a256
Oct 07 14:44:08 compute-0 nova_compute[259550]: 2025-10-07 14:44:08.965 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:08 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:44:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/675125889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.326 2 DEBUG nova.compute.manager [req-531cdbe6-eccd-4c27-9041-194ef7857684 req-a4d4f773-4004-4e61-af6a-b09f30c2ab2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-deleted-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.722 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Successfully updated port: 677a2058-6a17-4388-9011-69553854c197 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.742 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.743 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.743 2 INFO nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shelving
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.772 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.773 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.773 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.802 2 DEBUG nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.981 2 DEBUG nova.compute.manager [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.982 2 DEBUG nova.compute.manager [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.982 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:09 compute-0 nova_compute[259550]: 2025-10-07 14:44:09.984 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:44:10 compute-0 ceph-mon[74295]: pgmap v2451: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct 07 14:44:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 230 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 3.3 MiB/s wr, 110 op/s
Oct 07 14:44:11 compute-0 nova_compute[259550]: 2025-10-07 14:44:11.924 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:12 compute-0 kernel: tap3ac63514-77 (unregistering): left promiscuous mode
Oct 07 14:44:12 compute-0 NetworkManager[44949]: <info>  [1759848252.0449] device (tap3ac63514-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:12 compute-0 ovn_controller[151684]: 2025-10-07T14:44:12Z|01452|binding|INFO|Releasing lport 3ac63514-77e7-4d94-a67c-94806ca3b58b from this chassis (sb_readonly=0)
Oct 07 14:44:12 compute-0 ovn_controller[151684]: 2025-10-07T14:44:12Z|01453|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b down in Southbound
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 ovn_controller[151684]: 2025-10-07T14:44:12Z|01454|binding|INFO|Removing iface tap3ac63514-77 ovn-installed in OVS
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 ceph-mon[74295]: pgmap v2452: 305 pgs: 305 active+clean; 230 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 3.3 MiB/s wr, 110 op/s
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 07 14:44:12 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 13.633s CPU time.
Oct 07 14:44:12 compute-0 systemd-machined[214580]: Machine qemu-164-instance-00000083 terminated.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.186 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.187 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 unbound from our chassis
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.189 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c054d6f-68ec-4f0b-9362-221001cc6b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92c2388b-57e8-4c11-9387-b6f2309a8a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.190 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace which is not needed anymore
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.250 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : haproxy version is 2.8.14-c23fe91
Oct 07 14:44:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : path to executable is /usr/sbin/haproxy
Oct 07 14:44:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [WARNING]  (400152) : Exiting Master process...
Oct 07 14:44:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [ALERT]    (400152) : Current worker (400160) exited with code 143 (Terminated)
Oct 07 14:44:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [WARNING]  (400152) : All workers exited. Exiting... (0)
Oct 07 14:44:12 compute-0 systemd[1]: libpod-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope: Deactivated successfully.
Oct 07 14:44:12 compute-0 podman[401432]: 2025-10-07 14:44:12.335855619 +0000 UTC m=+0.050293758 container died d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5-userdata-shm.mount: Deactivated successfully.
Oct 07 14:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d6983957f31550d23f88388bbfaa03a6c047e05d35d4dc2cf9e78281c838558-merged.mount: Deactivated successfully.
Oct 07 14:44:12 compute-0 podman[401432]: 2025-10-07 14:44:12.379564901 +0000 UTC m=+0.094003040 container cleanup d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:44:12 compute-0 systemd[1]: libpod-conmon-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope: Deactivated successfully.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.416 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.416 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:12 compute-0 podman[401468]: 2025-10-07 14:44:12.448260197 +0000 UTC m=+0.042720396 container remove d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.454 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[646ed011-f0a3-4aae-b006-c82918b60cd9]: (4, ('Tue Oct  7 02:44:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5)\nd8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5\nTue Oct  7 02:44:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5)\nd8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf82003b-7d57-406c-b89a-ac2c20597524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.457 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 kernel: tap7c054d6f-60: left promiscuous mode
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.480 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbb7f3c-a9a9-4eb7-845d-3cf077be028b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.524 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69fce859-9062-42d9-9f09-a9cb91222d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.525 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b2239ab1-75f8-4431-8d4b-d7b38183d8e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 173 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee33aa2-c237-4062-a6c4-44ed77fe5f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880656, 'reachable_time': 20673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401487, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c054d6f\x2d68ec\x2d4f0b\x2d9362\x2d221001cc6b67.mount: Deactivated successfully.
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.547 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:44:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.547 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e6772d-a823-45f5-bc7e-ba5ba3a78b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.569 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.570 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance network_info: |[{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.571 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.571 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.575 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start _get_guest_xml network_info=[{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.580 2 WARNING nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.586 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.586 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.593 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.595 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.599 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.599 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.603 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.780 2 DEBUG nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.780 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.781 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.781 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.782 2 DEBUG nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.782 2 WARNING nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state active and task_state shelving.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.821 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance shutdown successfully after 3 seconds.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.827 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.827 2 DEBUG nova.objects.instance [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841762384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:12 compute-0 nova_compute[259550]: 2025-10-07 14:44:12.883 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:44:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451467580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.050 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3841762384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1451467580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.075 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.080 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.231 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Beginning cold snapshot process
Oct 07 14:44:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:44:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497203963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.547 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.548 2 DEBUG nova.virt.libvirt.vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:05Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.548 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.549 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.551 2 DEBUG nova.objects.instance [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.568 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.568 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.571 2 INFO nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Terminating instance
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.572 2 DEBUG nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.601 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.601 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:44:13 compute-0 kernel: tapc75fde3c-84 (unregistering): left promiscuous mode
Oct 07 14:44:13 compute-0 NetworkManager[44949]: <info>  [1759848253.6434] device (tapc75fde3c-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01455|binding|INFO|Releasing lport c75fde3c-8461-4ed7-9c14-7f14f5794599 from this chassis (sb_readonly=0)
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01456|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 down in Southbound
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01457|binding|INFO|Removing iface tapc75fde3c-84 ovn-installed in OVS
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 kernel: tapdfe40ca6-70 (unregistering): left promiscuous mode
Oct 07 14:44:13 compute-0 NetworkManager[44949]: <info>  [1759848253.6860] device (tapdfe40ca6-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01458|binding|INFO|Releasing lport dfe40ca6-700f-4101-8729-3d1ee103c5ea from this chassis (sb_readonly=1)
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01459|binding|INFO|Removing iface tapdfe40ca6-70 ovn-installed in OVS
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01460|if_status|INFO|Dropped 4 log messages in last 166 seconds (most recently, 166 seconds ago) due to excessive rate
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01461|if_status|INFO|Not setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea down as sb is readonly
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 ovn_controller[151684]: 2025-10-07T14:44:13Z|01462|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea down in Southbound
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.705 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:99:4d 10.100.0.4'], port_security=['fa:16:3e:41:99:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c75fde3c-8461-4ed7-9c14-7f14f5794599) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.706 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c75fde3c-8461-4ed7-9c14-7f14f5794599 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 unbound from our chassis
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.707 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb059ee7-3091-491e-8da2-c9bd1da0f922, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.708 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71d25368-0e90-42c3-baa3-328aa8418771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.708 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 namespace which is not needed anymore
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <uuid>489f6a90-6b26-4b8b-aa3e-095d1d8df333</uuid>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <name>instance-00000084</name>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-992946339</nova:name>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:44:12</nova:creationTime>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <nova:port uuid="677a2058-6a17-4388-9011-69553854c197">
Oct 07 14:44:13 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <system>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="serial">489f6a90-6b26-4b8b-aa3e-095d1d8df333</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="uuid">489f6a90-6b26-4b8b-aa3e-095d1d8df333</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </system>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <os>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </os>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <features>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </features>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk">
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config">
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </source>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:44:13 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:72:0b:2c"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <target dev="tap677a2058-6a"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/console.log" append="off"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <video>
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </video>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:44:13 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:44:13 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:44:13 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:44:13 compute-0 nova_compute[259550]: </domain>
Oct 07 14:44:13 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Preparing to wait for external event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.724 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.724 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.725 2 DEBUG nova.virt.libvirt.vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:05Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.725 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG os_vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a2058-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap677a2058-6a, col_values=(('external_ids', {'iface-id': '677a2058-6a17-4388-9011-69553854c197', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:0b:2c', 'vm-uuid': '489f6a90-6b26-4b8b-aa3e-095d1d8df333'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 NetworkManager[44949]: <info>  [1759848253.7334] manager: (tap677a2058-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.744 2 INFO os_vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a')
Oct 07 14:44:13 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 07 14:44:13 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 16.471s CPU time.
Oct 07 14:44:13 compute-0 systemd-machined[214580]: Machine qemu-160-instance-0000007f terminated.
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.784 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], port_security=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:c556/64', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=dfe40ca6-700f-4101-8729-3d1ee103c5ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:13 compute-0 NetworkManager[44949]: <info>  [1759848253.7901] manager: (tapc75fde3c-84): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.790 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.791 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.822 2 INFO nova.virt.libvirt.driver [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance destroyed successfully.
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.822 2 DEBUG nova.objects.instance [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:13 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : haproxy version is 2.8.14-c23fe91
Oct 07 14:44:13 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : path to executable is /usr/sbin/haproxy
Oct 07 14:44:13 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [WARNING]  (397660) : Exiting Master process...
Oct 07 14:44:13 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [ALERT]    (397660) : Current worker (397662) exited with code 143 (Terminated)
Oct 07 14:44:13 compute-0 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [WARNING]  (397660) : All workers exited. Exiting... (0)
Oct 07 14:44:13 compute-0 systemd[1]: libpod-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope: Deactivated successfully.
Oct 07 14:44:13 compute-0 podman[401607]: 2025-10-07 14:44:13.860040593 +0000 UTC m=+0.060787687 container died be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d-userdata-shm.mount: Deactivated successfully.
Oct 07 14:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-aade3835522dc3103f13522e66cfc9c234f96ccd29b80c7bd03443b1f8c3128d-merged.mount: Deactivated successfully.
Oct 07 14:44:13 compute-0 podman[401607]: 2025-10-07 14:44:13.905108921 +0000 UTC m=+0.105855995 container cleanup be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:44:13 compute-0 systemd[1]: libpod-conmon-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope: Deactivated successfully.
Oct 07 14:44:13 compute-0 podman[401659]: 2025-10-07 14:44:13.96678624 +0000 UTC m=+0.038328019 container remove be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c63df306-df07-4655-b347-7cc726e7e32b]: (4, ('Tue Oct  7 02:44:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 (be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d)\nbe3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d\nTue Oct  7 02:44:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 (be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d)\nbe3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[527641f8-4909-431f-a323-bb4503d22a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.979 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.877315521240234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:13 compute-0 kernel: tapbb059ee7-30: left promiscuous mode
Oct 07 14:44:13 compute-0 nova_compute[259550]: 2025-10-07 14:44:13.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.005 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8118be85-df07-444a-80a0-6fe1eba319b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.038 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2cedabd5-06c8-4175-a83b-e2f050959bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.039 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df55eda0-85e7-4581-8081-9925c5ddd591]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.055 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58750837-4637-4b7f-a22e-8687727a56e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874462, 'reachable_time': 26257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401686, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dbb059ee7\x2d3091\x2d491e\x2d8da2\x2dc9bd1da0f922.mount: Deactivated successfully.
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.057 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.057 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d07df711-96bc-43b9-bcf2-923c2a2b18f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.058 161536 INFO neutron.agent.ovn.metadata.agent [-] Port dfe40ca6-700f-4101-8729-3d1ee103c5ea in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.059 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6e769bc-2b33-4210-8062-fbc8d16f9127, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dacf9111-baa9-4d7e-8e6d-b93f42879e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.060 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 namespace which is not needed anymore
Oct 07 14:44:14 compute-0 ceph-mon[74295]: pgmap v2453: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 173 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Oct 07 14:44:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1497203963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:14 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : haproxy version is 2.8.14-c23fe91
Oct 07 14:44:14 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : path to executable is /usr/sbin/haproxy
Oct 07 14:44:14 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [WARNING]  (397831) : Exiting Master process...
Oct 07 14:44:14 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [ALERT]    (397831) : Current worker (397833) exited with code 143 (Terminated)
Oct 07 14:44:14 compute-0 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [WARNING]  (397831) : All workers exited. Exiting... (0)
Oct 07 14:44:14 compute-0 systemd[1]: libpod-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope: Deactivated successfully.
Oct 07 14:44:14 compute-0 conmon[397827]: conmon 6a035aa3c4170c2ee13b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope/container/memory.events
Oct 07 14:44:14 compute-0 podman[401704]: 2025-10-07 14:44:14.195188521 +0000 UTC m=+0.049721882 container died 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109-userdata-shm.mount: Deactivated successfully.
Oct 07 14:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-975fa376099815760edd3ac270db67c4e9de90d314e2f36412b98c8c8fb0e428-merged.mount: Deactivated successfully.
Oct 07 14:44:14 compute-0 podman[401704]: 2025-10-07 14:44:14.229615817 +0000 UTC m=+0.084149178 container cleanup 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:44:14 compute-0 systemd[1]: libpod-conmon-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope: Deactivated successfully.
Oct 07 14:44:14 compute-0 podman[401734]: 2025-10-07 14:44:14.285994485 +0000 UTC m=+0.037420346 container remove 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f95fb358-059f-48fe-b246-41c28251e711]: (4, ('Tue Oct  7 02:44:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 (6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109)\n6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109\nTue Oct  7 02:44:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 (6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109)\n6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.292 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[296cdcb6-0554-4455-a361-acaded6513ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 kernel: tape6e769bc-20: left promiscuous mode
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85c118ea-dcb2-4bed-a7a3-0722d413c066]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.358 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[053a9e94-a80a-4c98-a7dc-d7596ec35c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.359 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ee73da-b3ab-42c6-a44c-aa4c5e53ed00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.378 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f41a595-c07d-44a8-97ca-361113a9d1cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874552, 'reachable_time': 15137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401762, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.380 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:44:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.380 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b26fa508-a95c-4dbb-9138-3b9d7c1b2b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.646 2 DEBUG nova.virt.libvirt.vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.646 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.647 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.647 2 DEBUG os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc75fde3c-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.710 2 INFO os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84')
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.710 2 DEBUG nova.virt.libvirt.vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfe40ca6-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.721 2 INFO os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70')
Oct 07 14:44:14 compute-0 systemd[1]: run-netns-ovnmeta\x2de6e769bc\x2d2b33\x2d4210\x2d8062\x2dfbc8d16f9127.mount: Deactivated successfully.
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.953 2 DEBUG nova.virt.libvirt.imagebackend [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.966 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.967 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.967 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:72:0b:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.968 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Using config drive
Oct 07 14:44:14 compute-0 nova_compute[259550]: 2025-10-07 14:44:14.989 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.032 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 77918bef-8f72-4152-ac55-f4d4c98477ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 489f6a90-6b26-4b8b-aa3e-095d1d8df333 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.108 2 INFO nova.virt.libvirt.driver [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deleting instance files /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec_del
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.109 2 INFO nova.virt.libvirt.driver [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deletion of /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec_del complete
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.114 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.280 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] creating snapshot(a6640c287f554325b70ebfdf06813bd3) on rbd image(30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.436 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.438 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890338425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.564 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.571 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.699 2 INFO nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 2.13 seconds to destroy the instance on the hypervisor.
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG oslo.service.loopingcall [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG nova.network.neutron [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.769 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:44:15 compute-0 nova_compute[259550]: 2025-10-07 14:44:15.988 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.028 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.029 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 07 14:44:16 compute-0 ceph-mon[74295]: pgmap v2454: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 07 14:44:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3890338425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.090 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 07 14:44:16 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.172 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] cloning vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk@a6640c287f554325b70ebfdf06813bd3 to images/b28f4653-03be-4806-84c3-f036c6d93d6e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.217 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating config drive at /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.221 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppl9bavil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.287 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] flattening images/b28f4653-03be-4806-84c3-f036c6d93d6e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.371 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppl9bavil" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.402 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.405 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.742 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.743 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deleting local config drive /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config because it was imported into RBD.
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.757 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] removing snapshot(a6640c287f554325b70ebfdf06813bd3) on rbd image(30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.787 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:16 compute-0 kernel: tap677a2058-6a: entered promiscuous mode
Oct 07 14:44:16 compute-0 NetworkManager[44949]: <info>  [1759848256.8045] manager: (tap677a2058-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Oct 07 14:44:16 compute-0 ovn_controller[151684]: 2025-10-07T14:44:16Z|01463|binding|INFO|Claiming lport 677a2058-6a17-4388-9011-69553854c197 for this chassis.
Oct 07 14:44:16 compute-0 ovn_controller[151684]: 2025-10-07T14:44:16Z|01464|binding|INFO|677a2058-6a17-4388-9011-69553854c197: Claiming fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:16 compute-0 ovn_controller[151684]: 2025-10-07T14:44:16Z|01465|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 ovn-installed in OVS
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:16 compute-0 nova_compute[259550]: 2025-10-07 14:44:16.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:16 compute-0 systemd-machined[214580]: New machine qemu-165-instance-00000084.
Oct 07 14:44:16 compute-0 systemd-udevd[402008]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:44:16 compute-0 NetworkManager[44949]: <info>  [1759848256.8497] device (tap677a2058-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:44:16 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Oct 07 14:44:16 compute-0 NetworkManager[44949]: <info>  [1759848256.8524] device (tap677a2058-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.023 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:0b:2c 10.100.0.13'], port_security=['fa:16:3e:72:0b:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '489f6a90-6b26-4b8b-aa3e-095d1d8df333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3234eb-21e6-48c7-8919-4558ed1fcfca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2a2dd7-3564-4953-a9fd-9479810a55c3, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=677a2058-6a17-4388-9011-69553854c197) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:17 compute-0 ovn_controller[151684]: 2025-10-07T14:44:17Z|01466|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 up in Southbound
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.024 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 677a2058-6a17-4388-9011-69553854c197 in datapath 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 bound to our chassis
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.026 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.038 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d615ec37-072f-4fe3-966b-102a0245a6b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.039 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5daef5b9-01 in ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5daef5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5510e64-d753-45f5-80e9-18f679af9cbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[784b7d7f-10f1-492a-a633-c3e74a18248f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.043 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.053 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbd4810-00a3-484b-8529-78a2dd8dcc92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fc9226-c063-49aa-8707-30768097b84d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.095 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[797681ea-878b-4d42-b86e-d78ceafc42ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 07 14:44:17 compute-0 ceph-mon[74295]: osdmap e279: 3 total, 3 up, 3 in
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.104 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2c2035-8e8b-445a-b1d8-bdb3fa3cf5ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 NetworkManager[44949]: <info>  [1759848257.1051] manager: (tap5daef5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Oct 07 14:44:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 07 14:44:17 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.145 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7034a049-9bba-45af-a31a-45800e23ceb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.148 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[260a8273-47e1-4d1d-a8f5-902bdfb27d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.162 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] creating snapshot(snap) on rbd image(b28f4653-03be-4806-84c3-f036c6d93d6e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 14:44:17 compute-0 NetworkManager[44949]: <info>  [1759848257.1731] device (tap5daef5b9-00): carrier: link connected
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.182 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a75384-5350-4e8d-8836-a4cadca5d6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.201 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30be3b7f-aba0-4f19-9683-93cb80b57123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5daef5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:3b:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 420], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884073, 'reachable_time': 24792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402062, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.216 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a768caf9-1dad-49f6-8585-84326eb765c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:3be4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884073, 'tstamp': 884073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402065, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[830e83ec-2a7f-4ae3-a97e-5607761a6b80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5daef5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:3b:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 420], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884073, 'reachable_time': 24792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402066, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fb9176-aac9-4c6a-8ddf-51b04984d2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.330 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ec2042-4bf6-47c5-bbf7-18b8d4fbc570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.334 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5daef5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.334 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.335 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5daef5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:17 compute-0 NetworkManager[44949]: <info>  [1759848257.3383] manager: (tap5daef5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:17 compute-0 kernel: tap5daef5b9-00: entered promiscuous mode
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.343 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5daef5b9-00, col_values=(('external_ids', {'iface-id': '62e3fee2-0588-4b81-9e92-ba04a35b5aca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:17 compute-0 ovn_controller[151684]: 2025-10-07T14:44:17Z|01467|binding|INFO|Releasing lport 62e3fee2-0588-4b81-9e92-ba04a35b5aca from this chassis (sb_readonly=0)
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.366 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61cf6c52-6371-49f6-a9c2-c7d55e2fbbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.367 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:44:17 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.368 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'env', 'PROCESS_TAG=haproxy-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.576 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848242.5747437, 1f692a08-811a-41fb-a8a2-aa936481a256 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.576 2 INFO nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Stopped (Lifecycle Event)
Oct 07 14:44:17 compute-0 podman[402145]: 2025-10-07 14:44:17.695395758 +0000 UTC m=+0.027865652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:44:17 compute-0 podman[402145]: 2025-10-07 14:44:17.798074467 +0000 UTC m=+0.130544331 container create feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.818 2 DEBUG nova.compute.manager [None req-ef77697e-7eb7-4875-8a52-1fdcdeb9ba98 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:17 compute-0 systemd[1]: Started libpod-conmon-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope.
Oct 07 14:44:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23924fe35529e2e0cddf4b199e3d7462d2dc12d09d3b0995ef3a978cbf4b85b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:17 compute-0 podman[402145]: 2025-10-07 14:44:17.958406519 +0000 UTC m=+0.290876403 container init feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:44:17 compute-0 podman[402145]: 2025-10-07 14:44:17.963990837 +0000 UTC m=+0.296460701 container start feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.974 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.975 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.975 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.976 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.980 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:17 compute-0 nova_compute[259550]: 2025-10-07 14:44:17.980 2 WARNING nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with vm_state active and task_state deleting.
Oct 07 14:44:17 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : New worker (402166) forked
Oct 07 14:44:17 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : Loading success.
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.029 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848258.0292382, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Started (Lifecycle Event)
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.051 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.055 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848258.029427, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.055 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Paused (Lifecycle Event)
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.081 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.085 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.115 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:44:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 07 14:44:18 compute-0 ceph-mon[74295]: pgmap v2456: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Oct 07 14:44:18 compute-0 ceph-mon[74295]: osdmap e280: 3 total, 3 up, 3 in
Oct 07 14:44:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 07 14:44:18 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.300 2 DEBUG nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.302 2 WARNING nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with vm_state active and task_state deleting.
Oct 07 14:44:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 10 KiB/s wr, 43 op/s
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.944 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.944 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:18 compute-0 nova_compute[259550]: 2025-10-07 14:44:18.979 2 WARNING nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state active and task_state shelving_image_uploading.
Oct 07 14:44:19 compute-0 ceph-mon[74295]: osdmap e281: 3 total, 3 up, 3 in
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.604 2 DEBUG nova.network.neutron [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.626 2 INFO nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 3.93 seconds to deallocate network for instance.
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.695 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.696 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.780 2 DEBUG oslo_concurrency.processutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.994 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Snapshot image upload complete
Oct 07 14:44:19 compute-0 nova_compute[259550]: 2025-10-07 14:44:19.994 2 DEBUG nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.058 2 INFO nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shelve offloading
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.063 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-deleted-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Processing event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.067 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] No waiting events found dispatching network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.067 2 WARNING nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received unexpected event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 for instance with vm_state building and task_state spawning.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.067 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-deleted-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.068 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.073 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.077 2 INFO nova.virt.libvirt.driver [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance spawned successfully.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.077 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.078 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.079 2 DEBUG nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.080 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848260.079563, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Resumed (Lifecycle Event)
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.102 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.103 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.103 2 DEBUG nova.network.neutron [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.109 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.114 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.124 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.153 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.188 2 INFO nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 14.85 seconds to spawn the instance on the hypervisor.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.189 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257071256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.226 2 DEBUG oslo_concurrency.processutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.232 2 DEBUG nova.compute.provider_tree [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.253 2 INFO nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 16.37 seconds to build instance.
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.259 2 DEBUG nova.scheduler.client.report [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.284 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.290 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:20 compute-0 ceph-mon[74295]: pgmap v2459: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 10 KiB/s wr, 43 op/s
Oct 07 14:44:20 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4257071256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.324 2 INFO nova.scheduler.client.report [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 77918bef-8f72-4152-ac55-f4d4c98477ec
Oct 07 14:44:20 compute-0 nova_compute[259550]: 2025-10-07 14:44:20.406 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 220 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct 07 14:44:21 compute-0 nova_compute[259550]: 2025-10-07 14:44:21.586 2 DEBUG nova.network.neutron [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:21 compute-0 nova_compute[259550]: 2025-10-07 14:44:21.606 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:22 compute-0 podman[402197]: 2025-10-07 14:44:22.084718318 +0000 UTC m=+0.061891956 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 07 14:44:22 compute-0 podman[402198]: 2025-10-07 14:44:22.13299157 +0000 UTC m=+0.109199683 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 07 14:44:22 compute-0 ceph-mon[74295]: pgmap v2460: 305 pgs: 305 active+clean; 220 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 246 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 7.3 MiB/s wr, 182 op/s
Oct 07 14:44:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:22.691 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:22 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:22.693 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:44:22
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'volumes']
Oct 07 14:44:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.788 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.788 2 DEBUG nova.objects.instance [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'resources' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.801 2 DEBUG nova.virt.libvirt.vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:14Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.802 2 DEBUG nova.network.os_vif_util [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.804 2 DEBUG nova.network.os_vif_util [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.805 2 DEBUG os_vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac63514-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.812 2 INFO os_vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.867 2 DEBUG nova.compute.manager [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG nova.compute.manager [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:22 compute-0 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:44:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:44:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.213 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting instance files /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.214 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deletion of /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del complete
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.310 2 INFO nova.scheduler.client.report [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Deleted allocations for instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.361 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.362 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.412 2 DEBUG oslo_concurrency.processutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658636029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.888 2 DEBUG oslo_concurrency.processutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.893 2 DEBUG nova.compute.provider_tree [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.913 2 DEBUG nova.scheduler.client.report [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.936 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.972 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.972 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": null, "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3ac63514-77", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.986 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:23 compute-0 nova_compute[259550]: 2025-10-07 14:44:23.996 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:24 compute-0 nova_compute[259550]: 2025-10-07 14:44:24.304 2 DEBUG nova.compute.manager [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:24 compute-0 nova_compute[259550]: 2025-10-07 14:44:24.304 2 DEBUG nova.compute.manager [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:24 compute-0 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:24 compute-0 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:24 compute-0 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:24 compute-0 ceph-mon[74295]: pgmap v2461: 305 pgs: 305 active+clean; 246 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 7.3 MiB/s wr, 182 op/s
Oct 07 14:44:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2658636029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 196 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct 07 14:44:25 compute-0 nova_compute[259550]: 2025-10-07 14:44:25.492 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:25 compute-0 nova_compute[259550]: 2025-10-07 14:44:25.493 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:25 compute-0 nova_compute[259550]: 2025-10-07 14:44:25.514 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:25 compute-0 ovn_controller[151684]: 2025-10-07T14:44:25Z|01468|binding|INFO|Releasing lport 62e3fee2-0588-4b81-9e92-ba04a35b5aca from this chassis (sb_readonly=0)
Oct 07 14:44:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:25.694 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:25 compute-0 nova_compute[259550]: 2025-10-07 14:44:25.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:26 compute-0 ceph-mon[74295]: pgmap v2462: 305 pgs: 305 active+clean; 196 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct 07 14:44:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.299 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848252.2985566, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.300 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Stopped (Lifecycle Event)
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.327 2 DEBUG nova.compute.manager [None req-b8a2af00-c87b-4d6c-860a-150608e9dace - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.378 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.378 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.379 2 INFO nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Unshelving
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.464 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.465 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.472 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_requests' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.493 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.507 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.508 2 INFO nova.compute.claims [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.609 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:27 compute-0 nova_compute[259550]: 2025-10-07 14:44:27.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351130135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.115 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.120 2 DEBUG nova.compute.provider_tree [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 07 14:44:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 07 14:44:28 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.138 2 DEBUG nova.scheduler.client.report [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.159 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.348 2 INFO nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating port 3ac63514-77e7-4d94-a67c-94806ca3b58b with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 07 14:44:28 compute-0 ceph-mon[74295]: pgmap v2463: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct 07 14:44:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2351130135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:28 compute-0 ceph-mon[74295]: osdmap e282: 3 total, 3 up, 3 in
Oct 07 14:44:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 232 op/s
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.818 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848253.8180351, 77918bef-8f72-4152-ac55-f4d4c98477ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.819 2 INFO nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Stopped (Lifecycle Event)
Oct 07 14:44:28 compute-0 nova_compute[259550]: 2025-10-07 14:44:28.838 2 DEBUG nova.compute.manager [None req-1b7f581f-5081-40a0-803a-0ac631cfaf51 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.082 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.083 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.083 2 DEBUG nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.161 2 DEBUG nova.compute.manager [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.162 2 DEBUG nova.compute.manager [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:29 compute-0 nova_compute[259550]: 2025-10-07 14:44:29.162 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.363698) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269363789, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 252, "total_data_size": 3310780, "memory_usage": 3359984, "flush_reason": "Manual Compaction"}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269378541, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3254448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49753, "largest_seqno": 51833, "table_properties": {"data_size": 3245073, "index_size": 5869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19509, "raw_average_key_size": 20, "raw_value_size": 3226214, "raw_average_value_size": 3360, "num_data_blocks": 260, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848054, "oldest_key_time": 1759848054, "file_creation_time": 1759848269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 14875 microseconds, and 7001 cpu microseconds.
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.378584) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3254448 bytes OK
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.378604) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380407) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380424) EVENT_LOG_v1 {"time_micros": 1759848269380419, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3302033, prev total WAL file size 3302033, number of live WAL files 2.
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.381379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3178KB)], [116(8285KB)]
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269381453, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11738451, "oldest_snapshot_seqno": -1}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7374 keys, 10043767 bytes, temperature: kUnknown
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269449671, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10043767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9994741, "index_size": 29458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 190866, "raw_average_key_size": 25, "raw_value_size": 9863285, "raw_average_value_size": 1337, "num_data_blocks": 1153, "num_entries": 7374, "num_filter_entries": 7374, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.449926) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10043767 bytes
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.451618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.9 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7894, records dropped: 520 output_compression: NoCompression
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.451655) EVENT_LOG_v1 {"time_micros": 1759848269451638, "job": 70, "event": "compaction_finished", "compaction_time_micros": 68305, "compaction_time_cpu_micros": 25563, "output_level": 6, "num_output_files": 1, "total_output_size": 10043767, "num_input_records": 7894, "num_output_records": 7374, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269452629, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269454646, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.381247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:29 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:30 compute-0 ceph-mon[74295]: pgmap v2465: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 232 op/s
Oct 07 14:44:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.1 MiB/s wr, 138 op/s
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.146 2 DEBUG nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.167 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.168 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.169 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating image(s)
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.189 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.193 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.195 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.195 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.236 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.255 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.259 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.261 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.506 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 14:44:31 compute-0 ceph-mon[74295]: pgmap v2466: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.1 MiB/s wr, 138 op/s
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.566 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.567 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] cloning images/b28f4653-03be-4806-84c3-f036c6d93d6e@snap to None/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.702 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.863 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'migration_context' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:31 compute-0 nova_compute[259550]: 2025-10-07 14:44:31.921 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] flattening vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.336 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Image rbd:vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.338 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.338 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ensure instance console log exists: /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.339 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.339 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.340 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.342 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start _get_guest_xml network_info=[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:44:09Z,direct_url=<?>,disk_format='raw',id=b28f4653-03be-4806-84c3-f036c6d93d6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-308957706-shelved',owner='ac40ef14492f40768b3852a40da26621',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:44:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.346 2 WARNING nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.349 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.350 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.353 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:44:09Z,direct_url=<?>,disk_format='raw',id=b28f4653-03be-4806-84c3-f036c6d93d6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-308957706-shelved',owner='ac40ef14492f40768b3852a40da26621',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:44:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.355 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.355 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.375 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 KiB/s wr, 115 op/s
Oct 07 14:44:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:44:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:44:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:44:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:44:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:44:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431466215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.898 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.923 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:32 compute-0 nova_compute[259550]: 2025-10-07 14:44:32.927 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.156 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.157 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.175 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:44:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/559260670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:33 compute-0 ovn_controller[151684]: 2025-10-07T14:44:33Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:33 compute-0 ovn_controller[151684]: 2025-10-07T14:44:33Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.407 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.409 2 DEBUG nova.virt.libvirt.vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='b28f4653-03be-4806-84c3-f036c6d93d6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vide
o_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:27Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.409 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.410 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.412 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.426 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <uuid>30241223-64c5-4a88-8ba2-ee340fe6cbd3</uuid>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <name>instance-00000083</name>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:name>tempest-TestShelveInstance-server-308957706</nova:name>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:44:32</nova:creationTime>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:user uuid="52bb2c10051444f181ee0572525fbe9d">tempest-TestShelveInstance-703128978-project-member</nova:user>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:project uuid="ac40ef14492f40768b3852a40da26621">tempest-TestShelveInstance-703128978</nova:project>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="b28f4653-03be-4806-84c3-f036c6d93d6e"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <nova:port uuid="3ac63514-77e7-4d94-a67c-94806ca3b58b">
Oct 07 14:44:33 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <system>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="serial">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="uuid">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </system>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <os>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </os>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <features>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </features>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk">
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config">
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </source>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:44:33 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:32:1b:50"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <target dev="tap3ac63514-77"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log" append="off"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <video>
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </video>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:44:33 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:44:33 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:44:33 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:44:33 compute-0 nova_compute[259550]: </domain>
Oct 07 14:44:33 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.428 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Preparing to wait for external event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.430 2 DEBUG nova.virt.libvirt.vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='b28f4653-03be-4806-84c3-f036c6d93d6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:27Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.430 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.431 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.431 2 DEBUG os_vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ac63514-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ac63514-77, col_values=(('external_ids', {'iface-id': '3ac63514-77e7-4d94-a67c-94806ca3b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:1b:50', 'vm-uuid': '30241223-64c5-4a88-8ba2-ee340fe6cbd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:33 compute-0 NetworkManager[44949]: <info>  [1759848273.4392] manager: (tap3ac63514-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.444 2 INFO os_vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No VIF found with MAC fa:16:3e:32:1b:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.506 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Using config drive
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.530 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.549 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:33 compute-0 ceph-mon[74295]: pgmap v2467: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 KiB/s wr, 115 op/s
Oct 07 14:44:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:44:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:44:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3431466215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/559260670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:44:33 compute-0 nova_compute[259550]: 2025-10-07 14:44:33.663 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'keypairs' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.090 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating config drive at /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.095 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph60qwz87 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.237 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph60qwz87" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.261 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.265 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.438 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.440 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting local config drive /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config because it was imported into RBD.
Oct 07 14:44:34 compute-0 kernel: tap3ac63514-77: entered promiscuous mode
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.5025] manager: (tap3ac63514-77): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Oct 07 14:44:34 compute-0 ovn_controller[151684]: 2025-10-07T14:44:34Z|01469|binding|INFO|Claiming lport 3ac63514-77e7-4d94-a67c-94806ca3b58b for this chassis.
Oct 07 14:44:34 compute-0 ovn_controller[151684]: 2025-10-07T14:44:34Z|01470|binding|INFO|3ac63514-77e7-4d94-a67c-94806ca3b58b: Claiming fa:16:3e:32:1b:50 10.100.0.11
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.514 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '7', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.515 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 bound to our chassis
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.517 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:44:34 compute-0 ovn_controller[151684]: 2025-10-07T14:44:34Z|01471|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b ovn-installed in OVS
Oct 07 14:44:34 compute-0 ovn_controller[151684]: 2025-10-07T14:44:34Z|01472|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b up in Southbound
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.528 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa18e9-696b-4c1e-8abc-71733c6c5126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.529 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c054d6f-61 in ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.531 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c054d6f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.531 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45fe601d-98e4-407f-9c27-6f27a745337c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.532 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd542cb-6e5a-451c-9ead-6960d8f546ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 systemd-udevd[402657]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:44:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 249 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 157 op/s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.544 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8da4d0b6-b777-443b-96c3-3e8f3fa4566f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.5516] device (tap3ac63514-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.5526] device (tap3ac63514-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:44:34 compute-0 systemd-machined[214580]: New machine qemu-166-instance-00000083.
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.557 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[242eadb5-51d7-4ce2-9268-961932eb0a81]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000083.
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.590 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b770f80d-8e72-4def-9781-232fca6e21a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.5989] manager: (tap7c054d6f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[622bc70f-d694-43ec-bb8a-e5d247b233e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.633 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c9efc6-bd51-44b7-9d9b-5f3ad92fbfe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.636 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76e77f01-2477-4477-b64f-5f0604dab6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.6595] device (tap7c054d6f-60): carrier: link connected
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.664 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02b88fb8-9ac6-448c-96a7-75429c46eb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10e837b7-90ab-42ed-bdc0-7a2bc38c135b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885822, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402689, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.698 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[487776a1-426e-446d-8b9a-def27c398ca9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:e109'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 885822, 'tstamp': 885822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402690, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa3bee8-39ec-4193-9fec-aed31ae2c80a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885822, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402691, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[219c5a66-73e2-4398-adcb-a48b84dadd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dac79552-5b36-4503-aa18-8b5c29cd65b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c054d6f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:34 compute-0 NetworkManager[44949]: <info>  [1759848274.8797] manager: (tap7c054d6f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Oct 07 14:44:34 compute-0 kernel: tap7c054d6f-60: entered promiscuous mode
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.883 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c054d6f-60, col_values=(('external_ids', {'iface-id': 'bb603baf-bbde-4821-a893-8713cfab0527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_controller[151684]: 2025-10-07T14:44:34Z|01473|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.888 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4d01d3-bdcf-4ef7-a294-4046b9058277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.889 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:44:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.890 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'env', 'PROCESS_TAG=haproxy-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c054d6f-68ec-4f0b-9362-221001cc6b67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:34 compute-0 nova_compute[259550]: 2025-10-07 14:44:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.144 2 DEBUG nova.compute.manager [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.146 2 DEBUG nova.compute.manager [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Processing event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:44:35 compute-0 podman[402765]: 2025-10-07 14:44:35.27500347 +0000 UTC m=+0.057658184 container create 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:44:35 compute-0 systemd[1]: Started libpod-conmon-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope.
Oct 07 14:44:35 compute-0 podman[402765]: 2025-10-07 14:44:35.242046074 +0000 UTC m=+0.024700788 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d869432837bfb3671cd5715ce91e9446229f7e47546d9d9b9ecd83497bca9d04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:35 compute-0 podman[402765]: 2025-10-07 14:44:35.356013123 +0000 UTC m=+0.138667847 container init 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:44:35 compute-0 podman[402765]: 2025-10-07 14:44:35.362472715 +0000 UTC m=+0.145127429 container start 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 14:44:35 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : New worker (402786) forked
Oct 07 14:44:35 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : Loading success.
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.451 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.450709, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.452 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Started (Lifecycle Event)
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.455 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.459 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.463 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance spawned successfully.
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.499 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.533 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.452111, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.533 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Paused (Lifecycle Event)
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.591 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.595 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.4588773, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Resumed (Lifecycle Event)
Oct 07 14:44:35 compute-0 ceph-mon[74295]: pgmap v2468: 305 pgs: 305 active+clean; 249 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 157 op/s
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.648 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.653 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:44:35 compute-0 nova_compute[259550]: 2025-10-07 14:44:35.694 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:44:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct 07 14:44:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 07 14:44:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 07 14:44:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.260 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.260 2 DEBUG nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.260 2 WARNING nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state shelved_offloaded and task_state spawning.
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.274 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:44:37 compute-0 nova_compute[259550]: 2025-10-07 14:44:37.518 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:37 compute-0 ceph-mon[74295]: pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct 07 14:44:37 compute-0 ceph-mon[74295]: osdmap e283: 3 total, 3 up, 3 in
Oct 07 14:44:38 compute-0 nova_compute[259550]: 2025-10-07 14:44:38.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:38 compute-0 nova_compute[259550]: 2025-10-07 14:44:38.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct 07 14:44:39 compute-0 nova_compute[259550]: 2025-10-07 14:44:39.018 2 INFO nova.compute.manager [None req-32f44190-fbd1-404b-86af-3a2b1fbdb35d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Get console output
Oct 07 14:44:39 compute-0 nova_compute[259550]: 2025-10-07 14:44:39.025 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:44:39 compute-0 podman[402795]: 2025-10-07 14:44:39.074853451 +0000 UTC m=+0.064754623 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 07 14:44:39 compute-0 podman[402796]: 2025-10-07 14:44:39.080803399 +0000 UTC m=+0.066713184 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:44:39 compute-0 ceph-mon[74295]: pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct 07 14:44:39 compute-0 ovn_controller[151684]: 2025-10-07T14:44:39Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 226 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 274 op/s
Oct 07 14:44:41 compute-0 nova_compute[259550]: 2025-10-07 14:44:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:41 compute-0 ceph-mon[74295]: pgmap v2472: 305 pgs: 305 active+clean; 226 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 274 op/s
Oct 07 14:44:42 compute-0 ovn_controller[151684]: 2025-10-07T14:44:42Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 284 op/s
Oct 07 14:44:43 compute-0 nova_compute[259550]: 2025-10-07 14:44:43.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 07 14:44:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 07 14:44:43 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 07 14:44:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.320 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8::f816:3eff:fe31:c94'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff172ff9-04e5-4286-b498-dfa958b1473c) old=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff172ff9-04e5-4286-b498-dfa958b1473c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e updated
Oct 07 14:44:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.323 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66fb655f-860b-4594-864b-a96f80c15e8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:43 compute-0 nova_compute[259550]: 2025-10-07 14:44:43.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:44 compute-0 ceph-mon[74295]: pgmap v2473: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 284 op/s
Oct 07 14:44:44 compute-0 ceph-mon[74295]: osdmap e284: 3 total, 3 up, 3 in
Oct 07 14:44:44 compute-0 ovn_controller[151684]: 2025-10-07T14:44:44Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct 07 14:44:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 140 op/s
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.720 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.722 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.722 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.723 2 INFO nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Terminating instance
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.724 2 DEBUG nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.727 2 DEBUG nova.compute.manager [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.727 2 DEBUG nova.compute.manager [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:44:45 compute-0 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:44:46 compute-0 ceph-mon[74295]: pgmap v2475: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 140 op/s
Oct 07 14:44:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 113 op/s
Oct 07 14:44:46 compute-0 kernel: tap677a2058-6a (unregistering): left promiscuous mode
Oct 07 14:44:46 compute-0 NetworkManager[44949]: <info>  [1759848286.6213] device (tap677a2058-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:46 compute-0 ovn_controller[151684]: 2025-10-07T14:44:46Z|01474|binding|INFO|Releasing lport 677a2058-6a17-4388-9011-69553854c197 from this chassis (sb_readonly=0)
Oct 07 14:44:46 compute-0 ovn_controller[151684]: 2025-10-07T14:44:46Z|01475|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 down in Southbound
Oct 07 14:44:46 compute-0 ovn_controller[151684]: 2025-10-07T14:44:46Z|01476|binding|INFO|Removing iface tap677a2058-6a ovn-installed in OVS
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.640 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:0b:2c 10.100.0.13'], port_security=['fa:16:3e:72:0b:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '489f6a90-6b26-4b8b-aa3e-095d1d8df333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3234eb-21e6-48c7-8919-4558ed1fcfca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2a2dd7-3564-4953-a9fd-9479810a55c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=677a2058-6a17-4388-9011-69553854c197) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 677a2058-6a17-4388-9011-69553854c197 in datapath 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 unbound from our chassis
Oct 07 14:44:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32e353a1-f8eb-486c-a37a-c466dd63a675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 namespace which is not needed anymore
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:46 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 07 14:44:46 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 14.010s CPU time.
Oct 07 14:44:46 compute-0 systemd-machined[214580]: Machine qemu-165-instance-00000084 terminated.
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.768 2 INFO nova.virt.libvirt.driver [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance destroyed successfully.
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.770 2 DEBUG nova.objects.instance [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.783 2 DEBUG nova.virt.libvirt.vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:44:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.784 2 DEBUG nova.network.os_vif_util [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.785 2 DEBUG nova.network.os_vif_util [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.785 2 DEBUG os_vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a2058-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:44:46 compute-0 nova_compute[259550]: 2025-10-07 14:44:46.796 2 INFO os_vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a')
Oct 07 14:44:46 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : haproxy version is 2.8.14-c23fe91
Oct 07 14:44:46 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : path to executable is /usr/sbin/haproxy
Oct 07 14:44:46 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [WARNING]  (402164) : Exiting Master process...
Oct 07 14:44:46 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [ALERT]    (402164) : Current worker (402166) exited with code 143 (Terminated)
Oct 07 14:44:46 compute-0 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [WARNING]  (402164) : All workers exited. Exiting... (0)
Oct 07 14:44:46 compute-0 systemd[1]: libpod-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope: Deactivated successfully.
Oct 07 14:44:46 compute-0 podman[402855]: 2025-10-07 14:44:46.882121431 +0000 UTC m=+0.146716941 container died feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6-userdata-shm.mount: Deactivated successfully.
Oct 07 14:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-23924fe35529e2e0cddf4b199e3d7462d2dc12d09d3b0995ef3a978cbf4b85b8-merged.mount: Deactivated successfully.
Oct 07 14:44:47 compute-0 ceph-mon[74295]: pgmap v2476: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 113 op/s
Oct 07 14:44:47 compute-0 podman[402855]: 2025-10-07 14:44:47.533366991 +0000 UTC m=+0.797962491 container cleanup feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:44:47 compute-0 systemd[1]: libpod-conmon-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope: Deactivated successfully.
Oct 07 14:44:47 compute-0 podman[402912]: 2025-10-07 14:44:47.661144637 +0000 UTC m=+0.105806863 container remove feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[289059dd-4b67-45f8-bd7c-edaa5243ccc6]: (4, ('Tue Oct  7 02:44:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 (feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6)\nfeedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6\nTue Oct  7 02:44:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 (feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6)\nfeedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5045bc36-2448-4702-a497-88f23ca7eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5daef5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:44:47 compute-0 nova_compute[259550]: 2025-10-07 14:44:47.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:47 compute-0 kernel: tap5daef5b9-00: left promiscuous mode
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea25d5b-2427-4081-888a-2c1feaf8605e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 nova_compute[259550]: 2025-10-07 14:44:47.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cab14aad-da9e-43db-ae35-1b42763c5751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.709 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cd1788-d628-45b5-897d-67e01382fd15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.729 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3ce141-2030-45b3-af3b-283fbdcc91e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884065, 'reachable_time': 18791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402930, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d5daef5b9\x2d08c8\x2d4a4d\x2db8b3\x2d4bcbbb7ff9e0.mount: Deactivated successfully.
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.734 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:44:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.734 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34efa42b-7056-43a5-ba09-1cafa5eada8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:48 compute-0 nova_compute[259550]: 2025-10-07 14:44:48.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:48 compute-0 nova_compute[259550]: 2025-10-07 14:44:48.207 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:44:48 compute-0 nova_compute[259550]: 2025-10-07 14:44:48.208 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:48 compute-0 nova_compute[259550]: 2025-10-07 14:44:48.240 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:44:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 112 op/s
Oct 07 14:44:49 compute-0 ovn_controller[151684]: 2025-10-07T14:44:49Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:1b:50 10.100.0.11
Oct 07 14:44:49 compute-0 ceph-mon[74295]: pgmap v2477: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 112 op/s
Oct 07 14:44:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 151 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 23 KiB/s wr, 51 op/s
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.918 2 INFO nova.virt.libvirt.driver [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deleting instance files /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333_del
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.920 2 INFO nova.virt.libvirt.driver [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deletion of /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333_del complete
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.985 2 INFO nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 5.26 seconds to destroy the instance on the hypervisor.
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.986 2 DEBUG oslo.service.loopingcall [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.986 2 DEBUG nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:44:50 compute-0 nova_compute[259550]: 2025-10-07 14:44:50.987 2 DEBUG nova.network.neutron [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:44:51 compute-0 sudo[402932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:51 compute-0 sudo[402932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:51 compute-0 sudo[402932]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:51 compute-0 nova_compute[259550]: 2025-10-07 14:44:51.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:51 compute-0 sudo[402957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:44:51 compute-0 sudo[402957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:51 compute-0 sudo[402957]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:51 compute-0 sudo[402982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:51 compute-0 sudo[402982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:51 compute-0 sudo[402982]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:51 compute-0 sudo[403007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:44:51 compute-0 sudo[403007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:51 compute-0 ceph-mon[74295]: pgmap v2478: 305 pgs: 305 active+clean; 151 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 23 KiB/s wr, 51 op/s
Oct 07 14:44:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.112 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8:0:1:f816:3eff:fe31:c94 2001:db8::f816:3eff:fe31:c94'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe31:c94/64 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff172ff9-04e5-4286-b498-dfa958b1473c) old=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8::f816:3eff:fe31:c94'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:44:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.113 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff172ff9-04e5-4286-b498-dfa958b1473c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e updated
Oct 07 14:44:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.114 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:44:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4102f13-22b8-4f9e-b038-82836459fa27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:44:52 compute-0 sudo[403007]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:52 compute-0 podman[403047]: 2025-10-07 14:44:52.202768515 +0000 UTC m=+0.063674084 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:52 compute-0 podman[403055]: 2025-10-07 14:44:52.243793556 +0000 UTC m=+0.088049312 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 07 14:44:52 compute-0 sudo[403095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:52 compute-0 sudo[403095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:52 compute-0 sudo[403095]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:52 compute-0 sudo[403123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:44:52 compute-0 sudo[403123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:52 compute-0 sudo[403123]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:52 compute-0 sudo[403148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:52 compute-0 sudo[403148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:52 compute-0 sudo[403148]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:52 compute-0 sudo[403173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:44:52 compute-0 sudo[403173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.530 2 DEBUG nova.network.neutron [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 121 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 24 KiB/s wr, 83 op/s
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.551 2 INFO nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 1.56 seconds to deallocate network for instance.
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.604 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.605 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.628 2 DEBUG nova.compute.manager [req-f1926a45-f8bf-491d-b002-f00ab61f7e0d req-3b478405-0f6d-4e76-b7ef-77e76d9c6470 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-deleted-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:44:52 compute-0 nova_compute[259550]: 2025-10-07 14:44:52.686 2 DEBUG oslo_concurrency.processutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:44:52 compute-0 sudo[403173]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 33e4a086-9953-407e-b033-a2290e691009 does not exist
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev dd7092c9-5e7b-4a54-8f6d-fd3e6a75803a does not exist
Oct 07 14:44:52 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 71a79f65-7663-416f-aa74-259e0a0e1edc does not exist
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:44:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:44:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:44:53 compute-0 sudo[403250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:53 compute-0 sudo[403250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:53 compute-0 sudo[403250]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:53 compute-0 sudo[403275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:44:53 compute-0 sudo[403275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:53 compute-0 sudo[403275]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014197318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.145 2 DEBUG oslo_concurrency.processutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.152 2 DEBUG nova.compute.provider_tree [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:53 compute-0 sudo[403300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:53 compute-0 sudo[403300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:53 compute-0 sudo[403300]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.168 2 DEBUG nova.scheduler.client.report [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.188 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:44:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2014197318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.224 2 INFO nova.scheduler.client.report [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 489f6a90-6b26-4b8b-aa3e-095d1d8df333
Oct 07 14:44:53 compute-0 sudo[403327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:44:53 compute-0 sudo[403327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:53 compute-0 nova_compute[259550]: 2025-10-07 14:44:53.291 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.560127194 +0000 UTC m=+0.054777997 container create 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:44:53 compute-0 systemd[1]: Started libpod-conmon-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope.
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.528643768 +0000 UTC m=+0.023294601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.669039729 +0000 UTC m=+0.163690562 container init 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.677380451 +0000 UTC m=+0.172031254 container start 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:44:53 compute-0 mystifying_tu[403410]: 167 167
Oct 07 14:44:53 compute-0 systemd[1]: libpod-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope: Deactivated successfully.
Oct 07 14:44:53 compute-0 conmon[403410]: conmon 1c03e18050742a9c3871 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope/container/memory.events
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.690498539 +0000 UTC m=+0.185149362 container attach 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.690994763 +0000 UTC m=+0.185645596 container died 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ddfbfefeedf8572146f4c416ac9c073bc1793965a93f3f136979121f808990-merged.mount: Deactivated successfully.
Oct 07 14:44:53 compute-0 podman[403393]: 2025-10-07 14:44:53.73714687 +0000 UTC m=+0.231797673 container remove 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:44:53 compute-0 systemd[1]: libpod-conmon-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope: Deactivated successfully.
Oct 07 14:44:53 compute-0 podman[403433]: 2025-10-07 14:44:53.936402566 +0000 UTC m=+0.069716294 container create a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:44:53 compute-0 podman[403433]: 2025-10-07 14:44:53.888296137 +0000 UTC m=+0.021609885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:53 compute-0 systemd[1]: Started libpod-conmon-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope.
Oct 07 14:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:54 compute-0 podman[403433]: 2025-10-07 14:44:54.041546501 +0000 UTC m=+0.174860239 container init a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:44:54 compute-0 podman[403433]: 2025-10-07 14:44:54.050699713 +0000 UTC m=+0.184013441 container start a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:54 compute-0 podman[403433]: 2025-10-07 14:44:54.054463454 +0000 UTC m=+0.187777182 container attach a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:44:54 compute-0 ceph-mon[74295]: pgmap v2479: 305 pgs: 305 active+clean; 121 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 24 KiB/s wr, 83 op/s
Oct 07 14:44:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 121 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 590 KiB/s rd, 31 KiB/s wr, 77 op/s
Oct 07 14:44:54 compute-0 nova_compute[259550]: 2025-10-07 14:44:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:55 compute-0 sad_wu[403449]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:44:55 compute-0 sad_wu[403449]: --> relative data size: 1.0
Oct 07 14:44:55 compute-0 sad_wu[403449]: --> All data devices are unavailable
Oct 07 14:44:55 compute-0 systemd[1]: libpod-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Deactivated successfully.
Oct 07 14:44:55 compute-0 systemd[1]: libpod-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Consumed 1.070s CPU time.
Oct 07 14:44:55 compute-0 podman[403433]: 2025-10-07 14:44:55.187328936 +0000 UTC m=+1.320642664 container died a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77-merged.mount: Deactivated successfully.
Oct 07 14:44:55 compute-0 podman[403433]: 2025-10-07 14:44:55.249661693 +0000 UTC m=+1.382975421 container remove a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:44:55 compute-0 systemd[1]: libpod-conmon-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Deactivated successfully.
Oct 07 14:44:55 compute-0 sudo[403327]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:55 compute-0 sudo[403492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:55 compute-0 sudo[403492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:55 compute-0 sudo[403492]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:55 compute-0 sudo[403517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:44:55 compute-0 sudo[403517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:55 compute-0 sudo[403517]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:55 compute-0 sudo[403542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:55 compute-0 sudo[403542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:55 compute-0 sudo[403542]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:55 compute-0 sudo[403567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:44:55 compute-0 sudo[403567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.860756976 +0000 UTC m=+0.039081890 container create b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:55 compute-0 systemd[1]: Started libpod-conmon-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope.
Oct 07 14:44:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.843481677 +0000 UTC m=+0.021806611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.942446148 +0000 UTC m=+0.120771092 container init b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.948268402 +0000 UTC m=+0.126593316 container start b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.951315763 +0000 UTC m=+0.129640677 container attach b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:44:55 compute-0 crazy_driscoll[403647]: 167 167
Oct 07 14:44:55 compute-0 systemd[1]: libpod-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope: Deactivated successfully.
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.954478327 +0000 UTC m=+0.132803241 container died b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f48645466a88738f2876d9f4b425e97f67c615a0c3da1869072129a11c730440-merged.mount: Deactivated successfully.
Oct 07 14:44:55 compute-0 podman[403631]: 2025-10-07 14:44:55.991061359 +0000 UTC m=+0.169386273 container remove b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:44:55 compute-0 systemd[1]: libpod-conmon-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope: Deactivated successfully.
Oct 07 14:44:56 compute-0 podman[403671]: 2025-10-07 14:44:56.171534186 +0000 UTC m=+0.045761267 container create f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:44:56 compute-0 systemd[1]: Started libpod-conmon-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope.
Oct 07 14:44:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:56 compute-0 podman[403671]: 2025-10-07 14:44:56.15474085 +0000 UTC m=+0.028967951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:56 compute-0 ceph-mon[74295]: pgmap v2480: 305 pgs: 305 active+clean; 121 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 590 KiB/s rd, 31 KiB/s wr, 77 op/s
Oct 07 14:44:56 compute-0 podman[403671]: 2025-10-07 14:44:56.26044992 +0000 UTC m=+0.134677011 container init f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:56 compute-0 podman[403671]: 2025-10-07 14:44:56.267501197 +0000 UTC m=+0.141728278 container start f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:56 compute-0 podman[403671]: 2025-10-07 14:44:56.272947112 +0000 UTC m=+0.147174213 container attach f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct 07 14:44:56 compute-0 nova_compute[259550]: 2025-10-07 14:44:56.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.085 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.086 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.135 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:44:57 compute-0 interesting_carver[403687]: {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     "0": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "devices": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "/dev/loop3"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             ],
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_name": "ceph_lv0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_size": "21470642176",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "name": "ceph_lv0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "tags": {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_name": "ceph",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.crush_device_class": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.encrypted": "0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_id": "0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.vdo": "0"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             },
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "vg_name": "ceph_vg0"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         }
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     ],
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     "1": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "devices": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "/dev/loop4"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             ],
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_name": "ceph_lv1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_size": "21470642176",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "name": "ceph_lv1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "tags": {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_name": "ceph",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.crush_device_class": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.encrypted": "0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_id": "1",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.vdo": "0"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             },
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "vg_name": "ceph_vg1"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         }
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     ],
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     "2": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "devices": [
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "/dev/loop5"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             ],
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_name": "ceph_lv2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_size": "21470642176",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "name": "ceph_lv2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "tags": {
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.cluster_name": "ceph",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.crush_device_class": "",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.encrypted": "0",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osd_id": "2",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:                 "ceph.vdo": "0"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             },
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "type": "block",
Oct 07 14:44:57 compute-0 interesting_carver[403687]:             "vg_name": "ceph_vg2"
Oct 07 14:44:57 compute-0 interesting_carver[403687]:         }
Oct 07 14:44:57 compute-0 interesting_carver[403687]:     ]
Oct 07 14:44:57 compute-0 interesting_carver[403687]: }
Oct 07 14:44:57 compute-0 systemd[1]: libpod-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope: Deactivated successfully.
Oct 07 14:44:57 compute-0 podman[403671]: 2025-10-07 14:44:57.170973412 +0000 UTC m=+1.045200523 container died f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4-merged.mount: Deactivated successfully.
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.213 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.215 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.224 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.224 2 INFO nova.compute.claims [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:44:57 compute-0 podman[403671]: 2025-10-07 14:44:57.229720653 +0000 UTC m=+1.103947744 container remove f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:44:57 compute-0 systemd[1]: libpod-conmon-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope: Deactivated successfully.
Oct 07 14:44:57 compute-0 sudo[403567]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:57 compute-0 sudo[403710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:57 compute-0 sudo[403710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:57 compute-0 sudo[403710]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.353 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:57 compute-0 sudo[403735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:44:57 compute-0 sudo[403735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:57 compute-0 sudo[403735]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:57 compute-0 sudo[403761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:57 compute-0 sudo[403761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:57 compute-0 sudo[403761]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:57 compute-0 sudo[403786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:44:57 compute-0 sudo[403786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:44:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526407548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.873564447 +0000 UTC m=+0.043769064 container create b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.883 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.891 2 DEBUG nova.compute.provider_tree [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:44:57 compute-0 systemd[1]: Started libpod-conmon-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope.
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.911 2 DEBUG nova.scheduler.client.report [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.932 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.933 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:44:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.855372774 +0000 UTC m=+0.025577401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.960301293 +0000 UTC m=+0.130505900 container init b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.967162595 +0000 UTC m=+0.137367202 container start b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.971205193 +0000 UTC m=+0.141409810 container attach b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 07 14:44:57 compute-0 wonderful_wescoff[403887]: 167 167
Oct 07 14:44:57 compute-0 systemd[1]: libpod-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope: Deactivated successfully.
Oct 07 14:44:57 compute-0 podman[403869]: 2025-10-07 14:44:57.974417168 +0000 UTC m=+0.144621795 container died b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.988 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:44:57 compute-0 nova_compute[259550]: 2025-10-07 14:44:57.988 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-51aa93e43371a2fb144fa7469f6831d0cc0bb132a1ed5267db33d423510cb91a-merged.mount: Deactivated successfully.
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.014 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:44:58 compute-0 podman[403869]: 2025-10-07 14:44:58.016674081 +0000 UTC m=+0.186878678 container remove b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.032 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:44:58 compute-0 systemd[1]: libpod-conmon-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope: Deactivated successfully.
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.141 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.142 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.142 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating image(s)
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.145060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298145105, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 527, "num_deletes": 258, "total_data_size": 499393, "memory_usage": 509496, "flush_reason": "Manual Compaction"}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298150287, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 495149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51834, "largest_seqno": 52360, "table_properties": {"data_size": 492134, "index_size": 987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7177, "raw_average_key_size": 18, "raw_value_size": 485915, "raw_average_value_size": 1285, "num_data_blocks": 43, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848270, "oldest_key_time": 1759848270, "file_creation_time": 1759848298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 5276 microseconds, and 2739 cpu microseconds.
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.150334) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 495149 bytes OK
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.150356) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151606) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151624) EVENT_LOG_v1 {"time_micros": 1759848298151618, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 496310, prev total WAL file size 496310, number of live WAL files 2.
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.152601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303131' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(483KB)], [119(9808KB)]
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298152651, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10538916, "oldest_snapshot_seqno": -1}
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.173 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.202 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7220 keys, 10424762 bytes, temperature: kUnknown
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298214365, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10424762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10375677, "index_size": 29901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 188663, "raw_average_key_size": 26, "raw_value_size": 10245805, "raw_average_value_size": 1419, "num_data_blocks": 1171, "num_entries": 7220, "num_filter_entries": 7220, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.214622) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10424762 bytes
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.216076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.6 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.3) write-amplify(21.1) OK, records in: 7752, records dropped: 532 output_compression: NoCompression
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.216097) EVENT_LOG_v1 {"time_micros": 1759848298216087, "job": 72, "event": "compaction_finished", "compaction_time_micros": 61792, "compaction_time_cpu_micros": 26104, "output_level": 6, "num_output_files": 1, "total_output_size": 10424762, "num_input_records": 7752, "num_output_records": 7220, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298216291, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298217816, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.152176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:44:58 compute-0 podman[403917]: 2025-10-07 14:44:58.222224385 +0000 UTC m=+0.054237183 container create 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.233 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.239 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:58 compute-0 ceph-mon[74295]: pgmap v2481: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct 07 14:44:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1526407548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:44:58 compute-0 systemd[1]: Started libpod-conmon-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope.
Oct 07 14:44:58 compute-0 ovn_controller[151684]: 2025-10-07T14:44:58Z|01477|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct 07 14:44:58 compute-0 podman[403917]: 2025-10-07 14:44:58.199265944 +0000 UTC m=+0.031278792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:44:58 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:44:58 compute-0 podman[403917]: 2025-10-07 14:44:58.329999179 +0000 UTC m=+0.162012007 container init 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.336 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.338 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.339 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.340 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:58 compute-0 podman[403917]: 2025-10-07 14:44:58.342657566 +0000 UTC m=+0.174670364 container start 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:44:58 compute-0 podman[403917]: 2025-10-07 14:44:58.377322947 +0000 UTC m=+0.209335755 container attach 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.387 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.391 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6721439b-34d7-4282-bbcd-37424c3f2691_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.445 2 DEBUG nova.policy [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:44:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 72 op/s
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.725 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6721439b-34d7-4282-bbcd-37424c3f2691_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.790 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.884 2 DEBUG nova.objects.instance [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.916 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.917 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Ensure instance console log exists: /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.917 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.918 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.918 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:58 compute-0 nova_compute[259550]: 2025-10-07 14:44:58.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:44:59 compute-0 pensive_hellman[403981]: {
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_id": 2,
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "type": "bluestore"
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     },
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_id": 1,
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "type": "bluestore"
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     },
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_id": 0,
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:         "type": "bluestore"
Oct 07 14:44:59 compute-0 pensive_hellman[403981]:     }
Oct 07 14:44:59 compute-0 pensive_hellman[403981]: }
Oct 07 14:44:59 compute-0 systemd[1]: libpod-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Deactivated successfully.
Oct 07 14:44:59 compute-0 systemd[1]: libpod-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Consumed 1.129s CPU time.
Oct 07 14:44:59 compute-0 podman[404125]: 2025-10-07 14:44:59.526959175 +0000 UTC m=+0.023640839 container died 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067-merged.mount: Deactivated successfully.
Oct 07 14:44:59 compute-0 podman[404125]: 2025-10-07 14:44:59.589729694 +0000 UTC m=+0.086411348 container remove 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:44:59 compute-0 systemd[1]: libpod-conmon-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Deactivated successfully.
Oct 07 14:44:59 compute-0 sudo[403786]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:44:59 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:44:59 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:44:59 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 668fb5dd-f230-499a-8367-d125a9eed801 does not exist
Oct 07 14:44:59 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev be1b15d7-42ef-45f4-9da3-916d7b42d20a does not exist
Oct 07 14:44:59 compute-0 nova_compute[259550]: 2025-10-07 14:44:59.649 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Successfully created port: 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:44:59 compute-0 sudo[404140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:44:59 compute-0 sudo[404140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:59 compute-0 sudo[404140]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:59 compute-0 sudo[404165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:44:59 compute-0 sudo[404165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:44:59 compute-0 sudo[404165]: pam_unix(sudo:session): session closed for user root
Oct 07 14:44:59 compute-0 nova_compute[259550]: 2025-10-07 14:44:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.078 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:00 compute-0 ceph-mon[74295]: pgmap v2482: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 72 op/s
Oct 07 14:45:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:45:00 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:45:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 151 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 565 KiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct 07 14:45:00 compute-0 nova_compute[259550]: 2025-10-07 14:45:00.965 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Successfully updated port: 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:45:00 compute-0 nova_compute[259550]: 2025-10-07 14:45:00.982 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:00 compute-0 nova_compute[259550]: 2025-10-07 14:45:00.982 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:00 compute-0 nova_compute[259550]: 2025-10-07 14:45:00.983 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG nova.compute.manager [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG nova.compute.manager [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.224 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.766 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848286.7657456, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.766 2 INFO nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Stopped (Lifecycle Event)
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.793 2 DEBUG nova.compute.manager [None req-b3932491-802b-48df-ae99-bc6c3c17fb7d - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:01 compute-0 nova_compute[259550]: 2025-10-07 14:45:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:02 compute-0 ceph-mon[74295]: pgmap v2483: 305 pgs: 305 active+clean; 151 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 565 KiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct 07 14:45:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 169 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 07 14:45:02 compute-0 nova_compute[259550]: 2025-10-07 14:45:02.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:03 compute-0 nova_compute[259550]: 2025-10-07 14:45:03.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:04 compute-0 ceph-mon[74295]: pgmap v2484: 305 pgs: 305 active+clean; 169 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.410 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance network_info: |[{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.433 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.436 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start _get_guest_xml network_info=[{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.442 2 WARNING nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.451 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.452 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.457 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.457 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.463 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:45:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375971057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.959 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.987 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:04 compute-0 nova_compute[259550]: 2025-10-07 14:45:04.992 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3375971057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283799090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.439 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.443 2 DEBUG nova.virt.libvirt.vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:58Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.444 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.446 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.448 2 DEBUG nova.objects.instance [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.466 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <uuid>6721439b-34d7-4282-bbcd-37424c3f2691</uuid>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <name>instance-00000085</name>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-965086068</nova:name>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:45:04</nova:creationTime>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <nova:port uuid="54bf17d9-ad25-4326-b981-fb4fe6afaf7c">
Oct 07 14:45:05 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:be4b" ipVersion="6"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:be4b" ipVersion="6"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <system>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="serial">6721439b-34d7-4282-bbcd-37424c3f2691</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="uuid">6721439b-34d7-4282-bbcd-37424c3f2691</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </system>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <os>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </os>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <features>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </features>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6721439b-34d7-4282-bbcd-37424c3f2691_disk">
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6721439b-34d7-4282-bbcd-37424c3f2691_disk.config">
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:05 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:28:be:4b"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <target dev="tap54bf17d9-ad"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/console.log" append="off"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <video>
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </video>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:45:05 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:45:05 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:45:05 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:45:05 compute-0 nova_compute[259550]: </domain>
Oct 07 14:45:05 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Preparing to wait for external event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.469 2 DEBUG nova.virt.libvirt.vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:58Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.470 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.471 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.471 2 DEBUG os_vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54bf17d9-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54bf17d9-ad, col_values=(('external_ids', {'iface-id': '54bf17d9-ad25-4326-b981-fb4fe6afaf7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:be:4b', 'vm-uuid': '6721439b-34d7-4282-bbcd-37424c3f2691'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:05 compute-0 NetworkManager[44949]: <info>  [1759848305.4809] manager: (tap54bf17d9-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.487 2 INFO os_vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad')
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.551 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.551 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.552 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:28:be:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.552 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Using config drive
Oct 07 14:45:05 compute-0 nova_compute[259550]: 2025-10-07 14:45:05.576 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.048 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating config drive at /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.054 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphymbhq_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.214 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphymbhq_g" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.237 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.241 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:06 compute-0 ceph-mon[74295]: pgmap v2485: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:45:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2283799090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.606 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.607 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deleting local config drive /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config because it was imported into RBD.
Oct 07 14:45:06 compute-0 kernel: tap54bf17d9-ad: entered promiscuous mode
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.6599] manager: (tap54bf17d9-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/592)
Oct 07 14:45:06 compute-0 ovn_controller[151684]: 2025-10-07T14:45:06Z|01478|binding|INFO|Claiming lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c for this chassis.
Oct 07 14:45:06 compute-0 ovn_controller[151684]: 2025-10-07T14:45:06Z|01479|binding|INFO|54bf17d9-ad25-4326-b981-fb4fe6afaf7c: Claiming fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.672 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], port_security=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe28:be4b/64 2001:db8::f816:3eff:fe28:be4b/64', 'neutron:device_id': '6721439b-34d7-4282-bbcd-37424c3f2691', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=54bf17d9-ad25-4326-b981-fb4fe6afaf7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.673 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e bound to our chassis
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.674 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct 07 14:45:06 compute-0 ovn_controller[151684]: 2025-10-07T14:45:06Z|01480|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c ovn-installed in OVS
Oct 07 14:45:06 compute-0 ovn_controller[151684]: 2025-10-07T14:45:06Z|01481|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c up in Southbound
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[366499c8-c867-4ad3-ae6b-b56a985dc797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.686 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5308f20-41 in ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.688 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5308f20-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b19d4aa-02c9-4bed-8c62-9bcb2068d84a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.690 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[405c57e1-0309-4d9a-a2b2-bc4b82a66734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 systemd-udevd[404327]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.702 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd67542-2b49-4b0b-b1f0-c36df37ea5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 systemd-machined[214580]: New machine qemu-167-instance-00000085.
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.7096] device (tap54bf17d9-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.7108] device (tap54bf17d9-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:45:06 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a83f2a-b7c9-4dbf-adb3-5ebe6a591aff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[22f6d106-d949-4436-b2c4-2d99d59b2eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01a484bf-3f0c-4d01-a4fe-77302e1565a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.7704] manager: (tapb5308f20-40): new Veth device (/org/freedesktop/NetworkManager/Devices/593)
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.798 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5f098b-68c2-45a7-8323-1cdd749ca9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf3760c-bf5b-4cae-9444-29de2b3cb013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.8288] device (tapb5308f20-40): carrier: link connected
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[124af055-3f2b-4b58-897a-0a9f3f08f1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1032c42e-4b12-4552-b7e9-d7e031a92492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404359, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.865 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0139ffb9-16bc-4204-a056-138d52bdd31a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:c94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889039, 'tstamp': 889039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404360, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.881 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de6696d6-7ebc-4be5-b37c-e39ac7ae3e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404361, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e59e5dce-3cc1-486a-abdf-e23e370036d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.974 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[773e6cd7-0458-43df-b91a-80eced2adb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:06 compute-0 NetworkManager[44949]: <info>  [1759848306.9792] manager: (tapb5308f20-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 07 14:45:06 compute-0 kernel: tapb5308f20-40: entered promiscuous mode
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:06 compute-0 ovn_controller[151684]: 2025-10-07T14:45:06Z|01482|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct 07 14:45:06 compute-0 nova_compute[259550]: 2025-10-07 14:45:06.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.985 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b58b9e09-e21f-4635-b5d2-4c24b9f37662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.987 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:45:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.988 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'env', 'PROCESS_TAG=haproxy-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:07 compute-0 podman[404435]: 2025-10-07 14:45:07.421435063 +0000 UTC m=+0.100411941 container create 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:45:07 compute-0 podman[404435]: 2025-10-07 14:45:07.345342551 +0000 UTC m=+0.024319449 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:45:07 compute-0 systemd[1]: Started libpod-conmon-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope.
Oct 07 14:45:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7939ed48599346f2fcd08610b2ba91dee0ea97c56d4a7dfd84d89d159ce37fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:45:07 compute-0 podman[404435]: 2025-10-07 14:45:07.569389876 +0000 UTC m=+0.248366844 container init 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:45:07 compute-0 podman[404435]: 2025-10-07 14:45:07.574753569 +0000 UTC m=+0.253730487 container start 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:45:07 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : New worker (404457) forked
Oct 07 14:45:07 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : Loading success.
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.645 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.6439042, 6721439b-34d7-4282-bbcd-37424c3f2691 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.646 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Started (Lifecycle Event)
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.670 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.675 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.6442134, 6721439b-34d7-4282-bbcd-37424c3f2691 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.675 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Paused (Lifecycle Event)
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.693 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.696 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.717 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.935 2 DEBUG nova.compute.manager [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.937 2 DEBUG nova.compute.manager [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Processing event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.937 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.950 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.950 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.945736, 6721439b-34d7-4282-bbcd-37424c3f2691 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Resumed (Lifecycle Event)
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.956 2 INFO nova.virt.libvirt.driver [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance spawned successfully.
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.956 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:07 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:07.999 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.008 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.013 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.013 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.014 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.014 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.015 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.015 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.060 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.090 2 INFO nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 9.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.090 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.160 2 INFO nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 10.96 seconds to build instance.
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.178 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.323 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.324 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.324 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:45:08 compute-0 ceph-mon[74295]: pgmap v2486: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:45:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.677 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.678 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:08 compute-0 nova_compute[259550]: 2025-10-07 14:45:08.714 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:09 compute-0 ceph-mon[74295]: pgmap v2487: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.045 2 DEBUG nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.045 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.046 2 WARNING nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received unexpected event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with vm_state active and task_state None.
Oct 07 14:45:10 compute-0 podman[404466]: 2025-10-07 14:45:10.07139718 +0000 UTC m=+0.060065418 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:45:10 compute-0 podman[404467]: 2025-10-07 14:45:10.101790957 +0000 UTC m=+0.090459415 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.161 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.182 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.183 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.183 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.206 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.208 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 07 14:45:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954414701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.684 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.770 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.771 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.777 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.777 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.969 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.970 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.921688079833984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:10 compute-0 nova_compute[259550]: 2025-10-07 14:45:10.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.053 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.054 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6721439b-34d7-4282-bbcd-37424c3f2691 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.054 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.055 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.106 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280385227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.574 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.578 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.593 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:45:11 compute-0 ceph-mon[74295]: pgmap v2488: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 07 14:45:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1954414701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/280385227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.625 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:45:11 compute-0 nova_compute[259550]: 2025-10-07 14:45:11.625 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.185 2 DEBUG nova.compute.manager [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.185 2 DEBUG nova.compute.manager [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.265 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.265 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.266 2 INFO nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Terminating instance
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.267 2 DEBUG nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.511 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.514 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:45:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 523 KiB/s wr, 67 op/s
Oct 07 14:45:12 compute-0 kernel: tap3ac63514-77 (unregistering): left promiscuous mode
Oct 07 14:45:12 compute-0 NetworkManager[44949]: <info>  [1759848312.7908] device (tap3ac63514-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:12 compute-0 ovn_controller[151684]: 2025-10-07T14:45:12Z|01483|binding|INFO|Releasing lport 3ac63514-77e7-4d94-a67c-94806ca3b58b from this chassis (sb_readonly=0)
Oct 07 14:45:12 compute-0 ovn_controller[151684]: 2025-10-07T14:45:12Z|01484|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b down in Southbound
Oct 07 14:45:12 compute-0 ovn_controller[151684]: 2025-10-07T14:45:12Z|01485|binding|INFO|Removing iface tap3ac63514-77 ovn-installed in OVS
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.811 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '9', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.812 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 unbound from our chassis
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.814 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c054d6f-68ec-4f0b-9362-221001cc6b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.816 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af04e36e-d471-4ddf-9ac9-bde30ce35f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.816 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace which is not needed anymore
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:12 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 07 14:45:12 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000083.scope: Consumed 14.692s CPU time.
Oct 07 14:45:12 compute-0 systemd-machined[214580]: Machine qemu-166-instance-00000083 terminated.
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.899 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.900 2 DEBUG nova.objects.instance [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'resources' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.923 2 DEBUG nova.virt.libvirt.vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:44:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:37Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.925 2 DEBUG nova.network.os_vif_util [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.926 2 DEBUG nova.network.os_vif_util [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.926 2 DEBUG os_vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac63514-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:45:12 compute-0 nova_compute[259550]: 2025-10-07 14:45:12.938 2 INFO os_vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')
Oct 07 14:45:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : haproxy version is 2.8.14-c23fe91
Oct 07 14:45:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : path to executable is /usr/sbin/haproxy
Oct 07 14:45:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [WARNING]  (402784) : Exiting Master process...
Oct 07 14:45:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [ALERT]    (402784) : Current worker (402786) exited with code 143 (Terminated)
Oct 07 14:45:12 compute-0 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [WARNING]  (402784) : All workers exited. Exiting... (0)
Oct 07 14:45:13 compute-0 systemd[1]: libpod-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope: Deactivated successfully.
Oct 07 14:45:13 compute-0 podman[404582]: 2025-10-07 14:45:13.007010859 +0000 UTC m=+0.082927195 container died 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5-userdata-shm.mount: Deactivated successfully.
Oct 07 14:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d869432837bfb3671cd5715ce91e9446229f7e47546d9d9b9ecd83497bca9d04-merged.mount: Deactivated successfully.
Oct 07 14:45:13 compute-0 podman[404582]: 2025-10-07 14:45:13.05031712 +0000 UTC m=+0.126233446 container cleanup 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:45:13 compute-0 systemd[1]: libpod-conmon-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope: Deactivated successfully.
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:13 compute-0 podman[404630]: 2025-10-07 14:45:13.120381642 +0000 UTC m=+0.045543611 container remove 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.127 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d287f618-b7c1-4912-8d1a-b4823dc9d91d]: (4, ('Tue Oct  7 02:45:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5)\n6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5\nTue Oct  7 02:45:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5)\n6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ecce13-7855-4aeb-9b32-ae74708c63cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.130 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:13 compute-0 kernel: tap7c054d6f-60: left promiscuous mode
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[490ea24b-be20-4995-8f06-8ff18ae6f405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.178 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1a0db0-9950-467d-9467-94e179633768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[960e5abe-413a-4120-a0da-6234ba0c1f4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.195 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa60b555-2e6d-413a-8ded-b6510337a229]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885814, 'reachable_time': 28954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404644, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.198 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:45:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c054d6f\x2d68ec\x2d4f0b\x2d9362\x2d221001cc6b67.mount: Deactivated successfully.
Oct 07 14:45:13 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.198 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4d38e81d-8717-48b2-ad27-961ff0f1c962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.442 2 INFO nova.virt.libvirt.driver [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting instance files /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.444 2 INFO nova.virt.libvirt.driver [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deletion of /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del complete
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.500 2 INFO nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 1.23 seconds to destroy the instance on the hypervisor.
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.501 2 DEBUG oslo.service.loopingcall [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.501 2 DEBUG nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:45:13 compute-0 nova_compute[259550]: 2025-10-07 14:45:13.502 2 DEBUG nova.network.neutron [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:45:13 compute-0 ceph-mon[74295]: pgmap v2489: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 523 KiB/s wr, 67 op/s
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.307 2 DEBUG nova.compute.manager [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.307 2 DEBUG nova.compute.manager [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.396 2 DEBUG nova.network.neutron [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.443 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 0.94 seconds to deallocate network for instance.
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.466 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.466 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:14.516 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.530 2 INFO nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.530 2 DEBUG nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 113 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 94 op/s
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.614 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.630 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.631 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.632 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:14 compute-0 nova_compute[259550]: 2025-10-07 14:45:14.752 2 DEBUG oslo_concurrency.processutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517420342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.228 2 DEBUG oslo_concurrency.processutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.236 2 DEBUG nova.compute.provider_tree [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.319 2 DEBUG nova.scheduler.client.report [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.583 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.674 2 INFO nova.scheduler.client.report [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Deleted allocations for instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3
Oct 07 14:45:15 compute-0 ceph-mon[74295]: pgmap v2490: 305 pgs: 305 active+clean; 113 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 94 op/s
Oct 07 14:45:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/517420342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:15 compute-0 nova_compute[259550]: 2025-10-07 14:45:15.936 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:16 compute-0 nova_compute[259550]: 2025-10-07 14:45:16.424 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:16 compute-0 nova_compute[259550]: 2025-10-07 14:45:16.430 2 DEBUG nova.compute.manager [req-17ea9564-6d64-4522-ab2a-8bdf1557703c req-ec972ed5-28b3-4b5a-9371-e094740b0b62 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-deleted-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 07 14:45:17 compute-0 ceph-mon[74295]: pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 07 14:45:17 compute-0 nova_compute[259550]: 2025-10-07 14:45:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.177 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.178 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.197 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.262 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.263 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.271 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.272 2 INFO nova.compute.claims [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.395 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 07 14:45:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839292411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.860 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.866 2 DEBUG nova.compute.provider_tree [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.898 2 DEBUG nova.scheduler.client.report [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.958 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:18 compute-0 nova_compute[259550]: 2025-10-07 14:45:18.958 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.022 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.023 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.042 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.062 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.154 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.155 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.156 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating image(s)
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.176 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.202 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.222 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.226 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.300 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.301 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.302 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.302 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.325 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.331 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.416 2 DEBUG nova.policy [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.765 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:19 compute-0 ceph-mon[74295]: pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 07 14:45:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1839292411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.837 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.951 2 DEBUG nova.objects.instance [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.971 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.972 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Ensure instance console log exists: /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.972 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.973 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:19 compute-0 nova_compute[259550]: 2025-10-07 14:45:19.973 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 108 op/s
Oct 07 14:45:20 compute-0 nova_compute[259550]: 2025-10-07 14:45:20.880 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Successfully created port: 706c4bba-81cd-4c03-ac73-d8225f4ea15f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:45:20 compute-0 ovn_controller[151684]: 2025-10-07T14:45:20Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:be:4b 10.100.0.9
Oct 07 14:45:20 compute-0 ovn_controller[151684]: 2025-10-07T14:45:20Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:be:4b 10.100.0.9
Oct 07 14:45:21 compute-0 ovn_controller[151684]: 2025-10-07T14:45:21Z|01486|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct 07 14:45:21 compute-0 nova_compute[259550]: 2025-10-07 14:45:21.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:21 compute-0 ceph-mon[74295]: pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 108 op/s
Oct 07 14:45:21 compute-0 nova_compute[259550]: 2025-10-07 14:45:21.939 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Successfully updated port: 706c4bba-81cd-4c03-ac73-d8225f4ea15f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:45:21 compute-0 nova_compute[259550]: 2025-10-07 14:45:21.953 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:21 compute-0 nova_compute[259550]: 2025-10-07 14:45:21.954 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:21 compute-0 nova_compute[259550]: 2025-10-07 14:45:21.954 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:45:22 compute-0 nova_compute[259550]: 2025-10-07 14:45:22.034 2 DEBUG nova.compute.manager [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:22 compute-0 nova_compute[259550]: 2025-10-07 14:45:22.034 2 DEBUG nova.compute.manager [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:22 compute-0 nova_compute[259550]: 2025-10-07 14:45:22.035 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:22 compute-0 nova_compute[259550]: 2025-10-07 14:45:22.164 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 116 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:45:22
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct 07 14:45:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:45:22 compute-0 nova_compute[259550]: 2025-10-07 14:45:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:23 compute-0 podman[404856]: 2025-10-07 14:45:23.067907062 +0000 UTC m=+0.056643628 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 07 14:45:23 compute-0 podman[404857]: 2025-10-07 14:45:23.104176186 +0000 UTC m=+0.087501318 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:45:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:45:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.285 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance network_info: |[{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.314 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.316 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start _get_guest_xml network_info=[{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.320 2 WARNING nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.324 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.325 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.332 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.332 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.333 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.333 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.336 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.338 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490112040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.822 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:23 compute-0 ceph-mon[74295]: pgmap v2494: 305 pgs: 305 active+clean; 116 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 07 14:45:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/490112040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.848 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:23 compute-0 nova_compute[259550]: 2025-10-07 14:45:23.852 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825657784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.346 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.347 2 DEBUG nova.virt.libvirt.vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:19Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.348 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.349 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.350 2 DEBUG nova.objects.instance [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.365 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <uuid>5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</uuid>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <name>instance-00000086</name>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-229487908</nova:name>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:45:23</nova:creationTime>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <nova:port uuid="706c4bba-81cd-4c03-ac73-d8225f4ea15f">
Oct 07 14:45:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="serial">5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="uuid">5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk">
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config">
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:57:3a:c8"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <target dev="tap706c4bba-81"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/console.log" append="off"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:45:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:45:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:45:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:45:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:45:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Preparing to wait for external event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.368 2 DEBUG nova.virt.libvirt.vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:19Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.368 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.369 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.369 2 DEBUG os_vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap706c4bba-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap706c4bba-81, col_values=(('external_ids', {'iface-id': '706c4bba-81cd-4c03-ac73-d8225f4ea15f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:3a:c8', 'vm-uuid': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:24 compute-0 NetworkManager[44949]: <info>  [1759848324.3761] manager: (tap706c4bba-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.381 2 INFO os_vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81')
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.438 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.439 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.439 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:57:3a:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.440 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Using config drive
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.460 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 164 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 140 op/s
Oct 07 14:45:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/825657784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.992 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:24 compute-0 nova_compute[259550]: 2025-10-07 14:45:24.993 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.020 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.132 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating config drive at /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.137 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyb75_pmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.280 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyb75_pmj" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.305 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.309 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.537 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.538 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deleting local config drive /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config because it was imported into RBD.
Oct 07 14:45:25 compute-0 kernel: tap706c4bba-81: entered promiscuous mode
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.5964] manager: (tap706c4bba-81): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 ovn_controller[151684]: 2025-10-07T14:45:25Z|01487|binding|INFO|Claiming lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f for this chassis.
Oct 07 14:45:25 compute-0 ovn_controller[151684]: 2025-10-07T14:45:25Z|01488|binding|INFO|706c4bba-81cd-4c03-ac73-d8225f4ea15f: Claiming fa:16:3e:57:3a:c8 10.100.0.13
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.611 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:3a:c8 10.100.0.13'], port_security=['fa:16:3e:57:3a:c8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '497680c8-b146-4d33-ad78-53e98937483b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=706c4bba-81cd-4c03-ac73-d8225f4ea15f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.612 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 706c4bba-81cd-4c03-ac73-d8225f4ea15f in datapath 8a790910-04e4-4ed9-9209-184147e62b8b bound to our chassis
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.613 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b
Oct 07 14:45:25 compute-0 ovn_controller[151684]: 2025-10-07T14:45:25Z|01489|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f ovn-installed in OVS
Oct 07 14:45:25 compute-0 ovn_controller[151684]: 2025-10-07T14:45:25Z|01490|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f up in Southbound
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.628 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7a17cc-525f-486e-9b16-d17f6da1f2e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.629 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a790910-01 in ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.633 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a790910-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.634 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c456e1ff-47b1-46e0-b80d-0e08cecfe251]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 systemd-udevd[405037]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bff3ecad-3edd-4c7a-989a-172327bc9e2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.648 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0dd468-e97d-4b6f-9d02-d67338b4dcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.6510] device (tap706c4bba-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.6525] device (tap706c4bba-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:45:25 compute-0 systemd-machined[214580]: New machine qemu-168-instance-00000086.
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[981977d7-c175-4822-b67e-9ec1a6f990a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.690 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2216b25d-510d-47e7-a820-d685ad6a7898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.6966] manager: (tap8a790910-00): new Veth device (/org/freedesktop/NetworkManager/Devices/597)
Oct 07 14:45:25 compute-0 systemd-udevd[405041]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c27e757-7762-42d1-bfa1-7927941cdbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.728 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af8057e9-a11b-4020-a108-52cce910ab0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.732 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8492a89f-95e2-4223-aa42-e08e8c371dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.7572] device (tap8a790910-00): carrier: link connected
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab512807-7442-4d8d-bac7-b87a08c62ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c272c7c5-5a34-47cb-aaf5-2b7ec1684687]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405070, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.799 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1ab51e-a4c1-4a81-b7e9-35c1a6b0bf03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:8cb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890932, 'tstamp': 890932}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405071, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[635ba17c-3b6a-435f-a803-e66c58b1eb6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405072, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ceph-mon[74295]: pgmap v2495: 305 pgs: 305 active+clean; 164 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 140 op/s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.854 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[797eb0e1-4266-498d-b82c-cbb122e3ee7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a682635-c981-49c3-80b0-188e335fd475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.927 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.928 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.928 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 NetworkManager[44949]: <info>  [1759848325.9305] manager: (tap8a790910-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Oct 07 14:45:25 compute-0 kernel: tap8a790910-00: entered promiscuous mode
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.933 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 ovn_controller[151684]: 2025-10-07T14:45:25Z|01491|binding|INFO|Releasing lport 473a96c8-dafe-4956-8316-8a82bc1c870e from this chassis (sb_readonly=0)
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.936 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[780e492b-fe69-4ebc-a2d6-2e481441f029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.938 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-8a790910-04e4-4ed9-9209-184147e62b8b
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 8a790910-04e4-4ed9-9209-184147e62b8b
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:45:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.938 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'env', 'PROCESS_TAG=haproxy-8a790910-04e4-4ed9-9209-184147e62b8b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a790910-04e4-4ed9-9209-184147e62b8b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:45:25 compute-0 nova_compute[259550]: 2025-10-07 14:45:25.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:26 compute-0 ovn_controller[151684]: 2025-10-07T14:45:26Z|01492|binding|INFO|Releasing lport 473a96c8-dafe-4956-8316-8a82bc1c870e from this chassis (sb_readonly=0)
Oct 07 14:45:26 compute-0 ovn_controller[151684]: 2025-10-07T14:45:26Z|01493|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG nova.compute.manager [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.055 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.055 2 DEBUG nova.compute.manager [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Processing event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:26 compute-0 podman[405146]: 2025-10-07 14:45:26.318037271 +0000 UTC m=+0.028419826 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.545 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.546 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.5447032, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.547 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Started (Lifecycle Event)
Oct 07 14:45:26 compute-0 podman[405146]: 2025-10-07 14:45:26.547642484 +0000 UTC m=+0.258025019 container create c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.551 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.558 2 INFO nova.virt.libvirt.driver [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance spawned successfully.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.559 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:45:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.573 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.578 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.580 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.580 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.590 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.591 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.545007, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.591 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Paused (Lifecycle Event)
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.614 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.5492656, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.615 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Resumed (Lifecycle Event)
Oct 07 14:45:26 compute-0 systemd[1]: Started libpod-conmon-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.639 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.642 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.653 2 INFO nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 7.50 seconds to spawn the instance on the hypervisor.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.653 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36405587fcc29b64184ee9f2074d129e3df4d671c99cc0a15f619b8e476e7194/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.662 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.705 2 INFO nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 8.46 seconds to build instance.
Oct 07 14:45:26 compute-0 nova_compute[259550]: 2025-10-07 14:45:26.728 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:26 compute-0 podman[405146]: 2025-10-07 14:45:26.734140441 +0000 UTC m=+0.444522996 container init c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:45:26 compute-0 podman[405146]: 2025-10-07 14:45:26.740134261 +0000 UTC m=+0.450516796 container start c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:45:26 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : New worker (405167) forked
Oct 07 14:45:26 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : Loading success.
Oct 07 14:45:27 compute-0 nova_compute[259550]: 2025-10-07 14:45:27.897 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848312.896735, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:27 compute-0 nova_compute[259550]: 2025-10-07 14:45:27.899 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Stopped (Lifecycle Event)
Oct 07 14:45:27 compute-0 nova_compute[259550]: 2025-10-07 14:45:27.917 2 DEBUG nova.compute.manager [None req-d540ec5c-04e6-4b64-8c0f-8119cac6d71a - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:27 compute-0 ceph-mon[74295]: pgmap v2496: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.138 2 DEBUG nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.139 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.139 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.140 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.140 2 DEBUG nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:45:28 compute-0 nova_compute[259550]: 2025-10-07 14:45:28.140 2 WARNING nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.
Oct 07 14:45:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 07 14:45:29 compute-0 nova_compute[259550]: 2025-10-07 14:45:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:30 compute-0 ceph-mon[74295]: pgmap v2497: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 07 14:45:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 07 14:45:32 compute-0 ceph-mon[74295]: pgmap v2498: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.383 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.383 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.404 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.496 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.497 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.504 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.504 2 INFO nova.compute.claims [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.659 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:45:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:45:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:45:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011073049952790258 of space, bias 1.0, pg target 0.3321914985837077 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:45:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.975 2 DEBUG nova.compute.manager [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.976 2 DEBUG nova.compute.manager [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.977 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.977 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:32 compute-0 nova_compute[259550]: 2025-10-07 14:45:32.978 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082682108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.098 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.105 2 DEBUG nova.compute.provider_tree [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.126 2 DEBUG nova.scheduler.client.report [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.153 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.154 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:45:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:45:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:45:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4082682108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.219 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.220 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.242 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.258 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.334 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.335 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.336 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating image(s)
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.358 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.381 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.408 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.411 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.443 2 DEBUG nova.policy [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.478 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.479 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.480 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.480 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.506 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.512 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b598361e-dd69-448e-ade6-931a3d8c84cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.888 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b598361e-dd69-448e-ade6-931a3d8c84cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:33 compute-0 nova_compute[259550]: 2025-10-07 14:45:33.954 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.044 2 DEBUG nova.objects.instance [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.059 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.059 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Ensure instance console log exists: /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.214 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Successfully created port: fef175ac-72ee-4716-9970-ff3dccaea9f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:45:34 compute-0 ceph-mon[74295]: pgmap v2499: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.366 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.367 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.391 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 190 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.900 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Successfully updated port: fef175ac-72ee-4716-9970-ff3dccaea9f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG nova.compute.manager [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:34 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG nova.compute.manager [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:35 compute-0 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:35 compute-0 nova_compute[259550]: 2025-10-07 14:45:35.910 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:45:36 compute-0 ceph-mon[74295]: pgmap v2500: 305 pgs: 305 active+clean; 190 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Oct 07 14:45:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.948 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.972 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.972 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance network_info: |[{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.973 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.973 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.976 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start _get_guest_xml network_info=[{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.981 2 WARNING nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.987 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.987 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.993 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.993 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:45:37 compute-0 nova_compute[259550]: 2025-10-07 14:45:37.999 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.178 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.179 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.202 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.269 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:38 compute-0 ceph-mon[74295]: pgmap v2501: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.270 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.277 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.278 2 INFO nova.compute.claims [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.433 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629840250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.478 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.501 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.505 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 07 14:45:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:45:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152527710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.903 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.909 2 DEBUG nova.compute.provider_tree [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.938 2 DEBUG nova.scheduler.client.report [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.959 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.961 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:45:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014971068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.983 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.986 2 DEBUG nova.virt.libvirt.vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.986 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.988 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:38 compute-0 nova_compute[259550]: 2025-10-07 14:45:38.989 2 DEBUG nova.objects.instance [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.012 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <uuid>b598361e-dd69-448e-ade6-931a3d8c84cb</uuid>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <name>instance-00000087</name>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1906442223</nova:name>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:45:37</nova:creationTime>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <nova:port uuid="fef175ac-72ee-4716-9970-ff3dccaea9f9">
Oct 07 14:45:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef5:9cff" ipVersion="6"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef5:9cff" ipVersion="6"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <system>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="serial">b598361e-dd69-448e-ade6-931a3d8c84cb</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="uuid">b598361e-dd69-448e-ade6-931a3d8c84cb</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </system>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <os>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </os>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <features>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </features>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b598361e-dd69-448e-ade6-931a3d8c84cb_disk">
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config">
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:39 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:f5:9c:ff"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <target dev="tapfef175ac-72"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/console.log" append="off"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <video>
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </video>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:45:39 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:45:39 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:45:39 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:45:39 compute-0 nova_compute[259550]: </domain>
Oct 07 14:45:39 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.014 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Preparing to wait for external event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.015 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.016 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.016 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.017 2 DEBUG nova.virt.libvirt.vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.017 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.020 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.020 2 DEBUG os_vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.026 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.027 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfef175ac-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfef175ac-72, col_values=(('external_ids', {'iface-id': 'fef175ac-72ee-4716-9970-ff3dccaea9f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:9c:ff', 'vm-uuid': 'b598361e-dd69-448e-ade6-931a3d8c84cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:39 compute-0 NetworkManager[44949]: <info>  [1759848339.0340] manager: (tapfef175ac-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.042 2 INFO os_vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72')
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.055 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.083 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.114 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.114 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.115 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:f5:9c:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.115 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Using config drive
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.136 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.201 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.202 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.203 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating image(s)
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.220 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.242 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.271 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3629840250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4152527710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:45:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2014971068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.276 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:39 compute-0 ovn_controller[151684]: 2025-10-07T14:45:39Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:3a:c8 10.100.0.13
Oct 07 14:45:39 compute-0 ovn_controller[151684]: 2025-10-07T14:45:39Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:3a:c8 10.100.0.13
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.368 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.369 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.370 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.371 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.396 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.401 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.745 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.803 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.895 2 DEBUG nova.objects.instance [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.910 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.911 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Ensure instance console log exists: /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.911 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.912 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.912 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:39 compute-0 nova_compute[259550]: 2025-10-07 14:45:39.960 2 DEBUG nova.policy [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.102 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating config drive at /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.108 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiomsf2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.277 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiomsf2g" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:40 compute-0 ceph-mon[74295]: pgmap v2502: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.308 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.312 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.485 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.486 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deleting local config drive /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config because it was imported into RBD.
Oct 07 14:45:40 compute-0 kernel: tapfef175ac-72: entered promiscuous mode
Oct 07 14:45:40 compute-0 NetworkManager[44949]: <info>  [1759848340.5492] manager: (tapfef175ac-72): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Oct 07 14:45:40 compute-0 ovn_controller[151684]: 2025-10-07T14:45:40Z|01494|binding|INFO|Claiming lport fef175ac-72ee-4716-9970-ff3dccaea9f9 for this chassis.
Oct 07 14:45:40 compute-0 ovn_controller[151684]: 2025-10-07T14:45:40Z|01495|binding|INFO|fef175ac-72ee-4716-9970-ff3dccaea9f9: Claiming fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.562 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], port_security=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fef5:9cff/64 2001:db8::f816:3eff:fef5:9cff/64', 'neutron:device_id': 'b598361e-dd69-448e-ade6-931a3d8c84cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fef175ac-72ee-4716-9970-ff3dccaea9f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.563 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fef175ac-72ee-4716-9970-ff3dccaea9f9 in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e bound to our chassis
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.564 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct 07 14:45:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 227 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct 07 14:45:40 compute-0 ovn_controller[151684]: 2025-10-07T14:45:40Z|01496|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 ovn-installed in OVS
Oct 07 14:45:40 compute-0 ovn_controller[151684]: 2025-10-07T14:45:40Z|01497|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 up in Southbound
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06a986a1-8791-4e72-bdce-3e6cbdaa3b25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 systemd-machined[214580]: New machine qemu-169-instance-00000087.
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.643 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7e3553-b459-47af-8f49-210135e8c0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.647 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39166131-f29c-430a-bd4b-c2fc38d047c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Oct 07 14:45:40 compute-0 systemd-udevd[405733]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:45:40 compute-0 podman[405687]: 2025-10-07 14:45:40.68199523 +0000 UTC m=+0.090266470 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Oct 07 14:45:40 compute-0 NetworkManager[44949]: <info>  [1759848340.6895] device (tapfef175ac-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:45:40 compute-0 NetworkManager[44949]: <info>  [1759848340.6915] device (tapfef175ac-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[951a0130-533b-497d-b6a2-b04a907805f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 podman[405689]: 2025-10-07 14:45:40.710799776 +0000 UTC m=+0.119073496 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[403d515d-218f-4f9f-abb1-06ce77b7a858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405743, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4e6b2-88a9-4d5e-9713-c6956159c0a4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889050, 'tstamp': 889050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405745, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889053, 'tstamp': 889053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405745, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.742 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:40 compute-0 nova_compute[259550]: 2025-10-07 14:45:40.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.746 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.031 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.033 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.063 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.254 2 DEBUG nova.compute.manager [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.255 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.255 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.256 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.256 2 DEBUG nova.compute.manager [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Processing event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.482 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.480767, b598361e-dd69-448e-ade6-931a3d8c84cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.482 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Started (Lifecycle Event)
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.484 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.488 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.492 2 INFO nova.virt.libvirt.driver [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance spawned successfully.
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.492 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.515 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.519 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.521 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.521 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.564 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.565 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.4810252, b598361e-dd69-448e-ade6-931a3d8c84cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.565 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Paused (Lifecycle Event)
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.593 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.596 2 INFO nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 8.26 seconds to spawn the instance on the hypervisor.
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.596 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.602 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.4879448, b598361e-dd69-448e-ade6-931a3d8c84cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.602 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Resumed (Lifecycle Event)
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.623 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Successfully created port: 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.631 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.635 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.668 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.682 2 INFO nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 9.22 seconds to build instance.
Oct 07 14:45:41 compute-0 nova_compute[259550]: 2025-10-07 14:45:41.700 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:42 compute-0 ceph-mon[74295]: pgmap v2503: 305 pgs: 305 active+clean; 227 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct 07 14:45:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 256 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1004 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct 07 14:45:42 compute-0 nova_compute[259550]: 2025-10-07 14:45:42.745 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Successfully updated port: 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:45:42 compute-0 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:42 compute-0 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:42 compute-0 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:45:42 compute-0 nova_compute[259550]: 2025-10-07 14:45:42.959 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.341 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.343 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.343 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.344 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.345 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.345 2 WARNING nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received unexpected event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with vm_state active and task_state None.
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.346 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.347 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:43 compute-0 nova_compute[259550]: 2025-10-07 14:45:43.348 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:44 compute-0 ceph-mon[74295]: pgmap v2504: 305 pgs: 305 active+clean; 256 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1004 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.483 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance network_info: |[{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.520 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.522 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start _get_guest_xml network_info=[{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.527 2 WARNING nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.534 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.535 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.544 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.545 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.546 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.546 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.547 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.547 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.548 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.548 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.549 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.549 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.551 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:45:44 compute-0 nova_compute[259550]: 2025-10-07 14:45:44.556 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct 07 14:45:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156965025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.090 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.123 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.129 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2156965025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:45:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478757834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.653 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.655 2 DEBUG nova.virt.libvirt.vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.655 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.656 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.658 2 DEBUG nova.objects.instance [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.695 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <uuid>36592d37-1eb3-431f-8cd3-aa0d320b2e86</uuid>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <name>instance-00000088</name>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-1103641822</nova:name>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:45:44</nova:creationTime>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <nova:port uuid="44d8ec34-7fbc-440a-bab4-b9b8d29ff249">
Oct 07 14:45:45 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <system>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="serial">36592d37-1eb3-431f-8cd3-aa0d320b2e86</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="uuid">36592d37-1eb3-431f-8cd3-aa0d320b2e86</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </system>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <os>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </os>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <features>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </features>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk">
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config">
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </source>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:45:45 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:0c:97:e5"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <target dev="tap44d8ec34-7f"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/console.log" append="off"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <video>
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </video>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:45:45 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:45:45 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:45:45 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:45:45 compute-0 nova_compute[259550]: </domain>
Oct 07 14:45:45 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.697 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Preparing to wait for external event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.697 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.698 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.698 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.699 2 DEBUG nova.virt.libvirt.vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.699 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.700 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.700 2 DEBUG os_vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44d8ec34-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44d8ec34-7f, col_values=(('external_ids', {'iface-id': '44d8ec34-7fbc-440a-bab4-b9b8d29ff249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:97:e5', 'vm-uuid': '36592d37-1eb3-431f-8cd3-aa0d320b2e86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:45 compute-0 NetworkManager[44949]: <info>  [1759848345.7124] manager: (tap44d8ec34-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.719 2 INFO os_vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f')
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.775 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.776 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.776 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:0c:97:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.776 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Using config drive
Oct 07 14:45:45 compute-0 nova_compute[259550]: 2025-10-07 14:45:45.800 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.201 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating config drive at /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.207 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0llzze4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:46 compute-0 ceph-mon[74295]: pgmap v2505: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct 07 14:45:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1478757834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.369 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0llzze4n" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.405 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.410 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.461 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.462 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.485 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.506 2 DEBUG nova.compute.manager [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.507 2 DEBUG nova.compute.manager [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 189 op/s
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.584 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.585 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deleting local config drive /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config because it was imported into RBD.
Oct 07 14:45:46 compute-0 kernel: tap44d8ec34-7f: entered promiscuous mode
Oct 07 14:45:46 compute-0 NetworkManager[44949]: <info>  [1759848346.6341] manager: (tap44d8ec34-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:46 compute-0 ovn_controller[151684]: 2025-10-07T14:45:46Z|01498|binding|INFO|Claiming lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for this chassis.
Oct 07 14:45:46 compute-0 ovn_controller[151684]: 2025-10-07T14:45:46Z|01499|binding|INFO|44d8ec34-7fbc-440a-bab4-b9b8d29ff249: Claiming fa:16:3e:0c:97:e5 10.100.0.3
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.646 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:97:e5 10.100.0.3'], port_security=['fa:16:3e:0c:97:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '36592d37-1eb3-431f-8cd3-aa0d320b2e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c2506e0-287c-4f8f-b28e-99b2bb4e4542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=44d8ec34-7fbc-440a-bab4-b9b8d29ff249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.648 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 in datapath 8a790910-04e4-4ed9-9209-184147e62b8b bound to our chassis
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b
Oct 07 14:45:46 compute-0 ovn_controller[151684]: 2025-10-07T14:45:46Z|01500|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 ovn-installed in OVS
Oct 07 14:45:46 compute-0 ovn_controller[151684]: 2025-10-07T14:45:46Z|01501|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 up in Southbound
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c34e5cc1-a8a2-4c72-89f1-820370e2c847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 systemd-udevd[405925]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:45:46 compute-0 NetworkManager[44949]: <info>  [1759848346.6978] device (tap44d8ec34-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:45:46 compute-0 NetworkManager[44949]: <info>  [1759848346.6990] device (tap44d8ec34-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:45:46 compute-0 systemd-machined[214580]: New machine qemu-170-instance-00000088.
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.703 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[55d84597-5474-4e62-8b95-ec0c5793dc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.706 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2e2fd9-d8c5-4e6f-be34-306848a50577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.742 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b23e60c3-1fcd-4a33-af4e-eb903bc31501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.763 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e058d444-8682-43c4-8d56-8310982c7dd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405932, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5be632a5-9085-46d2-9e8c-7e1c45d57d95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890945, 'tstamp': 890945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405937, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890948, 'tstamp': 890948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405937, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.781 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:46 compute-0 nova_compute[259550]: 2025-10-07 14:45:46.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.784 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:45:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.088 2 DEBUG nova.compute.manager [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.088 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG nova.compute.manager [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Processing event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.692 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6916924, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.693 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Started (Lifecycle Event)
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.696 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.706 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.710 2 INFO nova.virt.libvirt.driver [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance spawned successfully.
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.710 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.730 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.736 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.743 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.743 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.744 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.773 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.774 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6918135, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.774 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Paused (Lifecycle Event)
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.812 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.815 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6990612, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.815 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Resumed (Lifecycle Event)
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.827 2 INFO nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 8.63 seconds to spawn the instance on the hypervisor.
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.828 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.875 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.896 2 INFO nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 9.65 seconds to build instance.
Oct 07 14:45:47 compute-0 nova_compute[259550]: 2025-10-07 14:45:47.917 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:48 compute-0 nova_compute[259550]: 2025-10-07 14:45:48.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:48 compute-0 ceph-mon[74295]: pgmap v2506: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 189 op/s
Oct 07 14:45:48 compute-0 nova_compute[259550]: 2025-10-07 14:45:48.513 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:48 compute-0 nova_compute[259550]: 2025-10-07 14:45:48.514 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:48 compute-0 nova_compute[259550]: 2025-10-07 14:45:48.538 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.188 2 DEBUG nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.190 2 DEBUG nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:45:49 compute-0 nova_compute[259550]: 2025-10-07 14:45:49.190 2 WARNING nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received unexpected event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with vm_state active and task_state None.
Oct 07 14:45:50 compute-0 ceph-mon[74295]: pgmap v2507: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 07 14:45:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct 07 14:45:50 compute-0 nova_compute[259550]: 2025-10-07 14:45:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:52 compute-0 ceph-mon[74295]: pgmap v2508: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct 07 14:45:52 compute-0 nova_compute[259550]: 2025-10-07 14:45:52.596 2 DEBUG nova.compute.manager [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:45:52 compute-0 nova_compute[259550]: 2025-10-07 14:45:52.596 2 DEBUG nova.compute.manager [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:45:52 compute-0 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:45:52 compute-0 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:45:52 compute-0 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:45:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:45:53 compute-0 nova_compute[259550]: 2025-10-07 14:45:53.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:53 compute-0 nova_compute[259550]: 2025-10-07 14:45:53.940 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:45:53 compute-0 nova_compute[259550]: 2025-10-07 14:45:53.941 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:45:53 compute-0 nova_compute[259550]: 2025-10-07 14:45:53.961 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:45:54 compute-0 podman[405981]: 2025-10-07 14:45:54.09901322 +0000 UTC m=+0.082110144 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:45:54 compute-0 podman[405982]: 2025-10-07 14:45:54.131544614 +0000 UTC m=+0.109846401 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct 07 14:45:54 compute-0 ceph-mon[74295]: pgmap v2509: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct 07 14:45:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Oct 07 14:45:55 compute-0 nova_compute[259550]: 2025-10-07 14:45:55.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:55 compute-0 nova_compute[259550]: 2025-10-07 14:45:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:56 compute-0 ceph-mon[74295]: pgmap v2510: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Oct 07 14:45:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 30 KiB/s wr, 105 op/s
Oct 07 14:45:57 compute-0 nova_compute[259550]: 2025-10-07 14:45:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:58 compute-0 nova_compute[259550]: 2025-10-07 14:45:58.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:45:58 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 07 14:45:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:45:58 compute-0 ceph-mon[74295]: pgmap v2511: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 30 KiB/s wr, 105 op/s
Oct 07 14:45:58 compute-0 ovn_controller[151684]: 2025-10-07T14:45:58Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:9c:ff 10.100.0.5
Oct 07 14:45:58 compute-0 ovn_controller[151684]: 2025-10-07T14:45:58Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:9c:ff 10.100.0.5
Oct 07 14:45:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Oct 07 14:45:58 compute-0 nova_compute[259550]: 2025-10-07 14:45:58.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:59 compute-0 sudo[406025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:45:59 compute-0 sudo[406025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:45:59 compute-0 sudo[406025]: pam_unix(sudo:session): session closed for user root
Oct 07 14:45:59 compute-0 sudo[406050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:45:59 compute-0 sudo[406050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:45:59 compute-0 sudo[406050]: pam_unix(sudo:session): session closed for user root
Oct 07 14:45:59 compute-0 sudo[406075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:45:59 compute-0 sudo[406075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:45:59 compute-0 sudo[406075]: pam_unix(sudo:session): session closed for user root
Oct 07 14:45:59 compute-0 nova_compute[259550]: 2025-10-07 14:45:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:59 compute-0 nova_compute[259550]: 2025-10-07 14:45:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:45:59 compute-0 nova_compute[259550]: 2025-10-07 14:45:59.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:46:00 compute-0 sudo[406100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:46:00 compute-0 sudo[406100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.078 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:00 compute-0 ceph-mon[74295]: pgmap v2512: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Oct 07 14:46:00 compute-0 podman[406198]: 2025-10-07 14:46:00.567633918 +0000 UTC m=+0.071091441 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:46:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 317 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct 07 14:46:00 compute-0 podman[406198]: 2025-10-07 14:46:00.710606048 +0000 UTC m=+0.214063551 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:46:00 compute-0 nova_compute[259550]: 2025-10-07 14:46:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:01 compute-0 sudo[406100]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:46:01 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:46:01 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:01 compute-0 sudo[406353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:01 compute-0 sudo[406353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:01 compute-0 sudo[406353]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:01 compute-0 sudo[406378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:46:01 compute-0 sudo[406378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:01 compute-0 sudo[406378]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:01 compute-0 sudo[406403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:01 compute-0 sudo[406403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:01 compute-0 sudo[406403]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:01 compute-0 sudo[406428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:46:01 compute-0 sudo[406428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:01 compute-0 nova_compute[259550]: 2025-10-07 14:46:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:02 compute-0 sudo[406428]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7f44f9c6-6201-4589-bd4d-49faf7eff2d0 does not exist
Oct 07 14:46:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a0ffc880-6633-4f52-8834-548f86b4a498 does not exist
Oct 07 14:46:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev eedd7a50-7723-4978-9da4-9468ae34395f does not exist
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:46:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:46:02 compute-0 sudo[406484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:02 compute-0 sudo[406484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:02 compute-0 sudo[406484]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:02 compute-0 ceph-mon[74295]: pgmap v2513: 305 pgs: 305 active+clean; 317 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:46:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:46:02 compute-0 sudo[406509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:46:02 compute-0 sudo[406509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:02 compute-0 sudo[406509]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:02 compute-0 sudo[406534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:02 compute-0 sudo[406534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:02 compute-0 sudo[406534]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:02 compute-0 sudo[406559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:46:02 compute-0 sudo[406559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:46:02 compute-0 podman[406624]: 2025-10-07 14:46:02.909118015 +0000 UTC m=+0.094908174 container create 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:46:02 compute-0 podman[406624]: 2025-10-07 14:46:02.844536579 +0000 UTC m=+0.030326768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:02 compute-0 systemd[1]: Started libpod-conmon-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope.
Oct 07 14:46:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:03 compute-0 podman[406624]: 2025-10-07 14:46:03.087203459 +0000 UTC m=+0.272993648 container init 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:46:03 compute-0 podman[406624]: 2025-10-07 14:46:03.100374219 +0000 UTC m=+0.286164378 container start 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:46:03 compute-0 systemd[1]: libpod-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope: Deactivated successfully.
Oct 07 14:46:03 compute-0 great_dewdney[406640]: 167 167
Oct 07 14:46:03 compute-0 conmon[406640]: conmon 755894ed99e581743bcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope/container/memory.events
Oct 07 14:46:03 compute-0 podman[406624]: 2025-10-07 14:46:03.117754901 +0000 UTC m=+0.303545080 container attach 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:46:03 compute-0 podman[406624]: 2025-10-07 14:46:03.118882621 +0000 UTC m=+0.304672790 container died 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:46:03 compute-0 nova_compute[259550]: 2025-10-07 14:46:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-186ed782a6c8c480bd70c76b73d9d0e7953d185ab183d83db0aaab9ef9e3bc7e-merged.mount: Deactivated successfully.
Oct 07 14:46:03 compute-0 podman[406624]: 2025-10-07 14:46:03.242750093 +0000 UTC m=+0.428540262 container remove 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:46:03 compute-0 systemd[1]: libpod-conmon-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope: Deactivated successfully.
Oct 07 14:46:03 compute-0 podman[406665]: 2025-10-07 14:46:03.444126186 +0000 UTC m=+0.044480504 container create ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:46:03 compute-0 systemd[1]: Started libpod-conmon-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope.
Oct 07 14:46:03 compute-0 podman[406665]: 2025-10-07 14:46:03.424312689 +0000 UTC m=+0.024667027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:03 compute-0 podman[406665]: 2025-10-07 14:46:03.5624085 +0000 UTC m=+0.162762838 container init ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:46:03 compute-0 podman[406665]: 2025-10-07 14:46:03.573554927 +0000 UTC m=+0.173909245 container start ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:46:03 compute-0 podman[406665]: 2025-10-07 14:46:03.579255078 +0000 UTC m=+0.179609426 container attach ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:46:04 compute-0 ceph-mon[74295]: pgmap v2514: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:46:04 compute-0 ovn_controller[151684]: 2025-10-07T14:46:04Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:97:e5 10.100.0.3
Oct 07 14:46:04 compute-0 ovn_controller[151684]: 2025-10-07T14:46:04Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:97:e5 10.100.0.3
Oct 07 14:46:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 07 14:46:04 compute-0 cranky_lovelace[406681]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:46:04 compute-0 cranky_lovelace[406681]: --> relative data size: 1.0
Oct 07 14:46:04 compute-0 cranky_lovelace[406681]: --> All data devices are unavailable
Oct 07 14:46:04 compute-0 systemd[1]: libpod-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Deactivated successfully.
Oct 07 14:46:04 compute-0 systemd[1]: libpod-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Consumed 1.002s CPU time.
Oct 07 14:46:04 compute-0 podman[406710]: 2025-10-07 14:46:04.694284496 +0000 UTC m=+0.026053604 container died ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e-merged.mount: Deactivated successfully.
Oct 07 14:46:04 compute-0 podman[406710]: 2025-10-07 14:46:04.751324722 +0000 UTC m=+0.083093810 container remove ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:46:04 compute-0 systemd[1]: libpod-conmon-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Deactivated successfully.
Oct 07 14:46:04 compute-0 sudo[406559]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:04 compute-0 sudo[406725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:04 compute-0 sudo[406725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:04 compute-0 sudo[406725]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:04 compute-0 sudo[406750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:46:04 compute-0 sudo[406750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:04 compute-0 sudo[406750]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:04 compute-0 sudo[406775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:04 compute-0 sudo[406775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:04 compute-0 sudo[406775]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:05 compute-0 sudo[406800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:46:05 compute-0 sudo[406800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.398101464 +0000 UTC m=+0.045925732 container create 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:46:05 compute-0 systemd[1]: Started libpod-conmon-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope.
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.380767862 +0000 UTC m=+0.028592060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.496211201 +0000 UTC m=+0.144035429 container init 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.502944691 +0000 UTC m=+0.150768859 container start 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.506027292 +0000 UTC m=+0.153851520 container attach 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:46:05 compute-0 great_booth[406878]: 167 167
Oct 07 14:46:05 compute-0 systemd[1]: libpod-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope: Deactivated successfully.
Oct 07 14:46:05 compute-0 conmon[406878]: conmon 6f4bbc641f7c670d4f20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope/container/memory.events
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.509023552 +0000 UTC m=+0.156847730 container died 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d21e11ec8698bf9cf2c7e8cb0a4a28b3e08633dd689d14b35c1edc7fe9731f5d-merged.mount: Deactivated successfully.
Oct 07 14:46:05 compute-0 podman[406862]: 2025-10-07 14:46:05.546914989 +0000 UTC m=+0.194739167 container remove 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:46:05 compute-0 systemd[1]: libpod-conmon-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope: Deactivated successfully.
Oct 07 14:46:05 compute-0 nova_compute[259550]: 2025-10-07 14:46:05.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:05 compute-0 podman[406903]: 2025-10-07 14:46:05.740819112 +0000 UTC m=+0.050534024 container create 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:46:05 compute-0 systemd[1]: Started libpod-conmon-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope.
Oct 07 14:46:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:05 compute-0 podman[406903]: 2025-10-07 14:46:05.725486215 +0000 UTC m=+0.035201157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:05 compute-0 podman[406903]: 2025-10-07 14:46:05.82383175 +0000 UTC m=+0.133546682 container init 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:46:05 compute-0 podman[406903]: 2025-10-07 14:46:05.830547338 +0000 UTC m=+0.140262260 container start 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:46:05 compute-0 podman[406903]: 2025-10-07 14:46:05.83626457 +0000 UTC m=+0.145979482 container attach 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:46:06 compute-0 ceph-mon[74295]: pgmap v2515: 305 pgs: 305 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 07 14:46:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct 07 14:46:06 compute-0 sharp_volhard[406919]: {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     "0": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "devices": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "/dev/loop3"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             ],
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_name": "ceph_lv0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_size": "21470642176",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "name": "ceph_lv0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "tags": {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_name": "ceph",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.crush_device_class": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.encrypted": "0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_id": "0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.vdo": "0"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             },
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "vg_name": "ceph_vg0"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         }
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     ],
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     "1": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "devices": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "/dev/loop4"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             ],
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_name": "ceph_lv1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_size": "21470642176",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "name": "ceph_lv1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "tags": {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_name": "ceph",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.crush_device_class": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.encrypted": "0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_id": "1",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.vdo": "0"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             },
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "vg_name": "ceph_vg1"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         }
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     ],
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     "2": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "devices": [
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "/dev/loop5"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             ],
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_name": "ceph_lv2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_size": "21470642176",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "name": "ceph_lv2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "tags": {
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.cluster_name": "ceph",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.crush_device_class": "",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.encrypted": "0",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osd_id": "2",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:                 "ceph.vdo": "0"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             },
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "type": "block",
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:             "vg_name": "ceph_vg2"
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:         }
Oct 07 14:46:06 compute-0 sharp_volhard[406919]:     ]
Oct 07 14:46:06 compute-0 sharp_volhard[406919]: }
Oct 07 14:46:06 compute-0 systemd[1]: libpod-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope: Deactivated successfully.
Oct 07 14:46:06 compute-0 podman[406903]: 2025-10-07 14:46:06.660826707 +0000 UTC m=+0.970541619 container died 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd-merged.mount: Deactivated successfully.
Oct 07 14:46:06 compute-0 podman[406903]: 2025-10-07 14:46:06.877508116 +0000 UTC m=+1.187223028 container remove 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:46:06 compute-0 systemd[1]: libpod-conmon-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope: Deactivated successfully.
Oct 07 14:46:06 compute-0 sudo[406800]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:06 compute-0 sudo[406940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:07 compute-0 sudo[406940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:07 compute-0 sudo[406940]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:07 compute-0 sudo[406965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:46:07 compute-0 sudo[406965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:07 compute-0 sudo[406965]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:07 compute-0 sudo[406990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:07 compute-0 sudo[406990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:07 compute-0 sudo[406990]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:07 compute-0 sudo[407015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:46:07 compute-0 sudo[407015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.52548913 +0000 UTC m=+0.039798679 container create f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:46:07 compute-0 systemd[1]: Started libpod-conmon-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope.
Oct 07 14:46:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.506906016 +0000 UTC m=+0.021215585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.607993693 +0000 UTC m=+0.122303272 container init f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.615642746 +0000 UTC m=+0.129952285 container start f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.620389573 +0000 UTC m=+0.134699152 container attach f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:46:07 compute-0 ecstatic_engelbart[407096]: 167 167
Oct 07 14:46:07 compute-0 systemd[1]: libpod-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope: Deactivated successfully.
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.623987148 +0000 UTC m=+0.138296707 container died f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef955d2f46904b757e6033e7da2bbe169199f2078693365f1fb99736884883b5-merged.mount: Deactivated successfully.
Oct 07 14:46:07 compute-0 podman[407080]: 2025-10-07 14:46:07.66921019 +0000 UTC m=+0.183519779 container remove f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:46:07 compute-0 systemd[1]: libpod-conmon-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope: Deactivated successfully.
Oct 07 14:46:07 compute-0 podman[407120]: 2025-10-07 14:46:07.858968054 +0000 UTC m=+0.054445049 container create 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:46:07 compute-0 systemd[1]: Started libpod-conmon-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope.
Oct 07 14:46:07 compute-0 podman[407120]: 2025-10-07 14:46:07.835409907 +0000 UTC m=+0.030886822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:46:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:46:07 compute-0 podman[407120]: 2025-10-07 14:46:07.956275681 +0000 UTC m=+0.151752626 container init 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 14:46:07 compute-0 podman[407120]: 2025-10-07 14:46:07.965130146 +0000 UTC m=+0.160607051 container start 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:46:07 compute-0 podman[407120]: 2025-10-07 14:46:07.973646622 +0000 UTC m=+0.169123567 container attach 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:46:07 compute-0 nova_compute[259550]: 2025-10-07 14:46:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:07 compute-0 nova_compute[259550]: 2025-10-07 14:46:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:46:07 compute-0 nova_compute[259550]: 2025-10-07 14:46:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:46:08 compute-0 nova_compute[259550]: 2025-10-07 14:46:08.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:08 compute-0 ceph-mon[74295]: pgmap v2516: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct 07 14:46:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct 07 14:46:08 compute-0 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:08 compute-0 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:08 compute-0 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:46:08 compute-0 nova_compute[259550]: 2025-10-07 14:46:08.920 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:46:09 compute-0 sharp_franklin[407137]: {
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_id": 2,
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "type": "bluestore"
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     },
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_id": 1,
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "type": "bluestore"
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     },
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_id": 0,
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:         "type": "bluestore"
Oct 07 14:46:09 compute-0 sharp_franklin[407137]:     }
Oct 07 14:46:09 compute-0 sharp_franklin[407137]: }
Oct 07 14:46:09 compute-0 systemd[1]: libpod-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Deactivated successfully.
Oct 07 14:46:09 compute-0 systemd[1]: libpod-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Consumed 1.155s CPU time.
Oct 07 14:46:09 compute-0 podman[407170]: 2025-10-07 14:46:09.187946798 +0000 UTC m=+0.031878987 container died 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:46:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999-merged.mount: Deactivated successfully.
Oct 07 14:46:09 compute-0 podman[407170]: 2025-10-07 14:46:09.263157138 +0000 UTC m=+0.107089297 container remove 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:46:09 compute-0 systemd[1]: libpod-conmon-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Deactivated successfully.
Oct 07 14:46:09 compute-0 sudo[407015]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:46:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:46:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0ff82a8e-793c-4e9b-a8c9-f7c5759b9bd8 does not exist
Oct 07 14:46:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f8bca1d4-5e82-496e-88b3-ca3321896d90 does not exist
Oct 07 14:46:09 compute-0 sudo[407186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:46:09 compute-0 sudo[407186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:09 compute-0 sudo[407186]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:09 compute-0 sudo[407211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:46:09 compute-0 sudo[407211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:46:09 compute-0 sudo[407211]: pam_unix(sudo:session): session closed for user root
Oct 07 14:46:10 compute-0 ceph-mon[74295]: pgmap v2517: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct 07 14:46:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:46:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct 07 14:46:10 compute-0 nova_compute[259550]: 2025-10-07 14:46:10.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:11 compute-0 podman[407237]: 2025-10-07 14:46:11.092187624 +0000 UTC m=+0.071503832 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:46:11 compute-0 podman[407236]: 2025-10-07 14:46:11.094397203 +0000 UTC m=+0.074335637 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.295 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.322 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.323 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:12 compute-0 ceph-mon[74295]: pgmap v2518: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.353 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 584 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Oct 07 14:46:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391210274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.844 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.956 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.957 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.960 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.961 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.966 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.966 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.971 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:12 compute-0 nova_compute[259550]: 2025-10-07 14:46:12.972 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.161 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2853MB free_disk=59.80624771118164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6721439b-34d7-4282-bbcd-37424c3f2691 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b598361e-dd69-448e-ade6-931a3d8c84cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 36592d37-1eb3-431f-8cd3-aa0d320b2e86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:46:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1391210274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.378 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249140781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.843 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.853 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.874 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.902 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:46:13 compute-0 nova_compute[259550]: 2025-10-07 14:46:13.903 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:14 compute-0 ceph-mon[74295]: pgmap v2519: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 584 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Oct 07 14:46:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3249140781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 07 14:46:15 compute-0 nova_compute[259550]: 2025-10-07 14:46:15.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:16 compute-0 ceph-mon[74295]: pgmap v2520: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 07 14:46:16 compute-0 nova_compute[259550]: 2025-10-07 14:46:16.562 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 704 KiB/s wr, 44 op/s
Oct 07 14:46:16 compute-0 nova_compute[259550]: 2025-10-07 14:46:16.594 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:18 compute-0 nova_compute[259550]: 2025-10-07 14:46:18.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:18 compute-0 ceph-mon[74295]: pgmap v2521: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 704 KiB/s wr, 44 op/s
Oct 07 14:46:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 07 14:46:20 compute-0 ceph-mon[74295]: pgmap v2522: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 07 14:46:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 07 14:46:20 compute-0 nova_compute[259550]: 2025-10-07 14:46:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.663 2 DEBUG nova.compute.manager [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG nova.compute.manager [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.665 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.747 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.749 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.750 2 INFO nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Terminating instance
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.750 2 DEBUG nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:46:21 compute-0 kernel: tapfef175ac-72 (unregistering): left promiscuous mode
Oct 07 14:46:21 compute-0 NetworkManager[44949]: <info>  [1759848381.8010] device (tapfef175ac-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:46:21 compute-0 ovn_controller[151684]: 2025-10-07T14:46:21Z|01502|binding|INFO|Releasing lport fef175ac-72ee-4716-9970-ff3dccaea9f9 from this chassis (sb_readonly=0)
Oct 07 14:46:21 compute-0 ovn_controller[151684]: 2025-10-07T14:46:21Z|01503|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 down in Southbound
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 ovn_controller[151684]: 2025-10-07T14:46:21Z|01504|binding|INFO|Removing iface tapfef175ac-72 ovn-installed in OVS
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.819 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], port_security=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fef5:9cff/64 2001:db8::f816:3eff:fef5:9cff/64', 'neutron:device_id': 'b598361e-dd69-448e-ade6-931a3d8c84cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fef175ac-72ee-4716-9970-ff3dccaea9f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.824 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fef175ac-72ee-4716-9970-ff3dccaea9f9 in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e unbound from our chassis
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a98c9a-e29a-4950-b862-99717ea11eeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.877 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9c8ca6-8a24-45dd-8a8a-f18540b16eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 07 14:46:21 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 18.232s CPU time.
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.882 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e389a431-1383-47b9-be2f-52a77984cbec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 systemd-machined[214580]: Machine qemu-169-instance-00000087 terminated.
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.909 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56f2d332-d5bd-4f23-8ed2-8f8bb7e05016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.927 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cae84e81-d335-4377-b01d-8944b1be0bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407333, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.946 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41ace20e-f684-4c9c-8554-51327cfc4ab9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889050, 'tstamp': 889050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407334, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889053, 'tstamp': 889053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407334, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:21 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.988 2 INFO nova.virt.libvirt.driver [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance destroyed successfully.
Oct 07 14:46:21 compute-0 nova_compute[259550]: 2025-10-07 14:46:21.989 2 DEBUG nova.objects.instance [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.001 2 DEBUG nova.virt.libvirt.vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:41Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.002 2 DEBUG nova.network.os_vif_util [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.003 2 DEBUG nova.network.os_vif_util [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.003 2 DEBUG os_vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfef175ac-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.013 2 INFO os_vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72')
Oct 07 14:46:22 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 07 14:46:22 compute-0 ceph-mon[74295]: pgmap v2523: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.414 2 INFO nova.virt.libvirt.driver [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deleting instance files /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb_del
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.415 2 INFO nova.virt.libvirt.driver [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deletion of /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb_del complete
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.466 2 INFO nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG oslo.service.loopingcall [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:46:22 compute-0 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG nova.network.neutron [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:46:22
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'volumes', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 07 14:46:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:23 compute-0 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.870 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.870 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:23 compute-0 nova_compute[259550]: 2025-10-07 14:46:23.872 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:46:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:24.113 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:24.115 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:46:24 compute-0 ceph-mon[74295]: pgmap v2524: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 07 14:46:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 16 op/s
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.607 2 DEBUG nova.network.neutron [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.637 2 INFO nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 2.17 seconds to deallocate network for instance.
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.699 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.699 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.828 2 DEBUG oslo_concurrency.processutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.882 2 INFO nova.compute.manager [None req-1cb980d6-c83b-42bf-aa5b-ec5eb306eed2 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output
Oct 07 14:46:24 compute-0 nova_compute[259550]: 2025-10-07 14:46:24.888 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:46:25 compute-0 podman[407368]: 2025-10-07 14:46:25.104815854 +0000 UTC m=+0.092582832 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:46:25 compute-0 podman[407378]: 2025-10-07 14:46:25.112699564 +0000 UTC m=+0.099153527 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:46:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434801302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.356 2 DEBUG oslo_concurrency.processutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.363 2 DEBUG nova.compute.provider_tree [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.384 2 DEBUG nova.scheduler.client.report [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:46:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3434801302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.418 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.455 2 INFO nova.scheduler.client.report [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance b598361e-dd69-448e-ade6-931a3d8c84cb
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.519 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.535 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.536 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.563 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.931 2 DEBUG nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.932 2 WARNING nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.971 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.971 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.973 2 WARNING nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received unexpected event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with vm_state deleted and task_state None.
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.973 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-deleted-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.973 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:25 compute-0 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:26 compute-0 ceph-mon[74295]: pgmap v2525: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 16 op/s
Oct 07 14:46:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Oct 07 14:46:26 compute-0 nova_compute[259550]: 2025-10-07 14:46:26.965 2 INFO nova.compute.manager [None req-2043385e-e005-4e1e-b6ef-1ddf16bcb290 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output
Oct 07 14:46:26 compute-0 nova_compute[259550]: 2025-10-07 14:46:26.970 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:46:27 compute-0 nova_compute[259550]: 2025-10-07 14:46:27.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:27 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:27.116 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.094 2 DEBUG nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.095 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.095 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.096 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.096 2 DEBUG nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.096 2 WARNING nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.
Oct 07 14:46:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.330 2 DEBUG nova.compute.manager [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.331 2 DEBUG nova.compute.manager [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.331 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.332 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.332 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:28 compute-0 ceph-mon[74295]: pgmap v2526: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.461 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.462 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.463 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.464 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.465 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.467 2 INFO nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Terminating instance
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.469 2 DEBUG nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:46:28 compute-0 kernel: tap54bf17d9-ad (unregistering): left promiscuous mode
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.522 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.523 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:28 compute-0 NetworkManager[44949]: <info>  [1759848388.5261] device (tap54bf17d9-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:46:28 compute-0 ovn_controller[151684]: 2025-10-07T14:46:28Z|01505|binding|INFO|Releasing lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c from this chassis (sb_readonly=0)
Oct 07 14:46:28 compute-0 ovn_controller[151684]: 2025-10-07T14:46:28Z|01506|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c down in Southbound
Oct 07 14:46:28 compute-0 ovn_controller[151684]: 2025-10-07T14:46:28Z|01507|binding|INFO|Removing iface tap54bf17d9-ad ovn-installed in OVS
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.541 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], port_security=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe28:be4b/64 2001:db8::f816:3eff:fe28:be4b/64', 'neutron:device_id': '6721439b-34d7-4282-bbcd-37424c3f2691', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=54bf17d9-ad25-4326-b981-fb4fe6afaf7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.542 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e unbound from our chassis
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.544 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.545 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c75f240-18d3-40f4-b562-c1e8ec0f175a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.545 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e namespace which is not needed anymore
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.554 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 KiB/s wr, 29 op/s
Oct 07 14:46:28 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 07 14:46:28 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 16.699s CPU time.
Oct 07 14:46:28 compute-0 systemd-machined[214580]: Machine qemu-167-instance-00000085 terminated.
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : haproxy version is 2.8.14-c23fe91
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : path to executable is /usr/sbin/haproxy
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : Exiting Master process...
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : Exiting Master process...
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [ALERT]    (404455) : Current worker (404457) exited with code 143 (Terminated)
Oct 07 14:46:28 compute-0 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : All workers exited. Exiting... (0)
Oct 07 14:46:28 compute-0 systemd[1]: libpod-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope: Deactivated successfully.
Oct 07 14:46:28 compute-0 podman[407458]: 2025-10-07 14:46:28.680459706 +0000 UTC m=+0.043055966 container died 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.703 2 INFO nova.virt.libvirt.driver [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance destroyed successfully.
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.704 2 DEBUG nova.objects.instance [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee-userdata-shm.mount: Deactivated successfully.
Oct 07 14:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7939ed48599346f2fcd08610b2ba91dee0ea97c56d4a7dfd84d89d159ce37fa-merged.mount: Deactivated successfully.
Oct 07 14:46:28 compute-0 podman[407458]: 2025-10-07 14:46:28.723268274 +0000 UTC m=+0.085864534 container cleanup 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.724 2 DEBUG nova.virt.libvirt.vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:08Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.724 2 DEBUG nova.network.os_vif_util [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.726 2 DEBUG nova.network.os_vif_util [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.726 2 DEBUG os_vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54bf17d9-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.736 2 INFO os_vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad')
Oct 07 14:46:28 compute-0 systemd[1]: libpod-conmon-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope: Deactivated successfully.
Oct 07 14:46:28 compute-0 podman[407496]: 2025-10-07 14:46:28.813977396 +0000 UTC m=+0.056678338 container remove 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3075aad-aca7-4c41-837c-5f500a944a57]: (4, ('Tue Oct  7 02:46:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e (5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee)\n5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee\nTue Oct  7 02:46:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e (5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee)\n5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[96044539-de61-4b7e-968c-f42d3f0b97f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.823 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:28 compute-0 kernel: tapb5308f20-40: left promiscuous mode
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 nova_compute[259550]: 2025-10-07 14:46:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.844 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ded75-a976-4518-b02e-e409d2db23ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.889 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5dfee-cd88-47a7-b026-767c5f52d167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.890 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e3baea-ad4e-4161-a4a0-26ece632dd8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb8858c-812a-4cfd-abbc-07fcd03424f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889032, 'reachable_time': 34764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407529, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:28 compute-0 systemd[1]: run-netns-ovnmeta\x2db5308f20\x2d4ad6\x2d4f12\x2da55c\x2d6d0b55a0cb4e.mount: Deactivated successfully.
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.915 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:46:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.915 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[64ae2793-f2a8-4469-8237-9ed9b3c92e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.193 2 INFO nova.virt.libvirt.driver [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deleting instance files /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691_del
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.194 2 INFO nova.virt.libvirt.driver [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deletion of /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691_del complete
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.260 2 INFO nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG oslo.service.loopingcall [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG nova.network.neutron [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.605 2 INFO nova.compute.manager [None req-c3cf273d-9b0a-400b-805e-c260ef5c81bc 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output
Oct 07 14:46:29 compute-0 nova_compute[259550]: 2025-10-07 14:46:29.611 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:46:30 compute-0 ceph-mon[74295]: pgmap v2527: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 KiB/s wr, 29 op/s
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.427 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.428 2 WARNING nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.430 2 WARNING nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.542 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:30 compute-0 nova_compute[259550]: 2025-10-07 14:46:30.545 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 36 op/s
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.256 2 DEBUG nova.network.neutron [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.300 2 INFO nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 3.04 seconds to deallocate network for instance.
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.382 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.383 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:32 compute-0 ceph-mon[74295]: pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 36 op/s
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.484 2 DEBUG oslo_concurrency.processutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.571 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.571 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.572 2 INFO nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Terminating instance
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.573 2 DEBUG nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 07 14:46:32 compute-0 kernel: tap44d8ec34-7f (unregistering): left promiscuous mode
Oct 07 14:46:32 compute-0 NetworkManager[44949]: <info>  [1759848392.6345] device (tap44d8ec34-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:46:32 compute-0 ovn_controller[151684]: 2025-10-07T14:46:32Z|01508|binding|INFO|Releasing lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 from this chassis (sb_readonly=0)
Oct 07 14:46:32 compute-0 ovn_controller[151684]: 2025-10-07T14:46:32Z|01509|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 down in Southbound
Oct 07 14:46:32 compute-0 ovn_controller[151684]: 2025-10-07T14:46:32Z|01510|binding|INFO|Removing iface tap44d8ec34-7f ovn-installed in OVS
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.681 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.682 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.689 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:97:e5 10.100.0.3'], port_security=['fa:16:3e:0c:97:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '36592d37-1eb3-431f-8cd3-aa0d320b2e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c2506e0-287c-4f8f-b28e-99b2bb4e4542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=44d8ec34-7fbc-440a-bab4-b9b8d29ff249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.690 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 in datapath 8a790910-04e4-4ed9-9209-184147e62b8b unbound from our chassis
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.691 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.682 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.697 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.697 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.698 2 WARNING nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received unexpected event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with vm_state deleted and task_state None.
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-deleted-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.699 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.699 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.717 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0084a6e7-d41b-47e3-8b25-0969043ed4c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015212169124829069 of space, bias 1.0, pg target 0.45636507374487206 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:46:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:46:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:46:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:46:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:46:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.757 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b446ef1c-1829-4559-98f2-8c0d535d2194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.760 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e10819de-a06a-4169-be00-fe8f27622b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 07 14:46:32 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 19.169s CPU time.
Oct 07 14:46:32 compute-0 systemd-machined[214580]: Machine qemu-170-instance-00000088 terminated.
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.824 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9b66af-0a5e-495c-b737-80cc84ab6ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.840 2 INFO nova.virt.libvirt.driver [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance destroyed successfully.
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.841 2 DEBUG nova.objects.instance [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8b81fb-436d-4e25-aeb9-47c1fdfb2b7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407570, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.862 2 DEBUG nova.virt.libvirt.vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:47Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.862 2 DEBUG nova.network.os_vif_util [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.863 2 DEBUG nova.network.os_vif_util [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.863 2 DEBUG os_vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44d8ec34-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.873 2 INFO os_vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f')
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1238902-2537-4b93-a74c-48c1879dee8e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890945, 'tstamp': 890945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407571, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890948, 'tstamp': 890948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407571, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.875 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.879 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/199723312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.992 2 DEBUG oslo_concurrency.processutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:32 compute-0 nova_compute[259550]: 2025-10-07 14:46:32.997 2 DEBUG nova.compute.provider_tree [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.018 2 DEBUG nova.scheduler.client.report [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.040 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.058 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.059 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.069 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.069 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.084 2 INFO nova.scheduler.client.report [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 6721439b-34d7-4282-bbcd-37424c3f2691
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.088 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.097 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.115 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.115 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.150 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:33 compute-0 ceph-mon[74295]: pgmap v2529: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 07 14:46:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:46:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:46:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/199723312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.743 2 INFO nova.virt.libvirt.driver [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deleting instance files /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86_del
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.744 2 INFO nova.virt.libvirt.driver [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deletion of /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86_del complete
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.816 2 INFO nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG oslo.service.loopingcall [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:46:33 compute-0 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG nova.network.neutron [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:46:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 143 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 19 KiB/s wr, 76 op/s
Oct 07 14:46:34 compute-0 nova_compute[259550]: 2025-10-07 14:46:34.611 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:34 compute-0 nova_compute[259550]: 2025-10-07 14:46:34.612 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:34 compute-0 nova_compute[259550]: 2025-10-07 14:46:34.633 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.058 2 DEBUG nova.network.neutron [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.084 2 INFO nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 1.27 seconds to deallocate network for instance.
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.134 2 DEBUG nova.compute.manager [req-48ef7a39-060d-4b77-a316-2952606bc25b req-39d2e024-0a8b-4cd3-9475-eb9393f99c2d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-deleted-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.154 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.155 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.218 2 DEBUG nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.220 2 WARNING nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received unexpected event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with vm_state deleted and task_state None.
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.222 2 DEBUG oslo_concurrency.processutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126728353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:35 compute-0 ceph-mon[74295]: pgmap v2530: 305 pgs: 305 active+clean; 143 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 19 KiB/s wr, 76 op/s
Oct 07 14:46:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1126728353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.654 2 DEBUG oslo_concurrency.processutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.662 2 DEBUG nova.compute.provider_tree [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.681 2 DEBUG nova.scheduler.client.report [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.710 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.743 2 INFO nova.scheduler.client.report [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 36592d37-1eb3-431f-8cd3-aa0d320b2e86
Oct 07 14:46:35 compute-0 nova_compute[259550]: 2025-10-07 14:46:35.816 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 71 op/s
Oct 07 14:46:36 compute-0 nova_compute[259550]: 2025-10-07 14:46:36.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848381.986516, b598361e-dd69-448e-ade6-931a3d8c84cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:46:36 compute-0 nova_compute[259550]: 2025-10-07 14:46:36.987 2 INFO nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Stopped (Lifecycle Event)
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.021 2 DEBUG nova.compute.manager [None req-969980e2-1d86-4e9d-803f-7f2e00cd9506 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.102 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.103 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.103 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.104 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.105 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.106 2 INFO nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Terminating instance
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.107 2 DEBUG nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:46:37 compute-0 kernel: tap706c4bba-81 (unregistering): left promiscuous mode
Oct 07 14:46:37 compute-0 NetworkManager[44949]: <info>  [1759848397.2078] device (tap706c4bba-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:46:37 compute-0 ovn_controller[151684]: 2025-10-07T14:46:37Z|01511|binding|INFO|Releasing lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f from this chassis (sb_readonly=0)
Oct 07 14:46:37 compute-0 ovn_controller[151684]: 2025-10-07T14:46:37Z|01512|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f down in Southbound
Oct 07 14:46:37 compute-0 ovn_controller[151684]: 2025-10-07T14:46:37Z|01513|binding|INFO|Removing iface tap706c4bba-81 ovn-installed in OVS
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.235 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:3a:c8 10.100.0.13'], port_security=['fa:16:3e:57:3a:c8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '8', 'neutron:security_group_ids': '497680c8-b146-4d33-ad78-53e98937483b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=706c4bba-81cd-4c03-ac73-d8225f4ea15f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 706c4bba-81cd-4c03-ac73-d8225f4ea15f in datapath 8a790910-04e4-4ed9-9209-184147e62b8b unbound from our chassis
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a790910-04e4-4ed9-9209-184147e62b8b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f4d2d7-7c9d-47fa-a601-1b5ae12926f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b namespace which is not needed anymore
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 07 14:46:37 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 15.179s CPU time.
Oct 07 14:46:37 compute-0 systemd-machined[214580]: Machine qemu-168-instance-00000086 terminated.
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.338 2 DEBUG nova.compute.manager [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG nova.compute.manager [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.340 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.344 2 INFO nova.virt.libvirt.driver [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance destroyed successfully.
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.344 2 DEBUG nova.objects.instance [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.363 2 DEBUG nova.virt.libvirt.vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.363 2 DEBUG nova.network.os_vif_util [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.364 2 DEBUG nova.network.os_vif_util [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.365 2 DEBUG os_vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap706c4bba-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.374 2 INFO os_vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81')
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : haproxy version is 2.8.14-c23fe91
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : path to executable is /usr/sbin/haproxy
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : Exiting Master process...
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : Exiting Master process...
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [ALERT]    (405165) : Current worker (405167) exited with code 143 (Terminated)
Oct 07 14:46:37 compute-0 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : All workers exited. Exiting... (0)
Oct 07 14:46:37 compute-0 systemd[1]: libpod-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope: Deactivated successfully.
Oct 07 14:46:37 compute-0 podman[407648]: 2025-10-07 14:46:37.412886327 +0000 UTC m=+0.055699271 container died c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78-userdata-shm.mount: Deactivated successfully.
Oct 07 14:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-36405587fcc29b64184ee9f2074d129e3df4d671c99cc0a15f619b8e476e7194-merged.mount: Deactivated successfully.
Oct 07 14:46:37 compute-0 podman[407648]: 2025-10-07 14:46:37.485616941 +0000 UTC m=+0.128429875 container cleanup c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:46:37 compute-0 systemd[1]: libpod-conmon-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope: Deactivated successfully.
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.541 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.542 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.544 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:46:37 compute-0 podman[407698]: 2025-10-07 14:46:37.578094029 +0000 UTC m=+0.068588755 container remove c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.587 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6bfe9d-b246-4eab-be49-a6e43278a1d2]: (4, ('Tue Oct  7 02:46:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b (c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78)\nc3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78\nTue Oct  7 02:46:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b (c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78)\nc3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a5cd98-4459-4a5c-9fe0-ea729b15668a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.590 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:46:37 compute-0 kernel: tap8a790910-00: left promiscuous mode
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.597 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[552e52bf-cbf9-4078-bd5b-aafdb0402a79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[954c5a5d-ce8b-4193-b683-a0931ab2826c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cec5436-8f1f-4f13-bf62-cc151c7873f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd51bfe-de0c-4f2b-94ea-42b203301f80]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890925, 'reachable_time': 30535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407713, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d8a790910\x2d04e4\x2d4ed9\x2d9209\x2d184147e62b8b.mount: Deactivated successfully.
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.660 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:46:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.661 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[40e2dccb-0af3-49bd-af0e-e831c93b9ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:46:37 compute-0 ceph-mon[74295]: pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 71 op/s
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.950 2 INFO nova.virt.libvirt.driver [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deleting instance files /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_del
Oct 07 14:46:37 compute-0 nova_compute[259550]: 2025-10-07 14:46:37.951 2 INFO nova.virt.libvirt.driver [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deletion of /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_del complete
Oct 07 14:46:38 compute-0 nova_compute[259550]: 2025-10-07 14:46:38.008 2 INFO nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 07 14:46:38 compute-0 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG oslo.service.loopingcall [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:46:38 compute-0 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:46:38 compute-0 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG nova.network.neutron [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:46:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:38 compute-0 nova_compute[259550]: 2025-10-07 14:46:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.345 2 DEBUG nova.network.neutron [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.369 2 INFO nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 1.36 seconds to deallocate network for instance.
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.429 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.430 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.477 2 DEBUG oslo_concurrency.processutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:46:39 compute-0 ceph-mon[74295]: pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.678 2 DEBUG nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.681 2 DEBUG nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.681 2 WARNING nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state deleted and task_state None.
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.792 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.793 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.833 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:46:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:46:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422391706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.913 2 DEBUG oslo_concurrency.processutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.917 2 DEBUG nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-deleted-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.917 2 INFO nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Neutron deleted interface 706c4bba-81cd-4c03-ac73-d8225f4ea15f; detaching it from the instance and deleting it from the info cache
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.918 2 DEBUG nova.network.neutron [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.924 2 DEBUG nova.compute.provider_tree [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.936 2 DEBUG nova.scheduler.client.report [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.940 2 DEBUG nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Detach interface failed, port_id=706c4bba-81cd-4c03-ac73-d8225f4ea15f, reason: Instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.957 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:39 compute-0 nova_compute[259550]: 2025-10-07 14:46:39.996 2 INFO nova.scheduler.client.report [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed
Oct 07 14:46:40 compute-0 nova_compute[259550]: 2025-10-07 14:46:40.070 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:46:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 73 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 14 KiB/s wr, 61 op/s
Oct 07 14:46:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/422391706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:46:41 compute-0 ceph-mon[74295]: pgmap v2533: 305 pgs: 305 active+clean; 73 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 14 KiB/s wr, 61 op/s
Oct 07 14:46:41 compute-0 nova_compute[259550]: 2025-10-07 14:46:41.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:41 compute-0 nova_compute[259550]: 2025-10-07 14:46:41.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:42 compute-0 podman[407739]: 2025-10-07 14:46:42.104298567 +0000 UTC m=+0.082557955 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:46:42 compute-0 podman[407738]: 2025-10-07 14:46:42.105001175 +0000 UTC m=+0.083757247 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:46:42 compute-0 nova_compute[259550]: 2025-10-07 14:46:42.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.2 KiB/s wr, 78 op/s
Oct 07 14:46:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:43 compute-0 nova_compute[259550]: 2025-10-07 14:46:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:43 compute-0 ceph-mon[74295]: pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.2 KiB/s wr, 78 op/s
Oct 07 14:46:43 compute-0 nova_compute[259550]: 2025-10-07 14:46:43.702 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848388.700753, 6721439b-34d7-4282-bbcd-37424c3f2691 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:46:43 compute-0 nova_compute[259550]: 2025-10-07 14:46:43.702 2 INFO nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Stopped (Lifecycle Event)
Oct 07 14:46:43 compute-0 nova_compute[259550]: 2025-10-07 14:46:43.723 2 DEBUG nova.compute.manager [None req-43dfb925-9f01-48b0-b2ee-a9c2f4043f18 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:46:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 6.9 KiB/s wr, 57 op/s
Oct 07 14:46:45 compute-0 ceph-mon[74295]: pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 6.9 KiB/s wr, 57 op/s
Oct 07 14:46:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 07 14:46:47 compute-0 nova_compute[259550]: 2025-10-07 14:46:47.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:47 compute-0 ceph-mon[74295]: pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 07 14:46:47 compute-0 nova_compute[259550]: 2025-10-07 14:46:47.822 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848392.8167634, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:46:47 compute-0 nova_compute[259550]: 2025-10-07 14:46:47.823 2 INFO nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Stopped (Lifecycle Event)
Oct 07 14:46:47 compute-0 nova_compute[259550]: 2025-10-07 14:46:47.839 2 DEBUG nova.compute.manager [None req-526f9d01-0556-496f-8db8-14ebad8ce9c8 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:46:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:48 compute-0 nova_compute[259550]: 2025-10-07 14:46:48.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:46:49 compute-0 ceph-mon[74295]: pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:46:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:46:51 compute-0 ceph-mon[74295]: pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:46:52 compute-0 nova_compute[259550]: 2025-10-07 14:46:52.342 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848397.339797, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:46:52 compute-0 nova_compute[259550]: 2025-10-07 14:46:52.342 2 INFO nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Stopped (Lifecycle Event)
Oct 07 14:46:52 compute-0 nova_compute[259550]: 2025-10-07 14:46:52.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:52 compute-0 nova_compute[259550]: 2025-10-07 14:46:52.389 2 DEBUG nova.compute.manager [None req-467b8fd1-ec8b-4832-b886-2162dd231f4b - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 682 B/s wr, 23 op/s
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:46:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:46:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:53 compute-0 nova_compute[259550]: 2025-10-07 14:46:53.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:53 compute-0 ceph-mon[74295]: pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 682 B/s wr, 23 op/s
Oct 07 14:46:53 compute-0 nova_compute[259550]: 2025-10-07 14:46:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:53 compute-0 nova_compute[259550]: 2025-10-07 14:46:53.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:46:54 compute-0 nova_compute[259550]: 2025-10-07 14:46:54.169 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:46:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:55 compute-0 ceph-mon[74295]: pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:56 compute-0 podman[407778]: 2025-10-07 14:46:56.121849257 +0000 UTC m=+0.110775325 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 07 14:46:56 compute-0 podman[407777]: 2025-10-07 14:46:56.1302297 +0000 UTC m=+0.112966474 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:46:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:57 compute-0 nova_compute[259550]: 2025-10-07 14:46:57.170 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:57 compute-0 nova_compute[259550]: 2025-10-07 14:46:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:57 compute-0 ceph-mon[74295]: pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:57 compute-0 nova_compute[259550]: 2025-10-07 14:46:57.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:46:58 compute-0 nova_compute[259550]: 2025-10-07 14:46:58.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:46:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:59 compute-0 ceph-mon[74295]: pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:46:59 compute-0 nova_compute[259550]: 2025-10-07 14:46:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:46:59 compute-0 nova_compute[259550]: 2025-10-07 14:46:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:47:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:00 compute-0 nova_compute[259550]: 2025-10-07 14:47:00.987 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:01 compute-0 ceph-mon[74295]: pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.960 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8::f816:3eff:fec6:9ccb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a7ef9e15-2145-4a59-b756-368bcbe72d69) old=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.961 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a7ef9e15-2145-4a59-b756-368bcbe72d69 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 updated
Oct 07 14:47:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.962 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:47:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.963 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63ca3704-482b-4de8-8b67-07bd32379789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:01 compute-0 nova_compute[259550]: 2025-10-07 14:47:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:01 compute-0 nova_compute[259550]: 2025-10-07 14:47:01.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.744 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.745 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.764 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.908 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.908 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.918 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:47:02 compute-0 nova_compute[259550]: 2025-10-07 14:47:02.919 2 INFO nova.compute.claims [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.058 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882769882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.529 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.537 2 DEBUG nova.compute.provider_tree [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.554 2 DEBUG nova.scheduler.client.report [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.577 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.578 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.624 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.625 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.643 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.666 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.739 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.741 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.741 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating image(s)
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.763 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.786 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.809 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.813 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:03 compute-0 ceph-mon[74295]: pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1882769882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.869 2 DEBUG nova.policy [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.914 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.915 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.916 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.916 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.939 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:03 compute-0 nova_compute[259550]: 2025-10-07 14:47:03.944 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1262caef-f43e-429a-b613-b4d54273e604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.269 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1262caef-f43e-429a-b613-b4d54273e604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.338 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.431 2 DEBUG nova.objects.instance [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.450 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.450 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Ensure instance console log exists: /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.633 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Successfully created port: b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:04 compute-0 nova_compute[259550]: 2025-10-07 14:47:04.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:47:05 compute-0 ceph-mon[74295]: pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.358 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Successfully updated port: b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.522 2 DEBUG nova.compute.manager [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.523 2 DEBUG nova.compute.manager [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.523 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.572 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:47:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct 07 14:47:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.919 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8:0:1:f816:3eff:fec6:9ccb 2001:db8::f816:3eff:fec6:9ccb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fec6:9ccb/64 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a7ef9e15-2145-4a59-b756-368bcbe72d69) old=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8::f816:3eff:fec6:9ccb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.920 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a7ef9e15-2145-4a59-b756-368bcbe72d69 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 updated
Oct 07 14:47:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.921 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:47:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65eb4691-2945-46be-9631-38ccc6729b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:06 compute-0 nova_compute[259550]: 2025-10-07 14:47:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:07 compute-0 nova_compute[259550]: 2025-10-07 14:47:07.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:07 compute-0 ceph-mon[74295]: pgmap v2546: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct 07 14:47:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.203991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428204039, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 250, "total_data_size": 2007379, "memory_usage": 2042792, "flush_reason": "Manual Compaction"}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 07 14:47:08 compute-0 nova_compute[259550]: 2025-10-07 14:47:08.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428211431, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1190783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52361, "largest_seqno": 53669, "table_properties": {"data_size": 1186086, "index_size": 2093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12477, "raw_average_key_size": 20, "raw_value_size": 1175857, "raw_average_value_size": 1956, "num_data_blocks": 95, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848298, "oldest_key_time": 1759848298, "file_creation_time": 1759848428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 7489 microseconds, and 4233 cpu microseconds.
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.211479) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1190783 bytes OK
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.211501) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212904) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212916) EVENT_LOG_v1 {"time_micros": 1759848428212912, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2001494, prev total WAL file size 2001494, number of live WAL files 2.
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.213738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1162KB)], [122(10180KB)]
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428213843, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11615545, "oldest_snapshot_seqno": -1}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7368 keys, 9090461 bytes, temperature: kUnknown
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428272915, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9090461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9043515, "index_size": 27442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 191927, "raw_average_key_size": 26, "raw_value_size": 8914079, "raw_average_value_size": 1209, "num_data_blocks": 1072, "num_entries": 7368, "num_filter_entries": 7368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.273233) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9090461 bytes
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.274492) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.3 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(17.4) write-amplify(7.6) OK, records in: 7821, records dropped: 453 output_compression: NoCompression
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.274507) EVENT_LOG_v1 {"time_micros": 1759848428274499, "job": 74, "event": "compaction_finished", "compaction_time_micros": 59187, "compaction_time_cpu_micros": 26196, "output_level": 6, "num_output_files": 1, "total_output_size": 9090461, "num_input_records": 7821, "num_output_records": 7368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428275114, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428277080, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.213604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:47:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.012 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.030 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.031 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance network_info: |[{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.031 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.032 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.034 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start _get_guest_xml network_info=[{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.040 2 WARNING nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.045 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.045 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.053 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.060 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858221718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.509 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:09 compute-0 sudo[408031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:09 compute-0 sudo[408031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:09 compute-0 sudo[408031]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.532 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.538 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:09 compute-0 sudo[408074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:47:09 compute-0 sudo[408074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:09 compute-0 sudo[408074]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:09 compute-0 sudo[408102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:09 compute-0 sudo[408102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:09 compute-0 sudo[408102]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:09 compute-0 sudo[408127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:47:09 compute-0 sudo[408127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944015782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:09 compute-0 nova_compute[259550]: 2025-10-07 14:47:09.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.014 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.016 2 DEBUG nova.virt.libvirt.vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:03Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.016 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.017 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.018 2 DEBUG nova.objects.instance [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.037 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <uuid>1262caef-f43e-429a-b613-b4d54273e604</uuid>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <name>instance-00000089</name>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:name>tempest-TestNetworkBasicOps-server-866328963</nova:name>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:47:09</nova:creationTime>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <nova:port uuid="b06adbc1-01d9-45e3-b4b6-71bc4f85a659">
Oct 07 14:47:10 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <system>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="serial">1262caef-f43e-429a-b613-b4d54273e604</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="uuid">1262caef-f43e-429a-b613-b4d54273e604</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </system>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <os>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </os>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <features>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </features>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1262caef-f43e-429a-b613-b4d54273e604_disk">
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1262caef-f43e-429a-b613-b4d54273e604_disk.config">
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:10 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:97:e6:f4"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <target dev="tapb06adbc1-01"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/console.log" append="off"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <video>
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </video>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:47:10 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:47:10 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:47:10 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:47:10 compute-0 nova_compute[259550]: </domain>
Oct 07 14:47:10 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.039 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Preparing to wait for external event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.041 2 DEBUG nova.virt.libvirt.vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:03Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.041 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.042 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.042 2 DEBUG os_vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb06adbc1-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb06adbc1-01, col_values=(('external_ids', {'iface-id': 'b06adbc1-01d9-45e3-b4b6-71bc4f85a659', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:e6:f4', 'vm-uuid': '1262caef-f43e-429a-b613-b4d54273e604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:10 compute-0 NetworkManager[44949]: <info>  [1759848430.0516] manager: (tapb06adbc1-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.060 2 INFO os_vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01')
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.067 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.125 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.126 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.126 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:97:e6:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.127 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Using config drive
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.149 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:10 compute-0 sudo[408127]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 2ee71841-74a7-4333-b1b6-d7b9897e040d does not exist
Oct 07 14:47:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 47fed264-4406-4f21-bef0-716647bec255 does not exist
Oct 07 14:47:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev de2399f8-d34e-4e54-8970-c0bb46f87b65 does not exist
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:47:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: pgmap v2547: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1858221718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2944015782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:47:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:47:10 compute-0 sudo[408224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:10 compute-0 sudo[408224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:10 compute-0 sudo[408224]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:10 compute-0 sudo[408249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:47:10 compute-0 sudo[408249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:10 compute-0 sudo[408249]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:10 compute-0 sudo[408274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:10 compute-0 sudo[408274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:10 compute-0 sudo[408274]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:10 compute-0 sudo[408299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:47:10 compute-0 sudo[408299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.757321194 +0000 UTC m=+0.049640650 container create b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:47:10 compute-0 systemd[1]: Started libpod-conmon-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope.
Oct 07 14:47:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.734725933 +0000 UTC m=+0.027045379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.847537241 +0000 UTC m=+0.139856687 container init b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.855233916 +0000 UTC m=+0.147553342 container start b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.858327918 +0000 UTC m=+0.150647344 container attach b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:47:10 compute-0 gracious_shannon[408380]: 167 167
Oct 07 14:47:10 compute-0 systemd[1]: libpod-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope: Deactivated successfully.
Oct 07 14:47:10 compute-0 conmon[408380]: conmon b941cab654576334b9cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope/container/memory.events
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.865659523 +0000 UTC m=+0.157978999 container died b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 14:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2da8b257e23abb35586d91ae6d4818d3e27f73cac246b84277254c481bc80f9-merged.mount: Deactivated successfully.
Oct 07 14:47:10 compute-0 podman[408364]: 2025-10-07 14:47:10.907114395 +0000 UTC m=+0.199433831 container remove b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:47:10 compute-0 systemd[1]: libpod-conmon-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope: Deactivated successfully.
Oct 07 14:47:10 compute-0 nova_compute[259550]: 2025-10-07 14:47:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:11 compute-0 podman[408405]: 2025-10-07 14:47:11.084521881 +0000 UTC m=+0.040199300 container create 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:47:11 compute-0 systemd[1]: Started libpod-conmon-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope.
Oct 07 14:47:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:11 compute-0 podman[408405]: 2025-10-07 14:47:11.065516945 +0000 UTC m=+0.021194384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:11 compute-0 podman[408405]: 2025-10-07 14:47:11.181880378 +0000 UTC m=+0.137557827 container init 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:47:11 compute-0 podman[408405]: 2025-10-07 14:47:11.190032295 +0000 UTC m=+0.145709714 container start 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:47:11 compute-0 podman[408405]: 2025-10-07 14:47:11.195184242 +0000 UTC m=+0.150861671 container attach 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:47:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433315290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.483 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.543 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.543 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.687 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.688 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.689 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.755 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1262caef-f43e-429a-b613-b4d54273e604 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:47:11 compute-0 nova_compute[259550]: 2025-10-07 14:47:11.796 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.011 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating config drive at /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.016 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvplaiwp1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.065 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.066 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.094 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.174 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvplaiwp1" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.200 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.208 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config 1262caef-f43e-429a-b613-b4d54273e604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:12 compute-0 ceph-mon[74295]: pgmap v2548: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2433315290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514102017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:12 compute-0 epic_sinoussi[408422]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:47:12 compute-0 epic_sinoussi[408422]: --> relative data size: 1.0
Oct 07 14:47:12 compute-0 epic_sinoussi[408422]: --> All data devices are unavailable
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.270 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.280 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.297 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:47:12 compute-0 systemd[1]: libpod-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Deactivated successfully.
Oct 07 14:47:12 compute-0 podman[408405]: 2025-10-07 14:47:12.306327136 +0000 UTC m=+1.262004545 container died 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:47:12 compute-0 systemd[1]: libpod-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Consumed 1.046s CPU time.
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.329 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.331 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e-merged.mount: Deactivated successfully.
Oct 07 14:47:12 compute-0 podman[408405]: 2025-10-07 14:47:12.377269142 +0000 UTC m=+1.332946551 container remove 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:47:12 compute-0 systemd[1]: libpod-conmon-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Deactivated successfully.
Oct 07 14:47:12 compute-0 sudo[408299]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:12 compute-0 podman[408539]: 2025-10-07 14:47:12.434536755 +0000 UTC m=+0.078946089 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 07 14:47:12 compute-0 podman[408532]: 2025-10-07 14:47:12.434537225 +0000 UTC m=+0.088560705 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.457 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config 1262caef-f43e-429a-b613-b4d54273e604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.458 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deleting local config drive /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config because it was imported into RBD.
Oct 07 14:47:12 compute-0 sudo[408588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:12 compute-0 sudo[408588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:12 compute-0 sudo[408588]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:12 compute-0 kernel: tapb06adbc1-01: entered promiscuous mode
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.5089] manager: (tapb06adbc1-01): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 ovn_controller[151684]: 2025-10-07T14:47:12Z|01514|binding|INFO|Claiming lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for this chassis.
Oct 07 14:47:12 compute-0 ovn_controller[151684]: 2025-10-07T14:47:12Z|01515|binding|INFO|b06adbc1-01d9-45e3-b4b6-71bc4f85a659: Claiming fa:16:3e:97:e6:f4 10.100.0.12
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.523 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:e6:f4 10.100.0.12'], port_security=['fa:16:3e:97:e6:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1262caef-f43e-429a-b613-b4d54273e604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3843b3bc-7e2a-472a-b856-d1d145332927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14a0a809-1526-4a53-a5bb-23565156e65a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b06adbc1-01d9-45e3-b4b6-71bc4f85a659) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.524 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 in datapath 08f8ca28-b7fe-4840-94a7-78acb08138e1 bound to our chassis
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.526 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f8ca28-b7fe-4840-94a7-78acb08138e1
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.539 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53ca28e3-5124-42c3-930d-4eb2cd2501b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.540 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f8ca28-b1 in ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.542 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f8ca28-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7a6d80-86a7-45d2-8622-f29d0e390427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3dacf8-d1e2-4640-a04b-eefc64bbeece]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 systemd-udevd[408652]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.564 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a48dd8-eace-43fc-a802-00b2ae0fa873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 systemd-machined[214580]: New machine qemu-171-instance-00000089.
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.5788] device (tapb06adbc1-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:47:12 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.5795] device (tapb06adbc1-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:47:12 compute-0 sudo[408621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 ovn_controller[151684]: 2025-10-07T14:47:12Z|01516|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 ovn-installed in OVS
Oct 07 14:47:12 compute-0 ovn_controller[151684]: 2025-10-07T14:47:12Z|01517|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 up in Southbound
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 sudo[408621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80a3ad5e-39ec-4957-8ddf-a3b269b77bd4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 sudo[408621]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.619 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8451f590-172f-4664-a681-1fb5526b1696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.6257] manager: (tap08f8ca28-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[161fb092-2181-453c-98bd-7ae412d45ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 systemd-udevd[408656]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:47:12 compute-0 sudo[408659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:12 compute-0 sudo[408659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.661 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[34902103-58c3-4194-8f6b-a867d78f0998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 sudo[408659]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.664 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a956ae-cc9e-4958-bc6a-bc48e975530b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.6964] device (tap08f8ca28-b0): carrier: link connected
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.701 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[495c13e2-6dff-41f2-ad77-6af8ff478057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.720 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ea56be-7318-4d60-bc5d-799c19f6c75b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f8ca28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:64:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901626, 'reachable_time': 40685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408731, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 sudo[408708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:47:12 compute-0 sudo[408708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fccdacb6-8c23-4aad-8e72-ce8a0a85500c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6481'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 901626, 'tstamp': 901626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408735, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8efafb1-e7cb-4513-b024-cbe49124567a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f8ca28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:64:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901626, 'reachable_time': 40685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408737, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.786 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91c83e89-63ef-453b-be6a-c16fe50f87ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.793 2 DEBUG nova.compute.manager [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.795 2 DEBUG nova.compute.manager [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Processing event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.853 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a4a642-5229-471b-8c35-a2cea628dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.855 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f8ca28-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.855 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.856 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f8ca28-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 NetworkManager[44949]: <info>  [1759848432.8581] manager: (tap08f8ca28-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct 07 14:47:12 compute-0 kernel: tap08f8ca28-b0: entered promiscuous mode
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.861 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f8ca28-b0, col_values=(('external_ids', {'iface-id': '4030b5ae-6c2e-4b07-9359-70a14ce783de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:12 compute-0 ovn_controller[151684]: 2025-10-07T14:47:12Z|01518|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 nova_compute[259550]: 2025-10-07 14:47:12.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.877 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fddab67-e2a6-4e31-81e3-e87bb31a8c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.878 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-08f8ca28-b7fe-4840-94a7-78acb08138e1
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 08f8ca28-b7fe-4840-94a7-78acb08138e1
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:47:12 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.880 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'env', 'PROCESS_TAG=haproxy-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f8ca28-b7fe-4840-94a7-78acb08138e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.063169484 +0000 UTC m=+0.042692056 container create 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 07 14:47:13 compute-0 systemd[1]: Started libpod-conmon-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope.
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.043584003 +0000 UTC m=+0.023106395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.159015801 +0000 UTC m=+0.138538183 container init 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.165972396 +0000 UTC m=+0.145494758 container start 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.169219983 +0000 UTC m=+0.148742445 container attach 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:47:13 compute-0 hungry_mendeleev[408839]: 167 167
Oct 07 14:47:13 compute-0 systemd[1]: libpod-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope: Deactivated successfully.
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.173193138 +0000 UTC m=+0.152715500 container died 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-64084bf4a950db4c93fe8df2d4014a5bb270528b01481ff4888e9aa43078558e-merged.mount: Deactivated successfully.
Oct 07 14:47:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:13 compute-0 podman[408824]: 2025-10-07 14:47:13.210774477 +0000 UTC m=+0.190296839 container remove 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1514102017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:13 compute-0 systemd[1]: libpod-conmon-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope: Deactivated successfully.
Oct 07 14:47:13 compute-0 podman[408878]: 2025-10-07 14:47:13.341182623 +0000 UTC m=+0.078759424 container create c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:47:13 compute-0 systemd[1]: Started libpod-conmon-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope.
Oct 07 14:47:13 compute-0 podman[408878]: 2025-10-07 14:47:13.315121391 +0000 UTC m=+0.052698212 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:47:13 compute-0 podman[408895]: 2025-10-07 14:47:13.396005141 +0000 UTC m=+0.053130073 container create 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 14:47:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd60f4915bfff9b9f5172271d3ec2486162b140a81a778a520bd35334ab4d772/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:13 compute-0 podman[408878]: 2025-10-07 14:47:13.420799119 +0000 UTC m=+0.158375940 container init c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:47:13 compute-0 podman[408878]: 2025-10-07 14:47:13.427458037 +0000 UTC m=+0.165034838 container start c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:47:13 compute-0 systemd[1]: Started libpod-conmon-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope.
Oct 07 14:47:13 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : New worker (408923) forked
Oct 07 14:47:13 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : Loading success.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.456 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.457 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.4563055, 1262caef-f43e-429a-b613-b4d54273e604 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.458 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Started (Lifecycle Event)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.464 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:47:13 compute-0 podman[408895]: 2025-10-07 14:47:13.369435525 +0000 UTC m=+0.026560477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.468 2 INFO nova.virt.libvirt.driver [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance spawned successfully.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.469 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:47:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.479 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.491 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.496 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 podman[408895]: 2025-10-07 14:47:13.498829034 +0000 UTC m=+0.155953986 container init 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:13 compute-0 podman[408895]: 2025-10-07 14:47:13.507076013 +0000 UTC m=+0.164200945 container start 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:47:13 compute-0 podman[408895]: 2025-10-07 14:47:13.51111618 +0000 UTC m=+0.168241112 container attach 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.535 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.536 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.456523, 1262caef-f43e-429a-b613-b4d54273e604 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.536 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Paused (Lifecycle Event)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.555 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.558 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.460603, 1262caef-f43e-429a-b613-b4d54273e604 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.558 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Resumed (Lifecycle Event)
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.589 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.604 2 INFO nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 9.86 seconds to spawn the instance on the hypervisor.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.604 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.638 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.682 2 INFO nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 10.87 seconds to build instance.
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.703 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.912 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.913 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:13 compute-0 nova_compute[259550]: 2025-10-07 14:47:13.930 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.007 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.007 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.013 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.014 2 INFO nova.compute.claims [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.144 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:14 compute-0 ceph-mon[74295]: pgmap v2549: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:14 compute-0 dreamy_turing[408921]: {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     "0": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "devices": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "/dev/loop3"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             ],
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_name": "ceph_lv0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_size": "21470642176",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "name": "ceph_lv0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "tags": {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_name": "ceph",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.crush_device_class": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.encrypted": "0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_id": "0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.vdo": "0"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             },
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "vg_name": "ceph_vg0"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         }
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     ],
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     "1": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "devices": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "/dev/loop4"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             ],
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_name": "ceph_lv1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_size": "21470642176",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "name": "ceph_lv1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "tags": {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_name": "ceph",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.crush_device_class": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.encrypted": "0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_id": "1",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.vdo": "0"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             },
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "vg_name": "ceph_vg1"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         }
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     ],
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     "2": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "devices": [
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "/dev/loop5"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             ],
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_name": "ceph_lv2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_size": "21470642176",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "name": "ceph_lv2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "tags": {
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.cluster_name": "ceph",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.crush_device_class": "",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.encrypted": "0",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osd_id": "2",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:                 "ceph.vdo": "0"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             },
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "type": "block",
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:             "vg_name": "ceph_vg2"
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:         }
Oct 07 14:47:14 compute-0 dreamy_turing[408921]:     ]
Oct 07 14:47:14 compute-0 dreamy_turing[408921]: }
Oct 07 14:47:14 compute-0 systemd[1]: libpod-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope: Deactivated successfully.
Oct 07 14:47:14 compute-0 conmon[408921]: conmon 8b942641f35873fee21e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope/container/memory.events
Oct 07 14:47:14 compute-0 podman[408895]: 2025-10-07 14:47:14.372599339 +0000 UTC m=+1.029724271 container died 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c-merged.mount: Deactivated successfully.
Oct 07 14:47:14 compute-0 podman[408895]: 2025-10-07 14:47:14.454392103 +0000 UTC m=+1.111517035 container remove 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:47:14 compute-0 systemd[1]: libpod-conmon-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope: Deactivated successfully.
Oct 07 14:47:14 compute-0 sudo[408708]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:14 compute-0 sudo[408971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:14 compute-0 sudo[408971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:14 compute-0 sudo[408971]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 07 14:47:14 compute-0 sudo[408996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:47:14 compute-0 sudo[408996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:14 compute-0 sudo[408996]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803854212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:14 compute-0 sudo[409021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:14 compute-0 sudo[409021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:14 compute-0 sudo[409021]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.688 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.707 2 DEBUG nova.compute.provider_tree [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.728 2 DEBUG nova.scheduler.client.report [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:47:14 compute-0 sudo[409048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:47:14 compute-0 sudo[409048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.761 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.764 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.822 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.823 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.843 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.866 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.981 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:47:14 compute-0 nova_compute[259550]: 2025-10-07 14:47:14.983 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating image(s)
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.006 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.029 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.058 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.062 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.103950008 +0000 UTC m=+0.052668530 container create b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.115 2 DEBUG nova.policy [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.122 2 DEBUG nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.123 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.123 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.124 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.124 2 DEBUG nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.124 2 WARNING nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received unexpected event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with vm_state active and task_state None.
Oct 07 14:47:15 compute-0 systemd[1]: Started libpod-conmon-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope.
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.162 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.165 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.166 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.166 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.081218664 +0000 UTC m=+0.029937206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.192010079 +0000 UTC m=+0.140728611 container init b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.197 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.202446387 +0000 UTC m=+0.151164899 container start b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.207417738 +0000 UTC m=+0.156136250 container attach b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 14:47:15 compute-0 practical_bhaskara[409185]: 167 167
Oct 07 14:47:15 compute-0 systemd[1]: libpod-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope: Deactivated successfully.
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.20859706 +0000 UTC m=+0.157315572 container died b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.207 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-33397d69dd89790f1aa496d5f348e5389cd27f79ae3720685eb098ad6e9fc0bf-merged.mount: Deactivated successfully.
Oct 07 14:47:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1803854212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:15 compute-0 podman[409150]: 2025-10-07 14:47:15.264003863 +0000 UTC m=+0.212722375 container remove b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:47:15 compute-0 systemd[1]: libpod-conmon-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope: Deactivated successfully.
Oct 07 14:47:15 compute-0 podman[409248]: 2025-10-07 14:47:15.481965276 +0000 UTC m=+0.067629328 container create 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:47:15 compute-0 systemd[1]: Started libpod-conmon-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope.
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.541 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:15 compute-0 podman[409248]: 2025-10-07 14:47:15.454155588 +0000 UTC m=+0.039819670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:15 compute-0 podman[409248]: 2025-10-07 14:47:15.596612083 +0000 UTC m=+0.182276125 container init 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:47:15 compute-0 podman[409248]: 2025-10-07 14:47:15.608613853 +0000 UTC m=+0.194277905 container start 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:47:15 compute-0 podman[409248]: 2025-10-07 14:47:15.613604955 +0000 UTC m=+0.199269037 container attach 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.635 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.727 2 DEBUG nova.objects.instance [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.752 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.753 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Ensure instance console log exists: /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.754 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.754 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:15 compute-0 nova_compute[259550]: 2025-10-07 14:47:15.755 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:16 compute-0 ceph-mon[74295]: pgmap v2550: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 07 14:47:16 compute-0 nova_compute[259550]: 2025-10-07 14:47:16.537 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Successfully created port: d90f9db1-8372-46fb-93ed-9be2902fe85c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]: {
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_id": 2,
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "type": "bluestore"
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     },
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_id": 1,
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "type": "bluestore"
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     },
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_id": 0,
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:         "type": "bluestore"
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]:     }
Oct 07 14:47:16 compute-0 peaceful_kalam[409264]: }
Oct 07 14:47:16 compute-0 systemd[1]: libpod-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope: Deactivated successfully.
Oct 07 14:47:16 compute-0 podman[409248]: 2025-10-07 14:47:16.574852616 +0000 UTC m=+1.160516668 container died 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d-merged.mount: Deactivated successfully.
Oct 07 14:47:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:47:16 compute-0 podman[409248]: 2025-10-07 14:47:16.623139469 +0000 UTC m=+1.208803521 container remove 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:47:16 compute-0 systemd[1]: libpod-conmon-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope: Deactivated successfully.
Oct 07 14:47:16 compute-0 sudo[409048]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:47:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:47:16 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0653c403-876e-46e3-8d8d-fcb0705f9b3e does not exist
Oct 07 14:47:16 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 63231e2b-34a9-40ba-9504-14a4ac5e6205 does not exist
Oct 07 14:47:16 compute-0 sudo[409383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:47:16 compute-0 sudo[409383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:16 compute-0 sudo[409383]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:16 compute-0 sudo[409408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:47:16 compute-0 sudo[409408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:47:16 compute-0 sudo[409408]: pam_unix(sudo:session): session closed for user root
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.339 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:17 compute-0 NetworkManager[44949]: <info>  [1759848437.4398] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Oct 07 14:47:17 compute-0 NetworkManager[44949]: <info>  [1759848437.4408] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.461 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Successfully updated port: d90f9db1-8372-46fb-93ed-9be2902fe85c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:17 compute-0 ovn_controller[151684]: 2025-10-07T14:47:17Z|01519|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.621 2 DEBUG nova.compute.manager [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.622 2 DEBUG nova.compute.manager [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.623 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:17 compute-0 ceph-mon[74295]: pgmap v2551: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:47:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:47:17 compute-0 nova_compute[259550]: 2025-10-07 14:47:17.698 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:47:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:18 compute-0 nova_compute[259550]: 2025-10-07 14:47:18.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 1.6 MiB/s wr, 37 op/s
Oct 07 14:47:19 compute-0 ceph-mon[74295]: pgmap v2552: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 1.6 MiB/s wr, 37 op/s
Oct 07 14:47:19 compute-0 nova_compute[259550]: 2025-10-07 14:47:19.715 2 DEBUG nova.compute.manager [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:19 compute-0 nova_compute[259550]: 2025-10-07 14:47:19.716 2 DEBUG nova.compute.manager [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:19 compute-0 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:19 compute-0 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:19 compute-0 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:20 compute-0 nova_compute[259550]: 2025-10-07 14:47:20.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 104 op/s
Oct 07 14:47:21 compute-0 ceph-mon[74295]: pgmap v2553: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 104 op/s
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:47:22
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms']
Oct 07 14:47:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:47:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:47:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.635 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.638 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.638 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.664 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.665 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.665 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance network_info: |[{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.666 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.666 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.669 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start _get_guest_xml network_info=[{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.674 2 WARNING nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.681 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.682 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:47:23 compute-0 ceph-mon[74295]: pgmap v2554: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.691 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.692 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.696 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.696 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:47:23 compute-0 nova_compute[259550]: 2025-10-07 14:47:23.699 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425288132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.163 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.192 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.198 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:47:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049854274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.642 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.645 2 DEBUG nova.virt.libvirt.vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:14Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.646 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.650 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.653 2 DEBUG nova.objects.instance [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.671 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <uuid>c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</uuid>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <name>instance-0000008a</name>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1379651617</nova:name>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:47:23</nova:creationTime>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <nova:port uuid="d90f9db1-8372-46fb-93ed-9be2902fe85c">
Oct 07 14:47:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe48:3908" ipVersion="6"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe48:3908" ipVersion="6"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <system>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="serial">c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="uuid">c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </system>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <os>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </os>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <features>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </features>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk">
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config">
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:24 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:48:39:08"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <target dev="tapd90f9db1-83"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/console.log" append="off"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <video>
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </video>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:47:24 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:47:24 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:47:24 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:47:24 compute-0 nova_compute[259550]: </domain>
Oct 07 14:47:24 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.672 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Preparing to wait for external event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.674 2 DEBUG nova.virt.libvirt.vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:14Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.674 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.675 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.676 2 DEBUG os_vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd90f9db1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd90f9db1-83, col_values=(('external_ids', {'iface-id': 'd90f9db1-8372-46fb-93ed-9be2902fe85c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:39:08', 'vm-uuid': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:24 compute-0 NetworkManager[44949]: <info>  [1759848444.6866] manager: (tapd90f9db1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1425288132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3049854274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.697 2 INFO os_vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83')
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.753 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.754 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.755 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:48:39:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.755 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Using config drive
Oct 07 14:47:24 compute-0 nova_compute[259550]: 2025-10-07 14:47:24.778 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:25 compute-0 ceph-mon[74295]: pgmap v2555: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:47:25 compute-0 nova_compute[259550]: 2025-10-07 14:47:25.965 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating config drive at /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config
Oct 07 14:47:25 compute-0 nova_compute[259550]: 2025-10-07 14:47:25.973 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc4njtkmh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.118 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc4njtkmh" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.140 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.143 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.293 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.294 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deleting local config drive /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config because it was imported into RBD.
Oct 07 14:47:26 compute-0 kernel: tapd90f9db1-83: entered promiscuous mode
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.3574] manager: (tapd90f9db1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|01520|binding|INFO|Claiming lport d90f9db1-8372-46fb-93ed-9be2902fe85c for this chassis.
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|01521|binding|INFO|d90f9db1-8372-46fb-93ed-9be2902fe85c: Claiming fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.373 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], port_security=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe48:3908/64 2001:db8::f816:3eff:fe48:3908/64', 'neutron:device_id': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d90f9db1-8372-46fb-93ed-9be2902fe85c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d90f9db1-8372-46fb-93ed-9be2902fe85c in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 bound to our chassis
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|01522|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c ovn-installed in OVS
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|01523|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c up in Southbound
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb8fcce-aa09-4a65-b88c-5b66df688831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.389 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap970990f9-71 in ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.390 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap970990f9-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11dfe8fb-9dbe-4d40-931d-ec770a2319fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a212ea-25a2-4654-9eb7-af106659a679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.407 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[910afb28-be11-403e-8a38-50cf6bef722f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8979d584-b20a-4680-b48d-fb1b5b40406f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 systemd-machined[214580]: New machine qemu-172-instance-0000008a.
Oct 07 14:47:26 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Oct 07 14:47:26 compute-0 systemd-udevd[409604]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.466 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fe6d5d-83fb-414a-b1a3-41a0c9761fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.4694] device (tapd90f9db1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.4702] device (tapd90f9db1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08ed92b9-9617-4abc-b333-645587407a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.4738] manager: (tap970990f9-70): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:e6:f4 10.100.0.12
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:e6:f4 10.100.0.12
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[55120493-a884-4fa3-9837-717fd97fd038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.512 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a776734b-1b07-42b5-8fd1-79960adbf80f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 podman[409567]: 2025-10-07 14:47:26.51995919 +0000 UTC m=+0.119185639 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.5394] device (tap970990f9-70): carrier: link connected
Oct 07 14:47:26 compute-0 podman[409569]: 2025-10-07 14:47:26.542072718 +0000 UTC m=+0.139623732 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.544 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[baac336d-659c-4c57-9da7-2f99e644689d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.560 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eceb1bd9-60d1-4d53-949d-ac874d3adf95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409647, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6db212-07b8-4c82-ac1f-ed28b9897d78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:9ccb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903010, 'tstamp': 903010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409648, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2aac0f4-6204-49d4-a7ad-fb05cb801781]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409649, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 148 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de7e7df5-94cb-4a2c-a072-6d0d5fcd5f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.708 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ee7284-dddb-45a3-8854-15267332514b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 NetworkManager[44949]: <info>  [1759848446.7123] manager: (tap970990f9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 07 14:47:26 compute-0 kernel: tap970990f9-70: entered promiscuous mode
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_controller[151684]: 2025-10-07T14:47:26Z|01524|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.721 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.722 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd50617-9564-4be6-9274-9e3c01b252d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.723 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-970990f9-7a8a-40de-9a55-f4c40d657453
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 970990f9-7a8a-40de-9a55-f4c40d657453
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:47:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.724 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'env', 'PROCESS_TAG=haproxy-970990f9-7a8a-40de-9a55-f4c40d657453', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/970990f9-7a8a-40de-9a55-f4c40d657453.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:47:26 compute-0 nova_compute[259550]: 2025-10-07 14:47:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:27 compute-0 podman[409723]: 2025-10-07 14:47:27.109540941 +0000 UTC m=+0.048049318 container create 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 07 14:47:27 compute-0 systemd[1]: Started libpod-conmon-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope.
Oct 07 14:47:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:47:27 compute-0 podman[409723]: 2025-10-07 14:47:27.084173887 +0000 UTC m=+0.022682294 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a24921b3c8e4f08e504680464e3b58519f09b02a877bb0e4246cd4ccca4536/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:47:27 compute-0 podman[409723]: 2025-10-07 14:47:27.194431858 +0000 UTC m=+0.132940235 container init 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:47:27 compute-0 podman[409723]: 2025-10-07 14:47:27.199632406 +0000 UTC m=+0.138140783 container start 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.219 2 DEBUG nova.compute.manager [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.221 2 DEBUG nova.compute.manager [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Processing event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:47:27 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : New worker (409744) forked
Oct 07 14:47:27 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : Loading success.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.228 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.227463, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.228 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Started (Lifecycle Event)
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.231 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.235 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.240 2 INFO nova.virt.libvirt.driver [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance spawned successfully.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.240 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.256 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.264 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.267 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.267 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.269 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.315 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.315 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.227604, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.316 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Paused (Lifecycle Event)
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.354 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.357 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.2345088, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.358 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Resumed (Lifecycle Event)
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.366 2 INFO nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 12.38 seconds to spawn the instance on the hypervisor.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.366 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.383 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.407 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.441 2 INFO nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 13.45 seconds to build instance.
Oct 07 14:47:27 compute-0 nova_compute[259550]: 2025-10-07 14:47:27.458 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:27 compute-0 ceph-mon[74295]: pgmap v2556: 305 pgs: 305 active+clean; 148 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Oct 07 14:47:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:28 compute-0 nova_compute[259550]: 2025-10-07 14:47:28.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:28.462 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:28 compute-0 nova_compute[259550]: 2025-10-07 14:47:28.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:28.463 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:47:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 164 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct 07 14:47:28 compute-0 nova_compute[259550]: 2025-10-07 14:47:28.987 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:47:28 compute-0 nova_compute[259550]: 2025-10-07 14:47:28.987 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.002 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.308 2 DEBUG nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.310 2 WARNING nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received unexpected event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with vm_state active and task_state None.
Oct 07 14:47:29 compute-0 nova_compute[259550]: 2025-10-07 14:47:29.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:29 compute-0 ceph-mon[74295]: pgmap v2557: 305 pgs: 305 active+clean; 164 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct 07 14:47:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 07 14:47:31 compute-0 ceph-mon[74295]: pgmap v2558: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.116 2 DEBUG nova.compute.manager [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.116 2 DEBUG nova.compute.manager [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 07 14:47:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:47:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:47:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:47:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:47:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:47:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:47:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.792 2 INFO nova.compute.manager [None req-047d5187-286d-4fa9-8709-ae5bd87b32af 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output
Oct 07 14:47:32 compute-0 nova_compute[259550]: 2025-10-07 14:47:32.797 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:47:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:33 compute-0 nova_compute[259550]: 2025-10-07 14:47:33.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:33 compute-0 ceph-mon[74295]: pgmap v2559: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 07 14:47:33 compute-0 ovn_controller[151684]: 2025-10-07T14:47:33Z|01525|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct 07 14:47:33 compute-0 ovn_controller[151684]: 2025-10-07T14:47:33Z|01526|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct 07 14:47:33 compute-0 nova_compute[259550]: 2025-10-07 14:47:33.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:33 compute-0 nova_compute[259550]: 2025-10-07 14:47:33.968 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:47:33 compute-0 nova_compute[259550]: 2025-10-07 14:47:33.969 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:34 compute-0 nova_compute[259550]: 2025-10-07 14:47:34.062 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 07 14:47:34 compute-0 nova_compute[259550]: 2025-10-07 14:47:34.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:35 compute-0 nova_compute[259550]: 2025-10-07 14:47:35.091 2 INFO nova.compute.manager [None req-b81f85ef-9592-47f1-b7c3-a9cdcd64e684 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output
Oct 07 14:47:35 compute-0 nova_compute[259550]: 2025-10-07 14:47:35.096 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:47:35 compute-0 ceph-mon[74295]: pgmap v2560: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 07 14:47:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 07 14:47:37 compute-0 nova_compute[259550]: 2025-10-07 14:47:37.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:37 compute-0 ceph-mon[74295]: pgmap v2561: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 07 14:47:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:38 compute-0 nova_compute[259550]: 2025-10-07 14:47:38.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:38 compute-0 nova_compute[259550]: 2025-10-07 14:47:38.449 2 INFO nova.compute.manager [None req-6c6ead25-02dc-4e56-9390-f4466daa31a4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output
Oct 07 14:47:38 compute-0 nova_compute[259550]: 2025-10-07 14:47:38.454 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 07 14:47:38 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:38.465 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 118 op/s
Oct 07 14:47:39 compute-0 nova_compute[259550]: 2025-10-07 14:47:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:40 compute-0 ceph-mon[74295]: pgmap v2562: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 118 op/s
Oct 07 14:47:40 compute-0 nova_compute[259550]: 2025-10-07 14:47:40.513 2 DEBUG nova.compute.manager [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:40 compute-0 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG nova.compute.manager [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:40 compute-0 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:40 compute-0 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:40 compute-0 nova_compute[259550]: 2025-10-07 14:47:40.515 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 449 KiB/s wr, 51 op/s
Oct 07 14:47:40 compute-0 ovn_controller[151684]: 2025-10-07T14:47:40Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:39:08 10.100.0.13
Oct 07 14:47:40 compute-0 ovn_controller[151684]: 2025-10-07T14:47:40Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:39:08 10.100.0.13
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.210 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.210 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.212 2 INFO nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Terminating instance
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.213 2 DEBUG nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:47:41 compute-0 kernel: tapb06adbc1-01 (unregistering): left promiscuous mode
Oct 07 14:47:41 compute-0 NetworkManager[44949]: <info>  [1759848461.2807] device (tapb06adbc1-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:41 compute-0 ovn_controller[151684]: 2025-10-07T14:47:41Z|01527|binding|INFO|Releasing lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 from this chassis (sb_readonly=0)
Oct 07 14:47:41 compute-0 ovn_controller[151684]: 2025-10-07T14:47:41Z|01528|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 down in Southbound
Oct 07 14:47:41 compute-0 ovn_controller[151684]: 2025-10-07T14:47:41Z|01529|binding|INFO|Removing iface tapb06adbc1-01 ovn-installed in OVS
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:41 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 07 14:47:41 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 13.392s CPU time.
Oct 07 14:47:41 compute-0 systemd-machined[214580]: Machine qemu-171-instance-00000089 terminated.
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.459 2 INFO nova.virt.libvirt.driver [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance destroyed successfully.
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.460 2 DEBUG nova.objects.instance [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.565 2 DEBUG nova.virt.libvirt.vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:47:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:47:13Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.566 2 DEBUG nova.network.os_vif_util [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.567 2 DEBUG nova.network.os_vif_util [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.568 2 DEBUG os_vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb06adbc1-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:47:41 compute-0 nova_compute[259550]: 2025-10-07 14:47:41.579 2 INFO os_vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01')
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:e6:f4 10.100.0.12'], port_security=['fa:16:3e:97:e6:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1262caef-f43e-429a-b613-b4d54273e604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3843b3bc-7e2a-472a-b856-d1d145332927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14a0a809-1526-4a53-a5bb-23565156e65a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b06adbc1-01d9-45e3-b4b6-71bc4f85a659) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 in datapath 08f8ca28-b7fe-4840-94a7-78acb08138e1 unbound from our chassis
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.672 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f8ca28-b7fe-4840-94a7-78acb08138e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0eebba-5525-428e-a122-cfa351f6f28c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.673 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 namespace which is not needed anymore
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : haproxy version is 2.8.14-c23fe91
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : path to executable is /usr/sbin/haproxy
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : Exiting Master process...
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : Exiting Master process...
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [ALERT]    (408917) : Current worker (408923) exited with code 143 (Terminated)
Oct 07 14:47:41 compute-0 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : All workers exited. Exiting... (0)
Oct 07 14:47:41 compute-0 systemd[1]: libpod-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope: Deactivated successfully.
Oct 07 14:47:41 compute-0 podman[409807]: 2025-10-07 14:47:41.847369798 +0000 UTC m=+0.051861279 container died c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078-userdata-shm.mount: Deactivated successfully.
Oct 07 14:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd60f4915bfff9b9f5172271d3ec2486162b140a81a778a520bd35334ab4d772-merged.mount: Deactivated successfully.
Oct 07 14:47:41 compute-0 podman[409807]: 2025-10-07 14:47:41.916252949 +0000 UTC m=+0.120744420 container cleanup c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:47:41 compute-0 systemd[1]: libpod-conmon-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope: Deactivated successfully.
Oct 07 14:47:41 compute-0 podman[409838]: 2025-10-07 14:47:41.98547532 +0000 UTC m=+0.049572280 container remove c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.994 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d090e373-5f21-43a5-ad11-167881107ac9]: (4, ('Tue Oct  7 02:47:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 (c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078)\nc37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078\nTue Oct  7 02:47:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 (c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078)\nc37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.998 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4d30d7-bdca-4e2d-b3bd-249323e04c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.999 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f8ca28-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:42 compute-0 kernel: tap08f8ca28-b0: left promiscuous mode
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[741dbda7-d783-4306-861d-cb3a2f412945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36e07ae3-039a-41a4-8bbe-e2851b5c52bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc50ccab-536f-4210-b504-7e165cc761ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.076 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba542a92-ba6c-47ff-b460-ce39b226a6d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901617, 'reachable_time': 31542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409854, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d08f8ca28\x2db7fe\x2d4840\x2d94a7\x2d78acb08138e1.mount: Deactivated successfully.
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.082 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:47:42 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.082 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[62ecc3b5-84d7-4e80-919d-8d7b5b61913e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.154 2 INFO nova.virt.libvirt.driver [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deleting instance files /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604_del
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.155 2 INFO nova.virt.libvirt.driver [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deletion of /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604_del complete
Oct 07 14:47:42 compute-0 ceph-mon[74295]: pgmap v2563: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 449 KiB/s wr, 51 op/s
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.336 2 INFO nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG oslo.service.loopingcall [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG nova.network.neutron [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:47:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 386 KiB/s wr, 20 op/s
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.627 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.627 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:47:42 compute-0 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:47:43 compute-0 podman[409855]: 2025-10-07 14:47:43.095796092 +0000 UTC m=+0.060026107 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:47:43 compute-0 podman[409856]: 2025-10-07 14:47:43.095814753 +0000 UTC m=+0.058904948 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:47:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:43 compute-0 nova_compute[259550]: 2025-10-07 14:47:43.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.157 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.157 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:44 compute-0 ceph-mon[74295]: pgmap v2564: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 386 KiB/s wr, 20 op/s
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.306 2 DEBUG nova.network.neutron [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.358 2 DEBUG nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-deleted-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.358 2 INFO nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Neutron deleted interface b06adbc1-01d9-45e3-b4b6-71bc4f85a659; detaching it from the instance and deleting it from the info cache
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.358 2 DEBUG nova.network.neutron [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.476 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.520 2 INFO nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 2.18 seconds to deallocate network for instance.
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.526 2 DEBUG nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Detach interface failed, port_id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659, reason: Instance 1262caef-f43e-429a-b613-b4d54273e604 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:47:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 159 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.641 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.641 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.825 2 DEBUG oslo_concurrency.processutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.866 2 DEBUG nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.866 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:47:44 compute-0 nova_compute[259550]: 2025-10-07 14:47:44.867 2 WARNING nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received unexpected event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with vm_state deleted and task_state None.
Oct 07 14:47:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257398268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:45 compute-0 nova_compute[259550]: 2025-10-07 14:47:45.429 2 DEBUG oslo_concurrency.processutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:45 compute-0 nova_compute[259550]: 2025-10-07 14:47:45.434 2 DEBUG nova.compute.provider_tree [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:47:45 compute-0 nova_compute[259550]: 2025-10-07 14:47:45.517 2 DEBUG nova.scheduler.client.report [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:47:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2257398268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:45 compute-0 nova_compute[259550]: 2025-10-07 14:47:45.650 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:45 compute-0 nova_compute[259550]: 2025-10-07 14:47:45.900 2 INFO nova.scheduler.client.report [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 1262caef-f43e-429a-b613-b4d54273e604
Oct 07 14:47:46 compute-0 nova_compute[259550]: 2025-10-07 14:47:46.082 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:46 compute-0 nova_compute[259550]: 2025-10-07 14:47:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 07 14:47:46 compute-0 ceph-mon[74295]: pgmap v2565: 305 pgs: 305 active+clean; 159 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 07 14:47:47 compute-0 ceph-mon[74295]: pgmap v2566: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 07 14:47:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:48 compute-0 nova_compute[259550]: 2025-10-07 14:47:48.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 07 14:47:49 compute-0 ceph-mon[74295]: pgmap v2567: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 07 14:47:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 07 14:47:51 compute-0 nova_compute[259550]: 2025-10-07 14:47:51.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:51 compute-0 ceph-mon[74295]: pgmap v2568: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.317 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.317 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.338 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.431 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.432 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.437 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.437 2 INFO nova.compute.claims [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:47:52 compute-0 ovn_controller[151684]: 2025-10-07T14:47:52Z|01530|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.562 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:52 compute-0 nova_compute[259550]: 2025-10-07 14:47:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:47:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:47:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:47:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335214207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.013 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.018 2 DEBUG nova.compute.provider_tree [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.036 2 DEBUG nova.scheduler.client.report [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.061 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.062 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.113 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.113 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.134 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.151 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:47:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.240 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.241 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.241 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating image(s)
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.260 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.284 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.308 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.314 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.395 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.396 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.397 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.397 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.423 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.427 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:53 compute-0 nova_compute[259550]: 2025-10-07 14:47:53.496 2 DEBUG nova.policy [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:47:53 compute-0 ceph-mon[74295]: pgmap v2569: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 07 14:47:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/335214207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.276 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.336 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.565 2 DEBUG nova.objects.instance [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.586 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.586 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Ensure instance console log exists: /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 07 14:47:54 compute-0 nova_compute[259550]: 2025-10-07 14:47:54.733 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Successfully created port: 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:47:55 compute-0 ceph-mon[74295]: pgmap v2570: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.038 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Successfully updated port: 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.106 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.107 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.107 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.210 2 DEBUG nova.compute.manager [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.210 2 DEBUG nova.compute.manager [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.211 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.420 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.455 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848461.454498, 1262caef-f43e-429a-b613-b4d54273e604 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.456 2 INFO nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Stopped (Lifecycle Event)
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.482 2 DEBUG nova.compute.manager [None req-87e5fb10-4843-4e58-90b6-1c7f82ef0ac2 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:47:56 compute-0 nova_compute[259550]: 2025-10-07 14:47:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 137 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 968 KiB/s wr, 22 op/s
Oct 07 14:47:57 compute-0 podman[410103]: 2025-10-07 14:47:57.065380699 +0000 UTC m=+0.050486443 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 07 14:47:57 compute-0 podman[410104]: 2025-10-07 14:47:57.13392011 +0000 UTC m=+0.112080750 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:47:57 compute-0 ceph-mon[74295]: pgmap v2571: 305 pgs: 305 active+clean; 137 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 968 KiB/s wr, 22 op/s
Oct 07 14:47:57 compute-0 nova_compute[259550]: 2025-10-07 14:47:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.388 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.409 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.410 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance network_info: |[{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.411 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.412 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.420 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start _get_guest_xml network_info=[{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.428 2 WARNING nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.436 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.437 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.444 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.444 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.445 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.445 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.446 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.446 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.449 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.451 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471712782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.922 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.959 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:58 compute-0 nova_compute[259550]: 2025-10-07 14:47:58.966 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.019 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:47:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:47:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1872657069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.431 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.433 2 DEBUG nova.virt.libvirt.vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:53Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.433 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.435 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.436 2 DEBUG nova.objects.instance [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.454 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <uuid>96b2365c-ce47-4fa4-bff7-51dd2e4ac413</uuid>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <name>instance-0000008b</name>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-179829393</nova:name>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:47:58</nova:creationTime>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <nova:port uuid="07e00b31-d9ec-46d0-927f-1d89f6d03bc6">
Oct 07 14:47:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:143f" ipVersion="6"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:143f" ipVersion="6"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <system>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="serial">96b2365c-ce47-4fa4-bff7-51dd2e4ac413</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="uuid">96b2365c-ce47-4fa4-bff7-51dd2e4ac413</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </system>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <os>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </os>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <features>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </features>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk">
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config">
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:47:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e6:14:3f"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <target dev="tap07e00b31-d9"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/console.log" append="off"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <video>
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </video>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:47:59 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:47:59 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:47:59 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:47:59 compute-0 nova_compute[259550]: </domain>
Oct 07 14:47:59 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.454 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Preparing to wait for external event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.456 2 DEBUG nova.virt.libvirt.vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:53Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.456 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.457 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.457 2 DEBUG os_vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07e00b31-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07e00b31-d9, col_values=(('external_ids', {'iface-id': '07e00b31-d9ec-46d0-927f-1d89f6d03bc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:14:3f', 'vm-uuid': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:47:59 compute-0 NetworkManager[44949]: <info>  [1759848479.4658] manager: (tap07e00b31-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.473 2 INFO os_vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9')
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e6:14:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.541 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Using config drive
Oct 07 14:47:59 compute-0 nova_compute[259550]: 2025-10-07 14:47:59.566 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:47:59 compute-0 ceph-mon[74295]: pgmap v2572: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:47:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2471712782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:47:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1872657069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.081 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.081 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.082 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.121 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating config drive at /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.127 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sb3pk2c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.272 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sb3pk2c" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.302 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.306 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.496 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.496 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deleting local config drive /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config because it was imported into RBD.
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.538 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.539 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:00 compute-0 kernel: tap07e00b31-d9: entered promiscuous mode
Oct 07 14:48:00 compute-0 NetworkManager[44949]: <info>  [1759848480.5633] manager: (tap07e00b31-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:00 compute-0 ovn_controller[151684]: 2025-10-07T14:48:00Z|01531|binding|INFO|Claiming lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for this chassis.
Oct 07 14:48:00 compute-0 ovn_controller[151684]: 2025-10-07T14:48:00Z|01532|binding|INFO|07e00b31-d9ec-46d0-927f-1d89f6d03bc6: Claiming fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.566 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.571 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], port_security=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:143f/64 2001:db8::f816:3eff:fee6:143f/64', 'neutron:device_id': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=07e00b31-d9ec-46d0-927f-1d89f6d03bc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 bound to our chassis
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.573 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453
Oct 07 14:48:00 compute-0 ovn_controller[151684]: 2025-10-07T14:48:00Z|01533|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 ovn-installed in OVS
Oct 07 14:48:00 compute-0 ovn_controller[151684]: 2025-10-07T14:48:00Z|01534|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 up in Southbound
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:00 compute-0 systemd-udevd[410285]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b5f8e5-28b0-4e7f-a606-918f7ebebdc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 systemd-machined[214580]: New machine qemu-173-instance-0000008b.
Oct 07 14:48:00 compute-0 NetworkManager[44949]: <info>  [1759848480.6087] device (tap07e00b31-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:48:00 compute-0 NetworkManager[44949]: <info>  [1759848480.6105] device (tap07e00b31-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:48:00 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Oct 07 14:48:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.630 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c62abfef-fee1-4ad4-a0a8-a4fa67f3d940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.633 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4a62b8-95b9-428d-9084-9da85d5f9c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.674 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[589f0329-93f3-4090-8fed-3b1fd5e73806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7be3bd-d3e5-4b4c-ad25-55830b8b689f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410297, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.710 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eba998b2-e46b-482d-a71c-1d7f85e4bae6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903023, 'tstamp': 903023}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410299, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903026, 'tstamp': 903026}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410299, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.801 2 DEBUG nova.compute.manager [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:00 compute-0 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG nova.compute.manager [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Processing event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.566 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.568 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.565543, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.568 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Started (Lifecycle Event)
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.573 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.577 2 INFO nova.virt.libvirt.driver [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance spawned successfully.
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.578 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.601 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.605 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.605 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.607 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.638 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.5659757, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.638 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Paused (Lifecycle Event)
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.674 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.677 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.5709083, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.677 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Resumed (Lifecycle Event)
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.682 2 INFO nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 8.44 seconds to spawn the instance on the hypervisor.
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.683 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.694 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.696 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.727 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.768 2 INFO nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 9.37 seconds to build instance.
Oct 07 14:48:01 compute-0 ceph-mon[74295]: pgmap v2573: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.787 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:01 compute-0 nova_compute[259550]: 2025-10-07 14:48:01.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:48:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.911 2 DEBUG nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.911 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.912 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.912 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.913 2 DEBUG nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.913 2 WARNING nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state active and task_state None.
Oct 07 14:48:02 compute-0 nova_compute[259550]: 2025-10-07 14:48:02.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:03 compute-0 nova_compute[259550]: 2025-10-07 14:48:03.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:03 compute-0 ceph-mon[74295]: pgmap v2574: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:48:04 compute-0 nova_compute[259550]: 2025-10-07 14:48:04.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct 07 14:48:05 compute-0 nova_compute[259550]: 2025-10-07 14:48:05.095 2 DEBUG nova.compute.manager [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:05 compute-0 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG nova.compute.manager [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:48:05 compute-0 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:48:05 compute-0 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:48:05 compute-0 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:48:05 compute-0 ceph-mon[74295]: pgmap v2575: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct 07 14:48:06 compute-0 nova_compute[259550]: 2025-10-07 14:48:06.574 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:48:06 compute-0 nova_compute[259550]: 2025-10-07 14:48:06.575 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:06 compute-0 nova_compute[259550]: 2025-10-07 14:48:06.595 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:48:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:48:07 compute-0 ceph-mon[74295]: pgmap v2576: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.816629) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487816667, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 930792, "memory_usage": 945728, "flush_reason": "Manual Compaction"}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487823373, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 921923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53670, "largest_seqno": 54419, "table_properties": {"data_size": 918037, "index_size": 1666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8847, "raw_average_key_size": 19, "raw_value_size": 910236, "raw_average_value_size": 2013, "num_data_blocks": 74, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848429, "oldest_key_time": 1759848429, "file_creation_time": 1759848487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 6778 microseconds, and 3537 cpu microseconds.
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.823403) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 921923 bytes OK
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.823421) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825690) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825704) EVENT_LOG_v1 {"time_micros": 1759848487825699, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 926947, prev total WAL file size 926947, number of live WAL files 2.
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.826224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(900KB)], [125(8877KB)]
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487826257, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10012384, "oldest_snapshot_seqno": -1}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7306 keys, 8317491 bytes, temperature: kUnknown
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487879560, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8317491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8271652, "index_size": 26462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 191347, "raw_average_key_size": 26, "raw_value_size": 8144006, "raw_average_value_size": 1114, "num_data_blocks": 1024, "num_entries": 7306, "num_filter_entries": 7306, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.879814) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8317491 bytes
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.881618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.6 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(19.9) write-amplify(9.0) OK, records in: 7820, records dropped: 514 output_compression: NoCompression
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.881765) EVENT_LOG_v1 {"time_micros": 1759848487881755, "job": 76, "event": "compaction_finished", "compaction_time_micros": 53381, "compaction_time_cpu_micros": 27117, "output_level": 6, "num_output_files": 1, "total_output_size": 8317491, "num_input_records": 7820, "num_output_records": 7306, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487882060, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487883649, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.826158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:07 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:48:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:08 compute-0 nova_compute[259550]: 2025-10-07 14:48:08.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 897 KiB/s wr, 99 op/s
Oct 07 14:48:09 compute-0 nova_compute[259550]: 2025-10-07 14:48:09.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:09 compute-0 ceph-mon[74295]: pgmap v2577: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 897 KiB/s wr, 99 op/s
Oct 07 14:48:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:48:10 compute-0 nova_compute[259550]: 2025-10-07 14:48:10.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.036 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.038 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:48:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334848461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.562 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.637 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.637 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.642 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.642 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.848 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.849 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3207MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.850 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:11 compute-0 ceph-mon[74295]: pgmap v2578: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:48:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3334848461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.850 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.948 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c621ddbd-d6b8-461e-9374-4f7e50d0ca5f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.949 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.950 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:48:11 compute-0 nova_compute[259550]: 2025-10-07 14:48:11.951 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.041 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:48:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345827177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.519 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.525 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.546 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.568 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:48:12 compute-0 nova_compute[259550]: 2025-10-07 14:48:12.568 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:48:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2345827177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:48:13 compute-0 ceph-mon[74295]: pgmap v2579: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:48:13 compute-0 nova_compute[259550]: 2025-10-07 14:48:13.972 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:48:14 compute-0 podman[410388]: 2025-10-07 14:48:14.092417623 +0000 UTC m=+0.076538396 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 07 14:48:14 compute-0 podman[410389]: 2025-10-07 14:48:14.114903751 +0000 UTC m=+0.099256290 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:48:14 compute-0 nova_compute[259550]: 2025-10-07 14:48:14.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 168 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 247 KiB/s wr, 83 op/s
Oct 07 14:48:14 compute-0 ovn_controller[151684]: 2025-10-07T14:48:14Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:14:3f 10.100.0.7
Oct 07 14:48:14 compute-0 ovn_controller[151684]: 2025-10-07T14:48:14Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:14:3f 10.100.0.7
Oct 07 14:48:15 compute-0 ceph-mon[74295]: pgmap v2580: 305 pgs: 305 active+clean; 168 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 247 KiB/s wr, 83 op/s
Oct 07 14:48:16 compute-0 nova_compute[259550]: 2025-10-07 14:48:16.547 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:16 compute-0 nova_compute[259550]: 2025-10-07 14:48:16.564 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:48:16 compute-0 nova_compute[259550]: 2025-10-07 14:48:16.565 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:48:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 183 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 14:48:16 compute-0 sudo[410424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:16 compute-0 sudo[410424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:16 compute-0 sudo[410424]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:16 compute-0 sudo[410449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:48:16 compute-0 sudo[410449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:16 compute-0 sudo[410449]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:16 compute-0 nova_compute[259550]: 2025-10-07 14:48:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:17 compute-0 sudo[410474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:17 compute-0 sudo[410474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 sudo[410474]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:17 compute-0 sudo[410499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:48:17 compute-0 sudo[410499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 sudo[410499]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 06b9c6ef-7f49-4a97-906a-0a7869133d43 does not exist
Oct 07 14:48:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1218c77f-5e49-41df-9e30-6748b153180f does not exist
Oct 07 14:48:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a3572ace-65af-4dbf-8180-7cab3b631f56 does not exist
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:48:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:48:17 compute-0 sudo[410554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:17 compute-0 sudo[410554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 sudo[410554]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:17 compute-0 sudo[410579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:48:17 compute-0 sudo[410579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 sudo[410579]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:17 compute-0 sudo[410604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:17 compute-0 sudo[410604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 sudo[410604]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:17 compute-0 sudo[410629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:48:17 compute-0 sudo[410629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:17 compute-0 ceph-mon[74295]: pgmap v2581: 305 pgs: 305 active+clean; 183 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:48:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:48:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:18 compute-0 nova_compute[259550]: 2025-10-07 14:48:18.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.21905401 +0000 UTC m=+0.020549197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.345181883 +0000 UTC m=+0.146677040 container create 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:48:18 compute-0 systemd[1]: Started libpod-conmon-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope.
Oct 07 14:48:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.531639678 +0000 UTC m=+0.333134845 container init 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.545692582 +0000 UTC m=+0.347187749 container start 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:48:18 compute-0 systemd[1]: libpod-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope: Deactivated successfully.
Oct 07 14:48:18 compute-0 mystifying_buck[410711]: 167 167
Oct 07 14:48:18 compute-0 conmon[410711]: conmon 42558a698836e7566b7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope/container/memory.events
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.584504384 +0000 UTC m=+0.385999571 container attach 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.586793215 +0000 UTC m=+0.388288372 container died 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:48:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8164d63db1cce6b3ca56faaf280cf0d859bc6672ff15746d290ed69d03a37237-merged.mount: Deactivated successfully.
Oct 07 14:48:18 compute-0 podman[410694]: 2025-10-07 14:48:18.916041426 +0000 UTC m=+0.717536583 container remove 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:48:18 compute-0 systemd[1]: libpod-conmon-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope: Deactivated successfully.
Oct 07 14:48:19 compute-0 podman[410735]: 2025-10-07 14:48:19.105753559 +0000 UTC m=+0.048015267 container create 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:19 compute-0 systemd[1]: Started libpod-conmon-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope.
Oct 07 14:48:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:19 compute-0 podman[410735]: 2025-10-07 14:48:19.085694356 +0000 UTC m=+0.027956094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:19 compute-0 podman[410735]: 2025-10-07 14:48:19.198538586 +0000 UTC m=+0.140800414 container init 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:48:19 compute-0 podman[410735]: 2025-10-07 14:48:19.207448942 +0000 UTC m=+0.149710660 container start 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:48:19 compute-0 podman[410735]: 2025-10-07 14:48:19.211083369 +0000 UTC m=+0.153345077 container attach 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:48:19 compute-0 nova_compute[259550]: 2025-10-07 14:48:19.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:19 compute-0 ceph-mon[74295]: pgmap v2582: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:20 compute-0 eager_cori[410751]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:48:20 compute-0 eager_cori[410751]: --> relative data size: 1.0
Oct 07 14:48:20 compute-0 eager_cori[410751]: --> All data devices are unavailable
Oct 07 14:48:20 compute-0 systemd[1]: libpod-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Deactivated successfully.
Oct 07 14:48:20 compute-0 systemd[1]: libpod-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Consumed 1.154s CPU time.
Oct 07 14:48:20 compute-0 podman[410735]: 2025-10-07 14:48:20.424475701 +0000 UTC m=+1.366737419 container died 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74-merged.mount: Deactivated successfully.
Oct 07 14:48:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:20 compute-0 podman[410735]: 2025-10-07 14:48:20.93232954 +0000 UTC m=+1.874591248 container remove 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:48:20 compute-0 systemd[1]: libpod-conmon-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Deactivated successfully.
Oct 07 14:48:20 compute-0 sudo[410629]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:21 compute-0 sudo[410794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:21 compute-0 sudo[410794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:21 compute-0 sudo[410794]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:21 compute-0 sudo[410819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:48:21 compute-0 sudo[410819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:21 compute-0 sudo[410819]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:21 compute-0 sudo[410844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:21 compute-0 sudo[410844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:21 compute-0 sudo[410844]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:21 compute-0 sudo[410869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:48:21 compute-0 sudo[410869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:21 compute-0 podman[410933]: 2025-10-07 14:48:21.558976676 +0000 UTC m=+0.024815150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:21 compute-0 podman[410933]: 2025-10-07 14:48:21.816572763 +0000 UTC m=+0.282411217 container create d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:48:21 compute-0 systemd[1]: Started libpod-conmon-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope.
Oct 07 14:48:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:22 compute-0 podman[410933]: 2025-10-07 14:48:22.208776149 +0000 UTC m=+0.674614613 container init d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:48:22 compute-0 podman[410933]: 2025-10-07 14:48:22.216473213 +0000 UTC m=+0.682311657 container start d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:48:22 compute-0 ceph-mon[74295]: pgmap v2583: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:22 compute-0 silly_poitras[410949]: 167 167
Oct 07 14:48:22 compute-0 systemd[1]: libpod-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope: Deactivated successfully.
Oct 07 14:48:22 compute-0 podman[410933]: 2025-10-07 14:48:22.295651568 +0000 UTC m=+0.761490032 container attach d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:48:22 compute-0 podman[410933]: 2025-10-07 14:48:22.296848409 +0000 UTC m=+0.762686853 container died d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c88d8eed651f384757c646631e620f75ced2493cc7cfa7c0b1482eaab4e95c-merged.mount: Deactivated successfully.
Oct 07 14:48:22 compute-0 podman[410933]: 2025-10-07 14:48:22.403526035 +0000 UTC m=+0.869364479 container remove d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:22 compute-0 systemd[1]: libpod-conmon-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope: Deactivated successfully.
Oct 07 14:48:22 compute-0 podman[410975]: 2025-10-07 14:48:22.577479349 +0000 UTC m=+0.042469500 container create 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:48:22 compute-0 systemd[1]: Started libpod-conmon-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope.
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:22 compute-0 podman[410975]: 2025-10-07 14:48:22.557356083 +0000 UTC m=+0.022346254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:22 compute-0 podman[410975]: 2025-10-07 14:48:22.677769314 +0000 UTC m=+0.142759495 container init 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:22 compute-0 podman[410975]: 2025-10-07 14:48:22.684134163 +0000 UTC m=+0.149124324 container start 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:48:22 compute-0 podman[410975]: 2025-10-07 14:48:22.689321152 +0000 UTC m=+0.154311323 container attach 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:48:22
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images']
Oct 07 14:48:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.072 2 DEBUG nova.compute.manager [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.074 2 DEBUG nova.compute.manager [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.135 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.136 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.137 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.137 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.138 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.139 2 INFO nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Terminating instance
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.140 2 DEBUG nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:48:23 compute-0 kernel: tap07e00b31-d9 (unregistering): left promiscuous mode
Oct 07 14:48:23 compute-0 NetworkManager[44949]: <info>  [1759848503.2097] device (tap07e00b31-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:48:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:23 compute-0 ovn_controller[151684]: 2025-10-07T14:48:23Z|01535|binding|INFO|Releasing lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 from this chassis (sb_readonly=0)
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 ovn_controller[151684]: 2025-10-07T14:48:23Z|01536|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 down in Southbound
Oct 07 14:48:23 compute-0 ovn_controller[151684]: 2025-10-07T14:48:23Z|01537|binding|INFO|Removing iface tap07e00b31-d9 ovn-installed in OVS
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.242 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], port_security=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:143f/64 2001:db8::f816:3eff:fee6:143f/64', 'neutron:device_id': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=07e00b31-d9ec-46d0-927f-1d89f6d03bc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.244 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 unbound from our chassis
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.245 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b17e96-c8ec-4666-9ab9-33b581a3abb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 07 14:48:23 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 13.822s CPU time.
Oct 07 14:48:23 compute-0 systemd-machined[214580]: Machine qemu-173-instance-0000008b terminated.
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab41ed2-030a-46b4-bc6f-00e21a40490f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.304 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d73f3a-3c21-4d75-a9ca-71d5bd475d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.337 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d12e16-49e7-42c2-8986-8b43373b6ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbf752f-dc96-4033-befc-7e46347313ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411009, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.386 2 INFO nova.virt.libvirt.driver [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance destroyed successfully.
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.386 2 DEBUG nova.objects.instance [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.386 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f40bac61-5722-477a-bdf9-68f43879a00f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903023, 'tstamp': 903023}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411017, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903026, 'tstamp': 903026}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411017, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.388 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:23 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.396 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.403 2 DEBUG nova.virt.libvirt.vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:48:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:48:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.404 2 DEBUG nova.network.os_vif_util [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.405 2 DEBUG nova.network.os_vif_util [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.406 2 DEBUG os_vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07e00b31-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.414 2 INFO os_vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9')
Oct 07 14:48:23 compute-0 sad_hoover[410993]: {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     "0": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "devices": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "/dev/loop3"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             ],
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_name": "ceph_lv0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_size": "21470642176",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "name": "ceph_lv0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "tags": {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_name": "ceph",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.crush_device_class": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.encrypted": "0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_id": "0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.vdo": "0"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             },
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "vg_name": "ceph_vg0"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         }
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     ],
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     "1": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "devices": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "/dev/loop4"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             ],
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_name": "ceph_lv1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_size": "21470642176",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "name": "ceph_lv1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "tags": {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_name": "ceph",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.crush_device_class": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.encrypted": "0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_id": "1",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.vdo": "0"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             },
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "vg_name": "ceph_vg1"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         }
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     ],
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     "2": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "devices": [
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "/dev/loop5"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             ],
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_name": "ceph_lv2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_size": "21470642176",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "name": "ceph_lv2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "tags": {
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.cluster_name": "ceph",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.crush_device_class": "",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.encrypted": "0",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osd_id": "2",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:                 "ceph.vdo": "0"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             },
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "type": "block",
Oct 07 14:48:23 compute-0 sad_hoover[410993]:             "vg_name": "ceph_vg2"
Oct 07 14:48:23 compute-0 sad_hoover[410993]:         }
Oct 07 14:48:23 compute-0 sad_hoover[410993]:     ]
Oct 07 14:48:23 compute-0 sad_hoover[410993]: }
Oct 07 14:48:23 compute-0 systemd[1]: libpod-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope: Deactivated successfully.
Oct 07 14:48:23 compute-0 podman[410975]: 2025-10-07 14:48:23.542285543 +0000 UTC m=+1.007275714 container died 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826-merged.mount: Deactivated successfully.
Oct 07 14:48:23 compute-0 podman[410975]: 2025-10-07 14:48:23.611480882 +0000 UTC m=+1.076471033 container remove 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:48:23 compute-0 systemd[1]: libpod-conmon-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope: Deactivated successfully.
Oct 07 14:48:23 compute-0 sudo[410869]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:23 compute-0 sudo[411060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:23 compute-0 sudo[411060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:23 compute-0 sudo[411060]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:23 compute-0 sudo[411085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:48:23 compute-0 sudo[411085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:23 compute-0 sudo[411085]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:23 compute-0 sudo[411110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:23 compute-0 sudo[411110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:23 compute-0 sudo[411110]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.895 2 INFO nova.virt.libvirt.driver [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deleting instance files /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_del
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.897 2 INFO nova.virt.libvirt.driver [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deletion of /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_del complete
Oct 07 14:48:23 compute-0 sudo[411135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:48:23 compute-0 sudo[411135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.950 2 INFO nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.951 2 DEBUG oslo.service.loopingcall [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.952 2 DEBUG nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:48:23 compute-0 nova_compute[259550]: 2025-10-07 14:48:23.952 2 DEBUG nova.network.neutron [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:48:24 compute-0 ceph-mon[74295]: pgmap v2584: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.342886474 +0000 UTC m=+0.109431950 container create 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.259749464 +0000 UTC m=+0.026294930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:24 compute-0 systemd[1]: Started libpod-conmon-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope.
Oct 07 14:48:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.570524614 +0000 UTC m=+0.337070090 container init 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 14:48:24 compute-0 nova_compute[259550]: 2025-10-07 14:48:24.573 2 DEBUG nova.network.neutron [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.579464222 +0000 UTC m=+0.346009648 container start 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:48:24 compute-0 happy_fermat[411216]: 167 167
Oct 07 14:48:24 compute-0 systemd[1]: libpod-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope: Deactivated successfully.
Oct 07 14:48:24 compute-0 nova_compute[259550]: 2025-10-07 14:48:24.593 2 INFO nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 0.64 seconds to deallocate network for instance.
Oct 07 14:48:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:48:24 compute-0 nova_compute[259550]: 2025-10-07 14:48:24.645 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:24 compute-0 nova_compute[259550]: 2025-10-07 14:48:24.646 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.673669136 +0000 UTC m=+0.440214572 container attach 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.674837017 +0000 UTC m=+0.441382463 container died 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:48:24 compute-0 nova_compute[259550]: 2025-10-07 14:48:24.709 2 DEBUG oslo_concurrency.processutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b79d2abfc80da078dfc5c57f4056cb6abfedc148aaf92c64e29a8710788c0c8-merged.mount: Deactivated successfully.
Oct 07 14:48:24 compute-0 podman[411200]: 2025-10-07 14:48:24.818348061 +0000 UTC m=+0.584893497 container remove 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:24 compute-0 systemd[1]: libpod-conmon-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope: Deactivated successfully.
Oct 07 14:48:25 compute-0 podman[411261]: 2025-10-07 14:48:25.013672074 +0000 UTC m=+0.046112797 container create 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.016 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.017 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.035 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:48:25 compute-0 systemd[1]: Started libpod-conmon-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope.
Oct 07 14:48:25 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:48:25 compute-0 podman[411261]: 2025-10-07 14:48:24.996545669 +0000 UTC m=+0.028986412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:48:25 compute-0 podman[411261]: 2025-10-07 14:48:25.144555802 +0000 UTC m=+0.176996555 container init 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:48:25 compute-0 podman[411261]: 2025-10-07 14:48:25.157735913 +0000 UTC m=+0.190176626 container start 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 14:48:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:48:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432347890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.183 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 WARNING nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state deleted and task_state None.
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.186 2 WARNING nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state deleted and task_state None.
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-deleted-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.186 2 INFO nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Neutron deleted interface 07e00b31-d9ec-46d0-927f-1d89f6d03bc6; detaching it from the instance and deleting it from the info cache
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.network.neutron [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:25 compute-0 podman[411261]: 2025-10-07 14:48:25.191144381 +0000 UTC m=+0.223585104 container attach 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.203 2 DEBUG oslo_concurrency.processutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.210 2 DEBUG nova.compute.provider_tree [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.241 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Detach interface failed, port_id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6, reason: Instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.246 2 DEBUG nova.scheduler.client.report [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.265 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.291 2 INFO nova.scheduler.client.report [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413
Oct 07 14:48:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1432347890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:25 compute-0 nova_compute[259550]: 2025-10-07 14:48:25.354 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:26 compute-0 nervous_hugle[411279]: {
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_id": 2,
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "type": "bluestore"
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     },
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_id": 1,
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "type": "bluestore"
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     },
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_id": 0,
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:         "type": "bluestore"
Oct 07 14:48:26 compute-0 nervous_hugle[411279]:     }
Oct 07 14:48:26 compute-0 nervous_hugle[411279]: }
Oct 07 14:48:26 compute-0 systemd[1]: libpod-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Deactivated successfully.
Oct 07 14:48:26 compute-0 systemd[1]: libpod-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Consumed 1.101s CPU time.
Oct 07 14:48:26 compute-0 podman[411261]: 2025-10-07 14:48:26.294233881 +0000 UTC m=+1.326674604 container died 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:48:26 compute-0 ceph-mon[74295]: pgmap v2585: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:48:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 159 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct 07 14:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73-merged.mount: Deactivated successfully.
Oct 07 14:48:26 compute-0 podman[411261]: 2025-10-07 14:48:26.943985522 +0000 UTC m=+1.976426245 container remove 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:48:26 compute-0 systemd[1]: libpod-conmon-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Deactivated successfully.
Oct 07 14:48:26 compute-0 sudo[411135]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:48:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:48:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 191fd511-8159-4260-b64d-523697365158 does not exist
Oct 07 14:48:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0def557f-5fcf-43e4-bf8d-1eb01d5d5a02 does not exist
Oct 07 14:48:27 compute-0 sudo[411326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:48:27 compute-0 sudo[411326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:27 compute-0 sudo[411326]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:27 compute-0 sudo[411364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:48:27 compute-0 podman[411350]: 2025-10-07 14:48:27.229880001 +0000 UTC m=+0.060653663 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:48:27 compute-0 sudo[411364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:48:27 compute-0 sudo[411364]: pam_unix(sudo:session): session closed for user root
Oct 07 14:48:27 compute-0 podman[411351]: 2025-10-07 14:48:27.260093054 +0000 UTC m=+0.088468483 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:48:28 compute-0 ceph-mon[74295]: pgmap v2586: 305 pgs: 305 active+clean; 159 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct 07 14:48:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.163 2 DEBUG nova.compute.manager [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.164 2 DEBUG nova.compute.manager [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.165 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.165 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.166 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:48:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.252 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.254 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.255 2 INFO nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Terminating instance
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.256 2 DEBUG nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:48:28 compute-0 kernel: tapd90f9db1-83 (unregistering): left promiscuous mode
Oct 07 14:48:28 compute-0 NetworkManager[44949]: <info>  [1759848508.3236] device (tapd90f9db1-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 ovn_controller[151684]: 2025-10-07T14:48:28Z|01538|binding|INFO|Releasing lport d90f9db1-8372-46fb-93ed-9be2902fe85c from this chassis (sb_readonly=0)
Oct 07 14:48:28 compute-0 ovn_controller[151684]: 2025-10-07T14:48:28Z|01539|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c down in Southbound
Oct 07 14:48:28 compute-0 ovn_controller[151684]: 2025-10-07T14:48:28Z|01540|binding|INFO|Removing iface tapd90f9db1-83 ovn-installed in OVS
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.347 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], port_security=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe48:3908/64 2001:db8::f816:3eff:fe48:3908/64', 'neutron:device_id': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d90f9db1-8372-46fb-93ed-9be2902fe85c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.348 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d90f9db1-8372-46fb-93ed-9be2902fe85c in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 unbound from our chassis
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.349 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.350 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd90a10-566e-4bbb-a49f-c27f99a791f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.350 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 namespace which is not needed anymore
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 07 14:48:28 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 15.080s CPU time.
Oct 07 14:48:28 compute-0 systemd-machined[214580]: Machine qemu-172-instance-0000008a terminated.
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.495 2 INFO nova.virt.libvirt.driver [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance destroyed successfully.
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.496 2 DEBUG nova.objects.instance [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : haproxy version is 2.8.14-c23fe91
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : path to executable is /usr/sbin/haproxy
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : Exiting Master process...
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : Exiting Master process...
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [ALERT]    (409742) : Current worker (409744) exited with code 143 (Terminated)
Oct 07 14:48:28 compute-0 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : All workers exited. Exiting... (0)
Oct 07 14:48:28 compute-0 systemd[1]: libpod-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope: Deactivated successfully.
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.512 2 DEBUG nova.virt.libvirt.vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:47:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:47:27Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.512 2 DEBUG nova.network.os_vif_util [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.514 2 DEBUG nova.network.os_vif_util [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.514 2 DEBUG os_vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd90f9db1-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:28 compute-0 podman[411443]: 2025-10-07 14:48:28.517159947 +0000 UTC m=+0.059646397 container died 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.524 2 INFO os_vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83')
Oct 07 14:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c-userdata-shm.mount: Deactivated successfully.
Oct 07 14:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a24921b3c8e4f08e504680464e3b58519f09b02a877bb0e4246cd4ccca4536-merged.mount: Deactivated successfully.
Oct 07 14:48:28 compute-0 podman[411443]: 2025-10-07 14:48:28.557600862 +0000 UTC m=+0.100087302 container cleanup 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:48:28 compute-0 systemd[1]: libpod-conmon-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope: Deactivated successfully.
Oct 07 14:48:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 978 KiB/s wr, 66 op/s
Oct 07 14:48:28 compute-0 podman[411501]: 2025-10-07 14:48:28.660719974 +0000 UTC m=+0.076391562 container remove 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92178668-3360-4cce-9414-358119271785]: (4, ('Tue Oct  7 02:48:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 (10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c)\n10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c\nTue Oct  7 02:48:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 (10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c)\n10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eac524eb-455f-4e46-85e8-7e3746d43bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 kernel: tap970990f9-70: left promiscuous mode
Oct 07 14:48:28 compute-0 nova_compute[259550]: 2025-10-07 14:48:28.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.693 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2772a4-c674-4d64-90d2-489c8d48a0f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6e671a80-66d8-440a-acb7-9384ffbc3b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.721 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f11aef5a-4ebf-477f-8352-5d4396359bf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.739 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[922a0276-661c-44f3-a2ff-9670a20e6583]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903002, 'reachable_time': 18976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411516, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d970990f9\x2d7a8a\x2d40de\x2d9a55\x2df4c40d657453.mount: Deactivated successfully.
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.744 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:48:28 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.745 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[86e6cbf8-f92d-4ef4-9540-700f20fd30f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.073 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.073 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.198 2 INFO nova.virt.libvirt.driver [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deleting instance files /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_del
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.199 2 INFO nova.virt.libvirt.driver [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deletion of /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_del complete
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.381 2 INFO nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG oslo.service.loopingcall [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:48:29 compute-0 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG nova.network.neutron [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:48:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:30.098 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:30.099 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:48:30 compute-0 ceph-mon[74295]: pgmap v2587: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 978 KiB/s wr, 66 op/s
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.466 2 DEBUG nova.network.neutron [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.499 2 INFO nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 1.12 seconds to deallocate network for instance.
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.570 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.571 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:30 compute-0 nova_compute[259550]: 2025-10-07 14:48:30.635 2 DEBUG oslo_concurrency.processutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 48 op/s
Oct 07 14:48:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:48:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490652022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.123 2 DEBUG oslo_concurrency.processutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.130 2 DEBUG nova.compute.provider_tree [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:48:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/490652022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.166 2 DEBUG nova.scheduler.client.report [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.179 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.181 2 WARNING nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received unexpected event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with vm_state deleted and task_state None.
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.181 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-deleted-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.200 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.228 2 INFO nova.scheduler.client.report [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance c621ddbd-d6b8-461e-9374-4f7e50d0ca5f
Oct 07 14:48:31 compute-0 nova_compute[259550]: 2025-10-07 14:48:31.309 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:48:32 compute-0 ceph-mon[74295]: pgmap v2588: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 48 op/s
Oct 07 14:48:32 compute-0 nova_compute[259550]: 2025-10-07 14:48:32.311 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:48:32 compute-0 nova_compute[259550]: 2025-10-07 14:48:32.312 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:48:32 compute-0 nova_compute[259550]: 2025-10-07 14:48:32.338 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 47 op/s
Oct 07 14:48:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:48:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:48:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:48:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006099620694145992 of space, bias 1.0, pg target 0.18298862082437975 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:48:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:48:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:48:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:48:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:33 compute-0 nova_compute[259550]: 2025-10-07 14:48:33.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:33 compute-0 nova_compute[259550]: 2025-10-07 14:48:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:34 compute-0 ceph-mon[74295]: pgmap v2589: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 47 op/s
Oct 07 14:48:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 57 op/s
Oct 07 14:48:36 compute-0 ceph-mon[74295]: pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 57 op/s
Oct 07 14:48:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 49 op/s
Oct 07 14:48:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:37.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:48:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:38 compute-0 nova_compute[259550]: 2025-10-07 14:48:38.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:38 compute-0 ceph-mon[74295]: pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 49 op/s
Oct 07 14:48:38 compute-0 nova_compute[259550]: 2025-10-07 14:48:38.385 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848503.3836818, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:48:38 compute-0 nova_compute[259550]: 2025-10-07 14:48:38.385 2 INFO nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Stopped (Lifecycle Event)
Oct 07 14:48:38 compute-0 nova_compute[259550]: 2025-10-07 14:48:38.411 2 DEBUG nova.compute.manager [None req-21808f39-ca3a-4265-a3fe-b968ee0b8469 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:38 compute-0 nova_compute[259550]: 2025-10-07 14:48:38.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 43 op/s
Oct 07 14:48:39 compute-0 ceph-mon[74295]: pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 43 op/s
Oct 07 14:48:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.5 KiB/s wr, 22 op/s
Oct 07 14:48:41 compute-0 ceph-mon[74295]: pgmap v2593: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.5 KiB/s wr, 22 op/s
Oct 07 14:48:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.494 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848508.4918177, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.494 2 INFO nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Stopped (Lifecycle Event)
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:43 compute-0 nova_compute[259550]: 2025-10-07 14:48:43.596 2 DEBUG nova.compute.manager [None req-c998466e-40f2-470b-83f6-7e425c5df92a - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:48:43 compute-0 ceph-mon[74295]: pgmap v2594: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Oct 07 14:48:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 853 B/s wr, 9 op/s
Oct 07 14:48:45 compute-0 podman[411543]: 2025-10-07 14:48:45.078773231 +0000 UTC m=+0.064328961 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 07 14:48:45 compute-0 podman[411542]: 2025-10-07 14:48:45.08434749 +0000 UTC m=+0.070524457 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:48:45 compute-0 ceph-mon[74295]: pgmap v2595: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 853 B/s wr, 9 op/s
Oct 07 14:48:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:48 compute-0 ceph-mon[74295]: pgmap v2596: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:48 compute-0 nova_compute[259550]: 2025-10-07 14:48:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:48 compute-0 nova_compute[259550]: 2025-10-07 14:48:48.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:50 compute-0 ceph-mon[74295]: pgmap v2597: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:52 compute-0 ceph-mon[74295]: pgmap v2598: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:48:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:48:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:48:53 compute-0 nova_compute[259550]: 2025-10-07 14:48:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:53 compute-0 nova_compute[259550]: 2025-10-07 14:48:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.595 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:4f:60 10.100.0.2 2001:db8::f816:3eff:fec5:4f60'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec5:4f60/64', 'neutron:device_id': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c9a8228f-4b9d-4f81-919d-b6781c06cebf) old=Port_Binding(mac=['fa:16:3e:c5:4f:60 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:48:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.596 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c9a8228f-4b9d-4f81-919d-b6781c06cebf in datapath 69026eb8-a969-41bf-9300-c17871babf58 updated
Oct 07 14:48:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.597 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69026eb8-a969-41bf-9300-c17871babf58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:48:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9cf94a-d634-4cf1-8d5d-8b3084e94fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:48:53 compute-0 ceph-mon[74295]: pgmap v2599: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:58 compute-0 podman[411584]: 2025-10-07 14:48:58.087051322 +0000 UTC m=+0.063995421 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 07 14:48:58 compute-0 podman[411585]: 2025-10-07 14:48:58.118157331 +0000 UTC m=+0.092428507 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:48:58 compute-0 nova_compute[259550]: 2025-10-07 14:48:58.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:58 compute-0 nova_compute[259550]: 2025-10-07 14:48:58.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:48:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:48:58 compute-0 nova_compute[259550]: 2025-10-07 14:48:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.326 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.327 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.346 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.435 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.435 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.444 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.444 2 INFO nova.compute.claims [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.539 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:48:59 compute-0 nova_compute[259550]: 2025-10-07 14:48:59.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.082 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.083 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.083 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:00 compute-0 ceph-mds[100686]: mds.beacon.cephfs.compute-0.xpofvx missed beacon ack from the monitors
Oct 07 14:49:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:01 compute-0 ceph-mon[74295]: pgmap v2600: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:49:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294263181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.504 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.965s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.512 2 DEBUG nova.compute.provider_tree [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.536 2 DEBUG nova.scheduler.client.report [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.563 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.564 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.613 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.613 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.645 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.677 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.776 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.778 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.778 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating image(s)
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.807 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.837 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.865 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.870 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.962 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.964 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.965 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.965 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.991 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:01 compute-0 nova_compute[259550]: 2025-10-07 14:49:01.995 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f8ed40fc-237a-46e5-9557-c128fe833cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:02 compute-0 nova_compute[259550]: 2025-10-07 14:49:02.043 2 DEBUG nova.policy [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:49:02 compute-0 ceph-mon[74295]: pgmap v2601: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:02 compute-0 ceph-mon[74295]: pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:02 compute-0 ceph-mon[74295]: pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3294263181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:02 compute-0 nova_compute[259550]: 2025-10-07 14:49:02.759 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Successfully created port: b3e49a3d-2116-49cf-832f-f0126541c3aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.615 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Successfully updated port: b3e49a3d-2116-49cf-832f-f0126541c3aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.711 2 DEBUG nova.compute.manager [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.712 2 DEBUG nova.compute.manager [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.712 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.798 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:49:03 compute-0 ceph-mon[74295]: pgmap v2604: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:03 compute-0 nova_compute[259550]: 2025-10-07 14:49:03.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.369 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f8ed40fc-237a-46e5-9557-c128fe833cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.433 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:49:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 43 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 1 op/s
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.885 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.913 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.914 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance network_info: |[{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.915 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.915 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:49:04 compute-0 nova_compute[259550]: 2025-10-07 14:49:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.120 2 DEBUG nova.objects.instance [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.135 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Ensure instance console log exists: /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.137 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.139 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start _get_guest_xml network_info=[{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.144 2 WARNING nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.149 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.150 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.154 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.158 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.158 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.161 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:49:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030086265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.665 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.691 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:05 compute-0 nova_compute[259550]: 2025-10-07 14:49:05.696 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:49:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562497360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:06 compute-0 ceph-mon[74295]: pgmap v2605: 305 pgs: 305 active+clean; 43 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 1 op/s
Oct 07 14:49:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3030086265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.141 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.143 2 DEBUG nova.virt.libvirt.vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.143 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.144 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.145 2 DEBUG nova.objects.instance [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.172 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <uuid>f8ed40fc-237a-46e5-9557-c128fe833cea</uuid>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <name>instance-0000008c</name>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-332055908</nova:name>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:49:05</nova:creationTime>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <nova:port uuid="b3e49a3d-2116-49cf-832f-f0126541c3aa">
Oct 07 14:49:06 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe53:473d" ipVersion="6"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <system>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="serial">f8ed40fc-237a-46e5-9557-c128fe833cea</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="uuid">f8ed40fc-237a-46e5-9557-c128fe833cea</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </system>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <os>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </os>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <features>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </features>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f8ed40fc-237a-46e5-9557-c128fe833cea_disk">
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config">
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </source>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:49:06 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:53:47:3d"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <target dev="tapb3e49a3d-21"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/console.log" append="off"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <video>
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </video>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:49:06 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:49:06 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:49:06 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:49:06 compute-0 nova_compute[259550]: </domain>
Oct 07 14:49:06 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.174 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Preparing to wait for external event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.176 2 DEBUG nova.virt.libvirt.vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.177 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.177 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.178 2 DEBUG os_vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e49a3d-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e49a3d-21, col_values=(('external_ids', {'iface-id': 'b3e49a3d-2116-49cf-832f-f0126541c3aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:47:3d', 'vm-uuid': 'f8ed40fc-237a-46e5-9557-c128fe833cea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:06 compute-0 NetworkManager[44949]: <info>  [1759848546.1877] manager: (tapb3e49a3d-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.194 2 INFO os_vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21')
Oct 07 14:49:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.282 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.283 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.283 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:53:47:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.284 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Using config drive
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.304 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 68 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 857 KiB/s wr, 25 op/s
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.716 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating config drive at /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.721 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15dtvt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.870 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15dtvt" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.895 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:06 compute-0 nova_compute[259550]: 2025-10-07 14:49:06.899 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/562497360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.046 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.047 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.070 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.326 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.326 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deleting local config drive /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config because it was imported into RBD.
Oct 07 14:49:08 compute-0 kernel: tapb3e49a3d-21: entered promiscuous mode
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.3777] manager: (tapb3e49a3d-21): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 ovn_controller[151684]: 2025-10-07T14:49:08Z|01541|binding|INFO|Claiming lport b3e49a3d-2116-49cf-832f-f0126541c3aa for this chassis.
Oct 07 14:49:08 compute-0 ovn_controller[151684]: 2025-10-07T14:49:08Z|01542|binding|INFO|b3e49a3d-2116-49cf-832f-f0126541c3aa: Claiming fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.396 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], port_security=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe53:473d/64', 'neutron:device_id': 'f8ed40fc-237a-46e5-9557-c128fe833cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3e49a3d-2116-49cf-832f-f0126541c3aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.397 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3e49a3d-2116-49cf-832f-f0126541c3aa in datapath 69026eb8-a969-41bf-9300-c17871babf58 bound to our chassis
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.399 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58
Oct 07 14:49:08 compute-0 systemd-machined[214580]: New machine qemu-174-instance-0000008c.
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6733876e-a006-4d0b-8dfc-3bbec74b3e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.414 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69026eb8-a1 in ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69026eb8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[666c85d6-c63c-4c53-b05b-6424b3a8ec40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[414384ea-7a02-41c5-9214-0ebaae1d6704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[295ae2c0-c8af-43db-ab03-98d3544ae494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ceph-mon[74295]: pgmap v2606: 305 pgs: 305 active+clean; 68 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 857 KiB/s wr, 25 op/s
Oct 07 14:49:08 compute-0 systemd-udevd[411953]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[945db05b-88dc-4a58-b6e4-44cd9a213c0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.4632] device (tapb3e49a3d-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.4643] device (tapb3e49a3d-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 ovn_controller[151684]: 2025-10-07T14:49:08Z|01543|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa ovn-installed in OVS
Oct 07 14:49:08 compute-0 ovn_controller[151684]: 2025-10-07T14:49:08Z|01544|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa up in Southbound
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.496 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[82bb5606-2ac0-416d-b193-ce1fefc82c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d00fb-75ed-40a3-8a00-0cd81b8013db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.5029] manager: (tap69026eb8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/617)
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.541 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a002c675-cfd9-4c7a-afe5-a9f15e64e140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.546 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b61b7e4e-def9-4943-a4f6-68332a279fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.5767] device (tap69026eb8-a0): carrier: link connected
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.583 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c3300b-ed3c-4ffc-b472-38e16cae3ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fca98c6c-939c-4839-a9bb-355571d60c92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411987, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0915857-4f40-482c-b2fd-871fd0cfe5bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:4f60'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913214, 'tstamp': 913214}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412000, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6bda8aad-3258-4edb-8732-e4742b9ac5a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412003, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a56c15bd-d87f-4013-aa09-45d2eb7627a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.741 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93084394-a690-49af-b156-93b2480912da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.744 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.744 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 NetworkManager[44949]: <info>  [1759848548.7473] manager: (tap69026eb8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct 07 14:49:08 compute-0 kernel: tap69026eb8-a0: entered promiscuous mode
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.750 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 ovn_controller[151684]: 2025-10-07T14:49:08Z|01545|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.753 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10fd891-a8de-4fd8-9f9c-c2a78e7d5d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.755 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-69026eb8-a969-41bf-9300-c17871babf58
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 69026eb8-a969-41bf-9300-c17871babf58
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:49:08 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.756 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'env', 'PROCESS_TAG=haproxy-69026eb8-a969-41bf-9300-c17871babf58', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69026eb8-a969-41bf-9300-c17871babf58.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:49:08 compute-0 nova_compute[259550]: 2025-10-07 14:49:08.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:09 compute-0 podman[412059]: 2025-10-07 14:49:09.110918941 +0000 UTC m=+0.026056150 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.219 2 DEBUG nova.compute.manager [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.219 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG nova.compute.manager [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Processing event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.393 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.394 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.3933198, f8ed40fc-237a-46e5-9557-c128fe833cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.395 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Started (Lifecycle Event)
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.399 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.403 2 INFO nova.virt.libvirt.driver [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance spawned successfully.
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.404 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.426 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.432 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.435 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.436 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.436 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.437 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.437 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.438 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.462 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.462 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.3942733, f8ed40fc-237a-46e5-9557-c128fe833cea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.462 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Paused (Lifecycle Event)
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.499 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.398714, f8ed40fc-237a-46e5-9557-c128fe833cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.499 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Resumed (Lifecycle Event)
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.502 2 INFO nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 7.73 seconds to spawn the instance on the hypervisor.
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.503 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.517 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.519 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.540 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.575 2 INFO nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 10.18 seconds to build instance.
Oct 07 14:49:09 compute-0 nova_compute[259550]: 2025-10-07 14:49:09.593 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:09 compute-0 podman[412059]: 2025-10-07 14:49:09.967174622 +0000 UTC m=+0.882311801 container create 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:49:10 compute-0 ceph-mon[74295]: pgmap v2607: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:49:10 compute-0 systemd[1]: Started libpod-conmon-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope.
Oct 07 14:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d721598f91972f193adad12221f9b62ad2f40fb31ec50195e4d2557053f117c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:10 compute-0 podman[412059]: 2025-10-07 14:49:10.287178295 +0000 UTC m=+1.202315484 container init 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:49:10 compute-0 podman[412059]: 2025-10-07 14:49:10.296022414 +0000 UTC m=+1.211159593 container start 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:49:10 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : New worker (412080) forked
Oct 07 14:49:10 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : Loading success.
Oct 07 14:49:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.317 2 DEBUG nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.319 2 DEBUG nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.319 2 WARNING nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received unexpected event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with vm_state active and task_state None.
Oct 07 14:49:11 compute-0 nova_compute[259550]: 2025-10-07 14:49:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.015 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:12 compute-0 ceph-mon[74295]: pgmap v2608: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 07 14:49:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:49:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650259811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.516 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.715 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.716 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.859 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:12 compute-0 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:13 compute-0 nova_compute[259550]: 2025-10-07 14:49:13.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2650259811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:13 compute-0 nova_compute[259550]: 2025-10-07 14:49:13.647 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f8ed40fc-237a-46e5-9557-c128fe833cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:49:13 compute-0 nova_compute[259550]: 2025-10-07 14:49:13.648 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:49:13 compute-0 nova_compute[259550]: 2025-10-07 14:49:13.648 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.015 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:14 compute-0 ovn_controller[151684]: 2025-10-07T14:49:14Z|01546|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct 07 14:49:14 compute-0 NetworkManager[44949]: <info>  [1759848554.0410] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct 07 14:49:14 compute-0 NetworkManager[44949]: <info>  [1759848554.0422] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct 07 14:49:14 compute-0 ovn_controller[151684]: 2025-10-07T14:49:14Z|01547|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.448 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.449 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.465 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG nova.compute.manager [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG nova.compute.manager [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.475 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.475 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.489 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:49:14 compute-0 nova_compute[259550]: 2025-10-07 14:49:14.526 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 07 14:49:14 compute-0 ceph-mon[74295]: pgmap v2609: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 07 14:49:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:49:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276797032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:15 compute-0 nova_compute[259550]: 2025-10-07 14:49:15.015 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:15 compute-0 nova_compute[259550]: 2025-10-07 14:49:15.021 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:49:15 compute-0 nova_compute[259550]: 2025-10-07 14:49:15.073 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:49:15 compute-0 nova_compute[259550]: 2025-10-07 14:49:15.106 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:49:15 compute-0 nova_compute[259550]: 2025-10-07 14:49:15.107 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:15 compute-0 ceph-mon[74295]: pgmap v2610: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 07 14:49:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/276797032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:16 compute-0 podman[412136]: 2025-10-07 14:49:16.074334517 +0000 UTC m=+0.058868560 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:49:16 compute-0 podman[412135]: 2025-10-07 14:49:16.083874234 +0000 UTC m=+0.071787827 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:49:16 compute-0 nova_compute[259550]: 2025-10-07 14:49:16.107 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:16 compute-0 nova_compute[259550]: 2025-10-07 14:49:16.108 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:49:16 compute-0 nova_compute[259550]: 2025-10-07 14:49:16.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:16 compute-0 nova_compute[259550]: 2025-10-07 14:49:16.211 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:49:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct 07 14:49:17 compute-0 nova_compute[259550]: 2025-10-07 14:49:17.169 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:49:17 compute-0 nova_compute[259550]: 2025-10-07 14:49:17.170 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:17 compute-0 nova_compute[259550]: 2025-10-07 14:49:17.193 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:17 compute-0 ceph-mon[74295]: pgmap v2611: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct 07 14:49:17 compute-0 nova_compute[259550]: 2025-10-07 14:49:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:18 compute-0 nova_compute[259550]: 2025-10-07 14:49:18.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 970 KiB/s wr, 75 op/s
Oct 07 14:49:19 compute-0 ceph-mon[74295]: pgmap v2612: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 970 KiB/s wr, 75 op/s
Oct 07 14:49:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 07 14:49:21 compute-0 nova_compute[259550]: 2025-10-07 14:49:21.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:21 compute-0 ceph-mon[74295]: pgmap v2613: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 47 op/s
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:49:22
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Oct 07 14:49:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:49:23 compute-0 ovn_controller[151684]: 2025-10-07T14:49:23Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:47:3d 10.100.0.14
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:49:23 compute-0 ovn_controller[151684]: 2025-10-07T14:49:23Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:47:3d 10.100.0.14
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:49:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:49:23 compute-0 nova_compute[259550]: 2025-10-07 14:49:23.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:24 compute-0 ceph-mon[74295]: pgmap v2614: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 47 op/s
Oct 07 14:49:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 113 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Oct 07 14:49:26 compute-0 nova_compute[259550]: 2025-10-07 14:49:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:26 compute-0 ceph-mon[74295]: pgmap v2615: 305 pgs: 305 active+clean; 113 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Oct 07 14:49:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:49:27 compute-0 sudo[412175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:27 compute-0 sudo[412175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:27 compute-0 sudo[412175]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:27 compute-0 sudo[412200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:49:27 compute-0 sudo[412200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:27 compute-0 sudo[412200]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:27 compute-0 sudo[412225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:27 compute-0 sudo[412225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:27 compute-0 sudo[412225]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:27 compute-0 sudo[412250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:49:27 compute-0 sudo[412250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:28 compute-0 sudo[412250]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b51d6463-6aea-429a-b2bb-de1738e2dd14 does not exist
Oct 07 14:49:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b54900a6-be00-43b2-ba61-68eee045047e does not exist
Oct 07 14:49:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 73604d0f-7308-4f33-929c-18770bf0092b does not exist
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:49:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:49:28 compute-0 sudo[412306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:28 compute-0 sudo[412306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:28 compute-0 sudo[412306]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:28 compute-0 nova_compute[259550]: 2025-10-07 14:49:28.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:28 compute-0 sudo[412338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:49:28 compute-0 sudo[412338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:28 compute-0 sudo[412338]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:28 compute-0 podman[412330]: 2025-10-07 14:49:28.321299502 +0000 UTC m=+0.072751309 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 07 14:49:28 compute-0 ceph-mon[74295]: pgmap v2616: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:49:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:49:28 compute-0 podman[412331]: 2025-10-07 14:49:28.356670242 +0000 UTC m=+0.102300721 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 07 14:49:28 compute-0 sudo[412395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:28 compute-0 sudo[412395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:28 compute-0 sudo[412395]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:28 compute-0 sudo[412425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:49:28 compute-0 sudo[412425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.784562728 +0000 UTC m=+0.041210101 container create 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:49:28 compute-0 systemd[1]: Started libpod-conmon-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope.
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.763541428 +0000 UTC m=+0.020188831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.897946451 +0000 UTC m=+0.154593824 container init 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.906286459 +0000 UTC m=+0.162933832 container start 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.910550981 +0000 UTC m=+0.167198354 container attach 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:49:28 compute-0 infallible_brahmagupta[412508]: 167 167
Oct 07 14:49:28 compute-0 systemd[1]: libpod-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope: Deactivated successfully.
Oct 07 14:49:28 compute-0 conmon[412508]: conmon 31b8eed3a41daaaa1fb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope/container/memory.events
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.915658911 +0000 UTC m=+0.172306284 container died 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5233ace8eb59972be32f50ae3db0e647ac9cdbca680df0f9e10d55302defb9-merged.mount: Deactivated successfully.
Oct 07 14:49:28 compute-0 podman[412491]: 2025-10-07 14:49:28.957816644 +0000 UTC m=+0.214464077 container remove 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:49:28 compute-0 systemd[1]: libpod-conmon-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope: Deactivated successfully.
Oct 07 14:49:29 compute-0 podman[412532]: 2025-10-07 14:49:29.133438336 +0000 UTC m=+0.042026660 container create cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:49:29 compute-0 systemd[1]: Started libpod-conmon-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope.
Oct 07 14:49:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:29 compute-0 podman[412532]: 2025-10-07 14:49:29.115857067 +0000 UTC m=+0.024445431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:29 compute-0 podman[412532]: 2025-10-07 14:49:29.22705931 +0000 UTC m=+0.135647654 container init cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:49:29 compute-0 podman[412532]: 2025-10-07 14:49:29.233254367 +0000 UTC m=+0.141842691 container start cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:49:29 compute-0 podman[412532]: 2025-10-07 14:49:29.238608784 +0000 UTC m=+0.147197138 container attach cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:49:30 compute-0 silly_elgamal[412548]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:49:30 compute-0 silly_elgamal[412548]: --> relative data size: 1.0
Oct 07 14:49:30 compute-0 silly_elgamal[412548]: --> All data devices are unavailable
Oct 07 14:49:30 compute-0 ceph-mon[74295]: pgmap v2617: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:49:30 compute-0 systemd[1]: libpod-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Deactivated successfully.
Oct 07 14:49:30 compute-0 podman[412532]: 2025-10-07 14:49:30.389108546 +0000 UTC m=+1.297696880 container died cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:49:30 compute-0 systemd[1]: libpod-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Consumed 1.098s CPU time.
Oct 07 14:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4-merged.mount: Deactivated successfully.
Oct 07 14:49:30 compute-0 podman[412532]: 2025-10-07 14:49:30.466409702 +0000 UTC m=+1.374998026 container remove cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:49:30 compute-0 systemd[1]: libpod-conmon-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Deactivated successfully.
Oct 07 14:49:30 compute-0 sudo[412425]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:30 compute-0 sudo[412591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:30 compute-0 sudo[412591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:30 compute-0 sudo[412591]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:30 compute-0 sudo[412616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:49:30 compute-0 sudo[412616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:30 compute-0 sudo[412616]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:49:30 compute-0 sudo[412641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:30 compute-0 sudo[412641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:30 compute-0 sudo[412641]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:30 compute-0 sudo[412666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:49:30 compute-0 sudo[412666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.102482633 +0000 UTC m=+0.048614545 container create 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.077222823 +0000 UTC m=+0.023354745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:31 compute-0 systemd[1]: Started libpod-conmon-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope.
Oct 07 14:49:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:31 compute-0 nova_compute[259550]: 2025-10-07 14:49:31.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.268588319 +0000 UTC m=+0.214720251 container init 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.276059436 +0000 UTC m=+0.222191328 container start 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:49:31 compute-0 vigorous_hellman[412746]: 167 167
Oct 07 14:49:31 compute-0 systemd[1]: libpod-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope: Deactivated successfully.
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.285520111 +0000 UTC m=+0.231652033 container attach 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.285872209 +0000 UTC m=+0.232004111 container died 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:49:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ccce8dfb7b84a1ef09a73dfd8e1c977058d619558a90fae3cb5c886c08802f3-merged.mount: Deactivated successfully.
Oct 07 14:49:31 compute-0 podman[412729]: 2025-10-07 14:49:31.525570344 +0000 UTC m=+0.471702246 container remove 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:49:31 compute-0 systemd[1]: libpod-conmon-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope: Deactivated successfully.
Oct 07 14:49:31 compute-0 podman[412772]: 2025-10-07 14:49:31.727298267 +0000 UTC m=+0.072684408 container create 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:49:31 compute-0 podman[412772]: 2025-10-07 14:49:31.676665893 +0000 UTC m=+0.022052024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:31 compute-0 systemd[1]: Started libpod-conmon-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope.
Oct 07 14:49:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:32 compute-0 podman[412772]: 2025-10-07 14:49:32.016467216 +0000 UTC m=+0.361853357 container init 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:49:32 compute-0 podman[412772]: 2025-10-07 14:49:32.023907772 +0000 UTC m=+0.369293883 container start 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:49:32 compute-0 podman[412772]: 2025-10-07 14:49:32.130217478 +0000 UTC m=+0.475603589 container attach 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:49:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:49:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:49:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:49:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:49:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:49:32 compute-0 sad_wiles[412788]: {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     "0": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "devices": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "/dev/loop3"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             ],
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_name": "ceph_lv0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_size": "21470642176",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "name": "ceph_lv0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "tags": {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_name": "ceph",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.crush_device_class": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.encrypted": "0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_id": "0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.vdo": "0"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             },
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "vg_name": "ceph_vg0"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         }
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     ],
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     "1": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "devices": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "/dev/loop4"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             ],
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_name": "ceph_lv1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_size": "21470642176",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "name": "ceph_lv1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "tags": {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_name": "ceph",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.crush_device_class": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.encrypted": "0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_id": "1",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.vdo": "0"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             },
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "vg_name": "ceph_vg1"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         }
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     ],
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     "2": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "devices": [
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "/dev/loop5"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             ],
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_name": "ceph_lv2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_size": "21470642176",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "name": "ceph_lv2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "tags": {
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.cluster_name": "ceph",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.crush_device_class": "",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.encrypted": "0",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osd_id": "2",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:                 "ceph.vdo": "0"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             },
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "type": "block",
Oct 07 14:49:32 compute-0 sad_wiles[412788]:             "vg_name": "ceph_vg2"
Oct 07 14:49:32 compute-0 sad_wiles[412788]:         }
Oct 07 14:49:32 compute-0 sad_wiles[412788]:     ]
Oct 07 14:49:32 compute-0 sad_wiles[412788]: }
Oct 07 14:49:32 compute-0 ceph-mon[74295]: pgmap v2618: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:49:32 compute-0 systemd[1]: libpod-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope: Deactivated successfully.
Oct 07 14:49:32 compute-0 podman[412772]: 2025-10-07 14:49:32.792877051 +0000 UTC m=+1.138263162 container died 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:49:32 compute-0 nova_compute[259550]: 2025-10-07 14:49:32.980 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:32 compute-0 nova_compute[259550]: 2025-10-07 14:49:32.982 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:32 compute-0 nova_compute[259550]: 2025-10-07 14:49:32.997 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a-merged.mount: Deactivated successfully.
Oct 07 14:49:33 compute-0 podman[412772]: 2025-10-07 14:49:33.035889754 +0000 UTC m=+1.381275865 container remove 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:49:33 compute-0 systemd[1]: libpod-conmon-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope: Deactivated successfully.
Oct 07 14:49:33 compute-0 sudo[412666]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.082 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.083 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.094 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.095 2 INFO nova.compute.claims [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:49:33 compute-0 sudo[412812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:33 compute-0 sudo[412812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:33 compute-0 sudo[412812]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:33 compute-0 sudo[412837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:49:33 compute-0 sudo[412837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:33 compute-0 sudo[412837]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.238 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:33 compute-0 sudo[412862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:33 compute-0 sudo[412862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:33 compute-0 sudo[412862]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:33 compute-0 sudo[412888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:49:33 compute-0 sudo[412888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.682455084 +0000 UTC m=+0.043062214 container create 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:49:33 compute-0 systemd[1]: Started libpod-conmon-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope.
Oct 07 14:49:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:49:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931066364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.754 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.663392821 +0000 UTC m=+0.023999981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.763978001 +0000 UTC m=+0.124585151 container init 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.763 2 DEBUG nova.compute.provider_tree [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.771823157 +0000 UTC m=+0.132430287 container start 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.776463397 +0000 UTC m=+0.137070567 container attach 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:49:33 compute-0 upbeat_cerf[412989]: 167 167
Oct 07 14:49:33 compute-0 systemd[1]: libpod-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope: Deactivated successfully.
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.778399534 +0000 UTC m=+0.139006684 container died 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 07 14:49:33 compute-0 ceph-mon[74295]: pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:49:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:49:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:49:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3931066364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e87213fe89ff7cc3f43911b8920999c37256905e3e42c357446da86dfaaf3d1-merged.mount: Deactivated successfully.
Oct 07 14:49:33 compute-0 podman[412972]: 2025-10-07 14:49:33.83170864 +0000 UTC m=+0.192315770 container remove 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:49:33 compute-0 systemd[1]: libpod-conmon-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope: Deactivated successfully.
Oct 07 14:49:33 compute-0 nova_compute[259550]: 2025-10-07 14:49:33.984 2 DEBUG nova.scheduler.client.report [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.019 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.020 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:49:34 compute-0 podman[413014]: 2025-10-07 14:49:34.025912713 +0000 UTC m=+0.043718809 container create 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.067 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.069 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:49:34 compute-0 systemd[1]: Started libpod-conmon-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope.
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.093 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:49:34 compute-0 podman[413014]: 2025-10-07 14:49:34.009021812 +0000 UTC m=+0.026827938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:49:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.114 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:49:34 compute-0 podman[413014]: 2025-10-07 14:49:34.135984268 +0000 UTC m=+0.153790394 container init 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:49:34 compute-0 podman[413014]: 2025-10-07 14:49:34.147798549 +0000 UTC m=+0.165604645 container start 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:49:34 compute-0 podman[413014]: 2025-10-07 14:49:34.152110361 +0000 UTC m=+0.169916487 container attach 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.210 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.212 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.212 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating image(s)
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.244 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.270 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.294 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.299 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.386 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.388 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.388 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.389 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.415 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.419 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.837 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:34 compute-0 nova_compute[259550]: 2025-10-07 14:49:34.908 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.021 2 DEBUG nova.objects.instance [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.028 2 DEBUG nova.policy [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.053 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.054 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Ensure instance console log exists: /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.054 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.055 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:35 compute-0 nova_compute[259550]: 2025-10-07 14:49:35.055 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:35 compute-0 magical_shtern[413029]: {
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_id": 2,
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "type": "bluestore"
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     },
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_id": 1,
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "type": "bluestore"
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     },
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_id": 0,
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:49:35 compute-0 magical_shtern[413029]:         "type": "bluestore"
Oct 07 14:49:35 compute-0 magical_shtern[413029]:     }
Oct 07 14:49:35 compute-0 magical_shtern[413029]: }
Oct 07 14:49:35 compute-0 systemd[1]: libpod-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Deactivated successfully.
Oct 07 14:49:35 compute-0 systemd[1]: libpod-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Consumed 1.118s CPU time.
Oct 07 14:49:35 compute-0 podman[413228]: 2025-10-07 14:49:35.3277274 +0000 UTC m=+0.027268868 container died 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b-merged.mount: Deactivated successfully.
Oct 07 14:49:35 compute-0 podman[413228]: 2025-10-07 14:49:35.392672742 +0000 UTC m=+0.092214170 container remove 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:49:35 compute-0 systemd[1]: libpod-conmon-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Deactivated successfully.
Oct 07 14:49:35 compute-0 sudo[412888]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:49:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:49:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e5d2f3a-73b8-401e-ac23-e21abf915910 does not exist
Oct 07 14:49:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4eb57069-362a-4acd-b653-7a8adb223785 does not exist
Oct 07 14:49:35 compute-0 sudo[413241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:49:35 compute-0 sudo[413241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:35 compute-0 sudo[413241]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:35 compute-0 sudo[413266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:49:35 compute-0 sudo[413266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:49:35 compute-0 sudo[413266]: pam_unix(sudo:session): session closed for user root
Oct 07 14:49:36 compute-0 nova_compute[259550]: 2025-10-07 14:49:36.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:36 compute-0 ceph-mon[74295]: pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:49:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:49:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.585 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:49:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.586 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:49:36 compute-0 nova_compute[259550]: 2025-10-07 14:49:36.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:36 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.587 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 132 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 940 KiB/s wr, 38 op/s
Oct 07 14:49:36 compute-0 nova_compute[259550]: 2025-10-07 14:49:36.671 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Successfully created port: b6c3792a-487a-43d7-969c-5f2b969e1390 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:49:38 compute-0 nova_compute[259550]: 2025-10-07 14:49:38.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:38 compute-0 ceph-mon[74295]: pgmap v2621: 305 pgs: 305 active+clean; 132 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 940 KiB/s wr, 38 op/s
Oct 07 14:49:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:39 compute-0 ceph-mon[74295]: pgmap v2622: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.047 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Successfully updated port: b6c3792a-487a-43d7-969c-5f2b969e1390 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.076 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.076 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.077 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.161 2 DEBUG nova.compute.manager [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.162 2 DEBUG nova.compute.manager [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.162 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:40 compute-0 nova_compute[259550]: 2025-10-07 14:49:40.252 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:49:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:41 compute-0 nova_compute[259550]: 2025-10-07 14:49:41.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:41 compute-0 ceph-mon[74295]: pgmap v2623: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.633 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.660 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.661 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance network_info: |[{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.662 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.662 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.665 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start _get_guest_xml network_info=[{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.673 2 WARNING nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:49:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.683 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.684 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.689 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.690 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.690 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:49:42 compute-0 nova_compute[259550]: 2025-10-07 14:49:42.696 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:49:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289082590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.163 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.186 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.191 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:49:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46692623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.659 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.661 2 DEBUG nova.virt.libvirt.vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:34Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.662 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.663 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.664 2 DEBUG nova.objects.instance [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.682 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <uuid>6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</uuid>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <name>instance-0000008d</name>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:name>tempest-TestGettingAddress-server-1636457952</nova:name>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:49:42</nova:creationTime>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <nova:port uuid="b6c3792a-487a-43d7-969c-5f2b969e1390">
Oct 07 14:49:43 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee2:42c6" ipVersion="6"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <system>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="serial">6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="uuid">6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </system>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <os>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </os>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <features>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </features>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk">
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config">
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </source>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:49:43 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e2:42:c6"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <target dev="tapb6c3792a-48"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/console.log" append="off"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <video>
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </video>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:49:43 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:49:43 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:49:43 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:49:43 compute-0 nova_compute[259550]: </domain>
Oct 07 14:49:43 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Preparing to wait for external event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.685 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.686 2 DEBUG nova.virt.libvirt.vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:34Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.686 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.687 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.687 2 DEBUG os_vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6c3792a-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6c3792a-48, col_values=(('external_ids', {'iface-id': 'b6c3792a-487a-43d7-969c-5f2b969e1390', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:42:c6', 'vm-uuid': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:43 compute-0 NetworkManager[44949]: <info>  [1759848583.6990] manager: (tapb6c3792a-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/621)
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.710 2 INFO os_vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48')
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.766 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.766 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.767 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e2:42:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.767 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Using config drive
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.787 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:43 compute-0 ceph-mon[74295]: pgmap v2624: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/289082590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/46692623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.911 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.912 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:43 compute-0 nova_compute[259550]: 2025-10-07 14:49:43.926 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:44 compute-0 nova_compute[259550]: 2025-10-07 14:49:44.572 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating config drive at /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config
Oct 07 14:49:44 compute-0 nova_compute[259550]: 2025-10-07 14:49:44.578 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar2rfiix execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:44 compute-0 nova_compute[259550]: 2025-10-07 14:49:44.727 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar2rfiix" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:44 compute-0 nova_compute[259550]: 2025-10-07 14:49:44.755 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:49:44 compute-0 nova_compute[259550]: 2025-10-07 14:49:44.759 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:49:45 compute-0 ceph-mon[74295]: pgmap v2625: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.095 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.096 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deleting local config drive /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config because it was imported into RBD.
Oct 07 14:49:46 compute-0 kernel: tapb6c3792a-48: entered promiscuous mode
Oct 07 14:49:46 compute-0 ovn_controller[151684]: 2025-10-07T14:49:46Z|01548|binding|INFO|Claiming lport b6c3792a-487a-43d7-969c-5f2b969e1390 for this chassis.
Oct 07 14:49:46 compute-0 ovn_controller[151684]: 2025-10-07T14:49:46Z|01549|binding|INFO|b6c3792a-487a-43d7-969c-5f2b969e1390: Claiming fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6
Oct 07 14:49:46 compute-0 NetworkManager[44949]: <info>  [1759848586.1657] manager: (tapb6c3792a-48): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.172 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], port_security=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fee2:42c6/64', 'neutron:device_id': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b6c3792a-487a-43d7-969c-5f2b969e1390) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.173 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b6c3792a-487a-43d7-969c-5f2b969e1390 in datapath 69026eb8-a969-41bf-9300-c17871babf58 bound to our chassis
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.174 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58
Oct 07 14:49:46 compute-0 ovn_controller[151684]: 2025-10-07T14:49:46Z|01550|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 up in Southbound
Oct 07 14:49:46 compute-0 ovn_controller[151684]: 2025-10-07T14:49:46Z|01551|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 ovn-installed in OVS
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.194 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0fc3d5-c4f1-4a7a-a033-59307ed587ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 systemd-machined[214580]: New machine qemu-175-instance-0000008d.
Oct 07 14:49:46 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.229 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c521452a-5cf6-4795-acaf-51a1d4b164b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.232 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc6b008-753a-40a9-99bc-4b988d749e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 systemd-udevd[413451]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:49:46 compute-0 NetworkManager[44949]: <info>  [1759848586.2551] device (tapb6c3792a-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:49:46 compute-0 NetworkManager[44949]: <info>  [1759848586.2561] device (tapb6c3792a-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.270 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9597fe75-1a98-42c5-bced-559a47958474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 podman[413425]: 2025-10-07 14:49:46.275392459 +0000 UTC m=+0.075005464 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 07 14:49:46 compute-0 podman[413423]: 2025-10-07 14:49:46.277710234 +0000 UTC m=+0.069679236 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a7161e-ce15-42e5-bf20-4e944735063a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413474, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b96c7d18-6e8d-4e5e-877e-4abb0bb2dc4d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913226, 'tstamp': 913226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413477, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913229, 'tstamp': 913229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413477, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.314 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.317 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:49:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.413 2 DEBUG nova.compute.manager [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.413 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:46 compute-0 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG nova.compute.manager [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Processing event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:49:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.256 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.2561026, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.256 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Started (Lifecycle Event)
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.259 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.262 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.265 2 INFO nova.virt.libvirt.driver [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance spawned successfully.
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.266 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.277 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.280 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.337 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.256277, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Paused (Lifecycle Event)
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.344 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.344 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.345 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.345 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.346 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.346 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.467 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.2611039, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.467 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Resumed (Lifecycle Event)
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.501 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.506 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.519 2 INFO nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 13.31 seconds to spawn the instance on the hypervisor.
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.519 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.601 2 INFO nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 14.55 seconds to build instance.
Oct 07 14:49:47 compute-0 nova_compute[259550]: 2025-10-07 14:49:47.621 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:47 compute-0 ceph-mon[74295]: pgmap v2626: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.496 2 DEBUG nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.496 2 WARNING nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received unexpected event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with vm_state active and task_state None.
Oct 07 14:49:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 94 op/s
Oct 07 14:49:48 compute-0 nova_compute[259550]: 2025-10-07 14:49:48.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:49 compute-0 ceph-mon[74295]: pgmap v2627: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 94 op/s
Oct 07 14:49:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:49:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:51 compute-0 ceph-mon[74295]: pgmap v2628: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:49:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.157 2 DEBUG nova.compute.manager [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.157 2 DEBUG nova.compute.manager [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:53 compute-0 nova_compute[259550]: 2025-10-07 14:49:53.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:53 compute-0 ceph-mon[74295]: pgmap v2629: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:49:54 compute-0 nova_compute[259550]: 2025-10-07 14:49:54.542 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:49:54 compute-0 nova_compute[259550]: 2025-10-07 14:49:54.543 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:49:54 compute-0 nova_compute[259550]: 2025-10-07 14:49:54.577 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:49:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 07 14:49:55 compute-0 ceph-mon[74295]: pgmap v2630: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 07 14:49:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:49:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 07 14:49:57 compute-0 ceph-mon[74295]: pgmap v2631: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 07 14:49:58 compute-0 nova_compute[259550]: 2025-10-07 14:49:58.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:58 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 07 14:49:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 174 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 84 op/s
Oct 07 14:49:58 compute-0 nova_compute[259550]: 2025-10-07 14:49:58.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:49:59 compute-0 podman[413521]: 2025-10-07 14:49:59.093134806 +0000 UTC m=+0.081853026 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:49:59 compute-0 podman[413522]: 2025-10-07 14:49:59.105021848 +0000 UTC m=+0.088847342 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:49:59 compute-0 ovn_controller[151684]: 2025-10-07T14:49:59Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:42:c6 10.100.0.8
Oct 07 14:49:59 compute-0 ovn_controller[151684]: 2025-10-07T14:49:59Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:42:c6 10.100.0.8
Oct 07 14:49:59 compute-0 nova_compute[259550]: 2025-10-07 14:49:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:49:59 compute-0 ceph-mon[74295]: pgmap v2632: 305 pgs: 305 active+clean; 174 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 84 op/s
Oct 07 14:50:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.084 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.085 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct 07 14:50:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:01 compute-0 nova_compute[259550]: 2025-10-07 14:50:01.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:02 compute-0 ceph-mon[74295]: pgmap v2633: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct 07 14:50:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct 07 14:50:03 compute-0 nova_compute[259550]: 2025-10-07 14:50:03.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:03 compute-0 nova_compute[259550]: 2025-10-07 14:50:03.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:04 compute-0 ceph-mon[74295]: pgmap v2634: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct 07 14:50:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:04 compute-0 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:04 compute-0 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:04 compute-0 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:04 compute-0 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:50:06 compute-0 ceph-mon[74295]: pgmap v2635: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:06 compute-0 nova_compute[259550]: 2025-10-07 14:50:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:08 compute-0 nova_compute[259550]: 2025-10-07 14:50:08.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:08 compute-0 ceph-mon[74295]: pgmap v2636: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:08 compute-0 nova_compute[259550]: 2025-10-07 14:50:08.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 ceph-mon[74295]: pgmap v2637: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.476 2 DEBUG nova.compute.manager [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG nova.compute.manager [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.565 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.565 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.566 2 INFO nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Terminating instance
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.567 2 DEBUG nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:50:10 compute-0 kernel: tapb6c3792a-48 (unregistering): left promiscuous mode
Oct 07 14:50:10 compute-0 NetworkManager[44949]: <info>  [1759848610.6371] device (tapb6c3792a-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 ovn_controller[151684]: 2025-10-07T14:50:10Z|01552|binding|INFO|Releasing lport b6c3792a-487a-43d7-969c-5f2b969e1390 from this chassis (sb_readonly=0)
Oct 07 14:50:10 compute-0 ovn_controller[151684]: 2025-10-07T14:50:10Z|01553|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 down in Southbound
Oct 07 14:50:10 compute-0 ovn_controller[151684]: 2025-10-07T14:50:10Z|01554|binding|INFO|Removing iface tapb6c3792a-48 ovn-installed in OVS
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.669 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], port_security=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fee2:42c6/64', 'neutron:device_id': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b6c3792a-487a-43d7-969c-5f2b969e1390) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b6c3792a-487a-43d7-969c-5f2b969e1390 in datapath 69026eb8-a969-41bf-9300-c17871babf58 unbound from our chassis
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.671 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.691 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab2b4ea-b1b6-41db-a9b5-0327838aa780]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 07 14:50:10 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 13.624s CPU time.
Oct 07 14:50:10 compute-0 systemd-machined[214580]: Machine qemu-175-instance-0000008d terminated.
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5af3368a-3ba6-442d-a2a3-5215d04dd20b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.727 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2f7ff5-d238-4f41-beed-b4a2eab0c5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.762 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[87eacac8-c9f1-4f36-b352-b8d8644e7c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc8106-fbe6-4874-8531-0b68215d42a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413576, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.802 2 INFO nova.virt.libvirt.driver [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance destroyed successfully.
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.803 2 DEBUG nova.objects.instance [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54c60d3b-e914-4f3d-a035-bd68ede93a0e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913226, 'tstamp': 913226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413582, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913229, 'tstamp': 913229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413582, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.814 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.814 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.828 2 DEBUG nova.virt.libvirt.vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:49:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:49:47Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.830 2 DEBUG nova.network.os_vif_util [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.831 2 DEBUG nova.network.os_vif_util [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.831 2 DEBUG os_vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6c3792a-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.839 2 INFO os_vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48')
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.859 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:50:10 compute-0 nova_compute[259550]: 2025-10-07 14:50:10.861 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.300 2 INFO nova.virt.libvirt.driver [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deleting instance files /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_del
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.301 2 INFO nova.virt.libvirt.driver [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deletion of /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_del complete
Oct 07 14:50:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.350 2 INFO nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.352 2 DEBUG oslo.service.loopingcall [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.352 2 DEBUG nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:50:11 compute-0 nova_compute[259550]: 2025-10-07 14:50:11.353 2 DEBUG nova.network.neutron [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.003 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.004 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.022 2 DEBUG nova.network.neutron [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.024 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.037 2 INFO nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 0.68 seconds to deallocate network for instance.
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.074 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.075 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.144 2 DEBUG oslo_concurrency.processutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:12 compute-0 ceph-mon[74295]: pgmap v2638: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.556 2 DEBUG nova.compute.manager [req-fd779f44-3d9b-4e34-bcb5-303522981ee1 req-c89e1209-4bb5-4a1f-af11-9ce3bdc12e97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-deleted-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:50:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366973841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.607 2 DEBUG oslo_concurrency.processutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.613 2 DEBUG nova.compute.provider_tree [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.631 2 DEBUG nova.scheduler.client.report [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.653 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.677 2 INFO nova.scheduler.client.report [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5
Oct 07 14:50:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 921 KiB/s wr, 45 op/s
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.737 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.937 2 DEBUG nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.938 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.939 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.939 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.940 2 DEBUG nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.940 2 WARNING nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received unexpected event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with vm_state deleted and task_state None.
Oct 07 14:50:12 compute-0 nova_compute[259550]: 2025-10-07 14:50:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.008 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.009 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1366973841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:50:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845816187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.470 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.550 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.551 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.710 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.711 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3421MB free_disk=59.89718246459961GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.712 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.712 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f8ed40fc-237a-46e5-9557-c128fe833cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:50:13 compute-0 nova_compute[259550]: 2025-10-07 14:50:13.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.070 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.071 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.071 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.072 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.072 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.073 2 INFO nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Terminating instance
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.075 2 DEBUG nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:50:14 compute-0 kernel: tapb3e49a3d-21 (unregistering): left promiscuous mode
Oct 07 14:50:14 compute-0 NetworkManager[44949]: <info>  [1759848614.1413] device (tapb3e49a3d-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:50:14 compute-0 ovn_controller[151684]: 2025-10-07T14:50:14Z|01555|binding|INFO|Releasing lport b3e49a3d-2116-49cf-832f-f0126541c3aa from this chassis (sb_readonly=0)
Oct 07 14:50:14 compute-0 ovn_controller[151684]: 2025-10-07T14:50:14Z|01556|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa down in Southbound
Oct 07 14:50:14 compute-0 ovn_controller[151684]: 2025-10-07T14:50:14Z|01557|binding|INFO|Removing iface tapb3e49a3d-21 ovn-installed in OVS
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.157 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], port_security=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe53:473d/64', 'neutron:device_id': 'f8ed40fc-237a-46e5-9557-c128fe833cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3e49a3d-2116-49cf-832f-f0126541c3aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.158 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3e49a3d-2116-49cf-832f-f0126541c3aa in datapath 69026eb8-a969-41bf-9300-c17871babf58 unbound from our chassis
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.159 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69026eb8-a969-41bf-9300-c17871babf58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c1c88b-7798-4362-b10a-0671723b37ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.161 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 namespace which is not needed anymore
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 07 14:50:14 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 15.135s CPU time.
Oct 07 14:50:14 compute-0 systemd-machined[214580]: Machine qemu-174-instance-0000008c terminated.
Oct 07 14:50:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:50:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3725484053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : haproxy version is 2.8.14-c23fe91
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : path to executable is /usr/sbin/haproxy
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : Exiting Master process...
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : Exiting Master process...
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [ALERT]    (412078) : Current worker (412080) exited with code 143 (Terminated)
Oct 07 14:50:14 compute-0 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : All workers exited. Exiting... (0)
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 systemd[1]: libpod-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope: Deactivated successfully.
Oct 07 14:50:14 compute-0 conmon[412072]: conmon 08b45ab11c8c74752dfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope/container/memory.events
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.308 2 INFO nova.virt.libvirt.driver [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance destroyed successfully.
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.308 2 DEBUG nova.objects.instance [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:50:14 compute-0 podman[413698]: 2025-10-07 14:50:14.310395737 +0000 UTC m=+0.049158559 container died 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.317 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.322 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.339 2 DEBUG nova.virt.libvirt.vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:49:09Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.340 2 DEBUG nova.network.os_vif_util [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d721598f91972f193adad12221f9b62ad2f40fb31ec50195e4d2557053f117c-merged.mount: Deactivated successfully.
Oct 07 14:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848-userdata-shm.mount: Deactivated successfully.
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.341 2 DEBUG nova.network.os_vif_util [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.341 2 DEBUG os_vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e49a3d-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.346 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 podman[413698]: 2025-10-07 14:50:14.350211802 +0000 UTC m=+0.088974624 container cleanup 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.352 2 INFO os_vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21')
Oct 07 14:50:14 compute-0 ceph-mon[74295]: pgmap v2639: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 921 KiB/s wr, 45 op/s
Oct 07 14:50:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1845816187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3725484053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:14 compute-0 systemd[1]: libpod-conmon-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope: Deactivated successfully.
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.390 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.391 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:14 compute-0 podman[413736]: 2025-10-07 14:50:14.426156527 +0000 UTC m=+0.052816256 container remove 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.433 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[532cfdc0-f897-485b-b316-1aee31c3689f]: (4, ('Tue Oct  7 02:50:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 (08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848)\n08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848\nTue Oct  7 02:50:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 (08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848)\n08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.435 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d3de90-bce4-4e6f-aec7-a534a86dac24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 kernel: tap69026eb8-a0: left promiscuous mode
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.442 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6db400fd-960b-423e-903b-661149ab240d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc1b30b-0adb-4a4c-949b-42d0be9b2f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f409d21-1969-44d2-a529-6b88bbd83ef8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b92baa-bcfc-4579-a701-748d6d739287]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913205, 'reachable_time': 22125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413770, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.491 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:50:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.491 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7711c11b-1006-4d99-8d93-7c6c67928c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:50:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d69026eb8\x2da969\x2d41bf\x2d9300\x2dc17871babf58.mount: Deactivated successfully.
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.635 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.639 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:50:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 141 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 922 KiB/s wr, 64 op/s
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.805 2 INFO nova.virt.libvirt.driver [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deleting instance files /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea_del
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.807 2 INFO nova.virt.libvirt.driver [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deletion of /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea_del complete
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.888 2 INFO nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.889 2 DEBUG oslo.service.loopingcall [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.890 2 DEBUG nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:50:14 compute-0 nova_compute[259550]: 2025-10-07 14:50:14.890 2 DEBUG nova.network.neutron [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:50:15 compute-0 nova_compute[259550]: 2025-10-07 14:50:15.599 2 DEBUG nova.network.neutron [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:50:15 compute-0 nova_compute[259550]: 2025-10-07 14:50:15.617 2 INFO nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 0.73 seconds to deallocate network for instance.
Oct 07 14:50:15 compute-0 nova_compute[259550]: 2025-10-07 14:50:15.658 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:15 compute-0 nova_compute[259550]: 2025-10-07 14:50:15.658 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:15 compute-0 nova_compute[259550]: 2025-10-07 14:50:15.726 2 DEBUG oslo_concurrency.processutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:50:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612406044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.196 2 DEBUG oslo_concurrency.processutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.206 2 DEBUG nova.compute.provider_tree [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.227 2 DEBUG nova.scheduler.client.report [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.247 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.270 2 INFO nova.scheduler.client.report [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance f8ed40fc-237a-46e5-9557-c128fe833cea
Oct 07 14:50:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.343 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:16 compute-0 ceph-mon[74295]: pgmap v2640: 305 pgs: 305 active+clean; 141 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 922 KiB/s wr, 64 op/s
Oct 07 14:50:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3612406044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.374 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.375 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.396 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.396 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.397 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.397 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.400 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.400 2 WARNING nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received unexpected event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with vm_state active and task_state deleting.
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.420 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:50:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct 07 14:50:16 compute-0 nova_compute[259550]: 2025-10-07 14:50:16.699 2 DEBUG nova.compute.manager [req-791c0356-eeef-4b10-9862-49f2cffe2d91 req-cf181dfd-709b-410d-87ef-faead0f39e56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-deleted-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:17 compute-0 podman[413794]: 2025-10-07 14:50:17.068894749 +0000 UTC m=+0.055746886 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:50:17 compute-0 podman[413795]: 2025-10-07 14:50:17.074669336 +0000 UTC m=+0.059132935 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:50:17 compute-0 nova_compute[259550]: 2025-10-07 14:50:17.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:50:18 compute-0 nova_compute[259550]: 2025-10-07 14:50:18.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:18 compute-0 ceph-mon[74295]: pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct 07 14:50:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 58 op/s
Oct 07 14:50:19 compute-0 nova_compute[259550]: 2025-10-07 14:50:19.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:20 compute-0 ceph-mon[74295]: pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 58 op/s
Oct 07 14:50:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 57 op/s
Oct 07 14:50:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:22 compute-0 ceph-mon[74295]: pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 57 op/s
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:50:22
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'vms', '.rgw.root']
Oct 07 14:50:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:50:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:50:23 compute-0 nova_compute[259550]: 2025-10-07 14:50:23.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:24 compute-0 nova_compute[259550]: 2025-10-07 14:50:24.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:24 compute-0 ceph-mon[74295]: pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct 07 14:50:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct 07 14:50:24 compute-0 nova_compute[259550]: 2025-10-07 14:50:24.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:24 compute-0 nova_compute[259550]: 2025-10-07 14:50:24.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:25 compute-0 nova_compute[259550]: 2025-10-07 14:50:25.802 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848610.8006272, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:50:25 compute-0 nova_compute[259550]: 2025-10-07 14:50:25.802 2 INFO nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Stopped (Lifecycle Event)
Oct 07 14:50:25 compute-0 nova_compute[259550]: 2025-10-07 14:50:25.826 2 DEBUG nova.compute.manager [None req-7dac03c7-a698-4ec6-92b3-086b70525b77 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:50:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:26 compute-0 ceph-mon[74295]: pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct 07 14:50:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 07 14:50:28 compute-0 nova_compute[259550]: 2025-10-07 14:50:28.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:28 compute-0 ceph-mon[74295]: pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 07 14:50:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct 07 14:50:29 compute-0 nova_compute[259550]: 2025-10-07 14:50:29.307 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848614.3047776, f8ed40fc-237a-46e5-9557-c128fe833cea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:50:29 compute-0 nova_compute[259550]: 2025-10-07 14:50:29.307 2 INFO nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Stopped (Lifecycle Event)
Oct 07 14:50:29 compute-0 nova_compute[259550]: 2025-10-07 14:50:29.331 2 DEBUG nova.compute.manager [None req-7b7ce7f8-f2f5-4ac4-afbf-769a5a7b9da1 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:50:29 compute-0 nova_compute[259550]: 2025-10-07 14:50:29.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:30 compute-0 podman[413835]: 2025-10-07 14:50:30.073970496 +0000 UTC m=+0.060501579 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:50:30 compute-0 podman[413836]: 2025-10-07 14:50:30.112652324 +0000 UTC m=+0.089556898 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 14:50:30 compute-0 ceph-mon[74295]: pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct 07 14:50:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:32 compute-0 ceph-mon[74295]: pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:50:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:50:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:50:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:50:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:50:33 compute-0 nova_compute[259550]: 2025-10-07 14:50:33.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:50:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:50:34 compute-0 nova_compute[259550]: 2025-10-07 14:50:34.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:34 compute-0 ceph-mon[74295]: pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:50:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1362 writes, 6184 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s
                                           Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.9      0.86              0.21        38    0.023       0      0       0.0       0.0
                                             L6      1/0    7.93 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    143.4    120.1      2.50              0.85        37    0.068    224K    20K       0.0       0.0
                                            Sum      1/0    7.93 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.5    106.7    109.1      3.36              1.06        75    0.045    224K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.0    148.2    148.3      0.35              0.15        10    0.035     38K   2533       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    143.4    120.1      2.50              0.85        37    0.068    224K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     77.2      0.85              0.21        37    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.065, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 3.4 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 40.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000297 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2648,39.10 MB,12.8633%) FilterBlock(76,632.36 KB,0.203138%) IndexBlock(76,1.03 MB,0.338158%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 14:50:35 compute-0 sudo[413881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:35 compute-0 sudo[413881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:35 compute-0 sudo[413881]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:35 compute-0 sudo[413906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:50:35 compute-0 sudo[413906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:35 compute-0 sudo[413906]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:35 compute-0 sudo[413931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:35 compute-0 sudo[413931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:35 compute-0 sudo[413931]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:35 compute-0 sudo[413956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:50:35 compute-0 sudo[413956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:36 compute-0 sudo[413956]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:36 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1575957d-f4ec-46bf-b164-4041e986014a does not exist
Oct 07 14:50:36 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 975a8f92-78af-4835-a1a0-ed515622348f does not exist
Oct 07 14:50:36 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e2040ad4-f5ed-42c3-834d-b42088e5fa76 does not exist
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:50:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:50:36 compute-0 sudo[414012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:36 compute-0 sudo[414012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:36 compute-0 sudo[414012]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:36 compute-0 sudo[414037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:50:36 compute-0 sudo[414037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:36 compute-0 sudo[414037]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:36 compute-0 ceph-mon[74295]: pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:50:36 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:50:36 compute-0 sudo[414062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:36 compute-0 sudo[414062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:36 compute-0 sudo[414062]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:36 compute-0 sudo[414087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:50:36 compute-0 sudo[414087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:36 compute-0 podman[414151]: 2025-10-07 14:50:36.981559304 +0000 UTC m=+0.046948616 container create 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:50:37 compute-0 systemd[1]: Started libpod-conmon-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope.
Oct 07 14:50:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:36.962359377 +0000 UTC m=+0.027748719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:37.074077532 +0000 UTC m=+0.139466864 container init 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:37.081110129 +0000 UTC m=+0.146499451 container start 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:37.084614692 +0000 UTC m=+0.150004024 container attach 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:50:37 compute-0 brave_fermi[414167]: 167 167
Oct 07 14:50:37 compute-0 systemd[1]: libpod-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope: Deactivated successfully.
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:37.088888564 +0000 UTC m=+0.154277886 container died 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbcfeb77df84228bf29b99ed1c3df5904a5395542f2f5889427cd279f9279b85-merged.mount: Deactivated successfully.
Oct 07 14:50:37 compute-0 podman[414151]: 2025-10-07 14:50:37.13044534 +0000 UTC m=+0.195834652 container remove 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:50:37 compute-0 systemd[1]: libpod-conmon-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope: Deactivated successfully.
Oct 07 14:50:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:37.213 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:50:37 compute-0 nova_compute[259550]: 2025-10-07 14:50:37.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:37.216 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:50:37 compute-0 podman[414190]: 2025-10-07 14:50:37.305676904 +0000 UTC m=+0.046654730 container create 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:50:37 compute-0 systemd[1]: Started libpod-conmon-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope.
Oct 07 14:50:37 compute-0 podman[414190]: 2025-10-07 14:50:37.285094285 +0000 UTC m=+0.026072141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:37 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:37 compute-0 podman[414190]: 2025-10-07 14:50:37.40358577 +0000 UTC m=+0.144563616 container init 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:50:37 compute-0 podman[414190]: 2025-10-07 14:50:37.412064282 +0000 UTC m=+0.153042108 container start 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 14:50:37 compute-0 podman[414190]: 2025-10-07 14:50:37.418819241 +0000 UTC m=+0.159797067 container attach 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 07 14:50:38 compute-0 nova_compute[259550]: 2025-10-07 14:50:38.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:38 compute-0 ceph-mon[74295]: pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:38 compute-0 infallible_almeida[414207]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:50:38 compute-0 infallible_almeida[414207]: --> relative data size: 1.0
Oct 07 14:50:38 compute-0 infallible_almeida[414207]: --> All data devices are unavailable
Oct 07 14:50:38 compute-0 systemd[1]: libpod-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Deactivated successfully.
Oct 07 14:50:38 compute-0 systemd[1]: libpod-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Consumed 1.132s CPU time.
Oct 07 14:50:38 compute-0 podman[414190]: 2025-10-07 14:50:38.600479813 +0000 UTC m=+1.341457659 container died 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f-merged.mount: Deactivated successfully.
Oct 07 14:50:38 compute-0 podman[414190]: 2025-10-07 14:50:38.666058542 +0000 UTC m=+1.407036368 container remove 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:50:38 compute-0 systemd[1]: libpod-conmon-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Deactivated successfully.
Oct 07 14:50:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:38 compute-0 sudo[414087]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:38 compute-0 sudo[414248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:38 compute-0 sudo[414248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:38 compute-0 sudo[414248]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:38 compute-0 sudo[414273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:50:38 compute-0 sudo[414273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:38 compute-0 sudo[414273]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:38 compute-0 sudo[414298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:38 compute-0 sudo[414298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:38 compute-0 sudo[414298]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:38 compute-0 sudo[414323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:50:38 compute-0 sudo[414323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.368643843 +0000 UTC m=+0.071496680 container create 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 14:50:39 compute-0 nova_compute[259550]: 2025-10-07 14:50:39.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.322973177 +0000 UTC m=+0.025826004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:39 compute-0 systemd[1]: Started libpod-conmon-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope.
Oct 07 14:50:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.612483476 +0000 UTC m=+0.315336293 container init 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.619720528 +0000 UTC m=+0.322573335 container start 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:50:39 compute-0 systemd[1]: libpod-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope: Deactivated successfully.
Oct 07 14:50:39 compute-0 vibrant_ride[414404]: 167 167
Oct 07 14:50:39 compute-0 conmon[414404]: conmon 72a33e3025f96325173a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope/container/memory.events
Oct 07 14:50:39 compute-0 ceph-mon[74295]: pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.700995228 +0000 UTC m=+0.403848035 container attach 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:50:39 compute-0 podman[414388]: 2025-10-07 14:50:39.701770287 +0000 UTC m=+0.404623124 container died 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-14974cce87bda83f2ca23232cc3f4636139ac1d0edb0fc515cff0e201014618b-merged.mount: Deactivated successfully.
Oct 07 14:50:40 compute-0 podman[414388]: 2025-10-07 14:50:40.073757535 +0000 UTC m=+0.776610342 container remove 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:50:40 compute-0 systemd[1]: libpod-conmon-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope: Deactivated successfully.
Oct 07 14:50:40 compute-0 podman[414427]: 2025-10-07 14:50:40.261761231 +0000 UTC m=+0.059558726 container create 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:50:40 compute-0 systemd[1]: Started libpod-conmon-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope.
Oct 07 14:50:40 compute-0 podman[414427]: 2025-10-07 14:50:40.229100835 +0000 UTC m=+0.026898410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:40 compute-0 podman[414427]: 2025-10-07 14:50:40.348675446 +0000 UTC m=+0.146472961 container init 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:50:40 compute-0 podman[414427]: 2025-10-07 14:50:40.359557674 +0000 UTC m=+0.157355159 container start 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:50:40 compute-0 podman[414427]: 2025-10-07 14:50:40.363081518 +0000 UTC m=+0.160879033 container attach 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 14:50:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:41 compute-0 quizzical_payne[414443]: {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     "0": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "devices": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "/dev/loop3"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             ],
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_name": "ceph_lv0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_size": "21470642176",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "name": "ceph_lv0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "tags": {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_name": "ceph",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.crush_device_class": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.encrypted": "0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_id": "0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.vdo": "0"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             },
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "vg_name": "ceph_vg0"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         }
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     ],
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     "1": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "devices": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "/dev/loop4"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             ],
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_name": "ceph_lv1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_size": "21470642176",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "name": "ceph_lv1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "tags": {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_name": "ceph",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.crush_device_class": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.encrypted": "0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_id": "1",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.vdo": "0"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             },
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "vg_name": "ceph_vg1"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         }
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     ],
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     "2": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "devices": [
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "/dev/loop5"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             ],
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_name": "ceph_lv2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_size": "21470642176",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "name": "ceph_lv2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "tags": {
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.cluster_name": "ceph",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.crush_device_class": "",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.encrypted": "0",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osd_id": "2",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:                 "ceph.vdo": "0"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             },
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "type": "block",
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:             "vg_name": "ceph_vg2"
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:         }
Oct 07 14:50:41 compute-0 quizzical_payne[414443]:     ]
Oct 07 14:50:41 compute-0 quizzical_payne[414443]: }
Oct 07 14:50:41 compute-0 systemd[1]: libpod-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope: Deactivated successfully.
Oct 07 14:50:41 compute-0 podman[414427]: 2025-10-07 14:50:41.187114004 +0000 UTC m=+0.984911569 container died 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:50:41 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:50:41.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:50:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe-merged.mount: Deactivated successfully.
Oct 07 14:50:42 compute-0 ceph-mon[74295]: pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:42 compute-0 podman[414427]: 2025-10-07 14:50:42.394667371 +0000 UTC m=+2.192464866 container remove 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:50:42 compute-0 sudo[414323]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:42 compute-0 systemd[1]: libpod-conmon-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope: Deactivated successfully.
Oct 07 14:50:42 compute-0 sudo[414463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:42 compute-0 sudo[414463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:42 compute-0 sudo[414463]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:42 compute-0 sudo[414488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:50:42 compute-0 sudo[414488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:42 compute-0 sudo[414488]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:42 compute-0 sudo[414513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:42 compute-0 sudo[414513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:42 compute-0 sudo[414513]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:42 compute-0 sudo[414538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:50:42 compute-0 sudo[414538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:42 compute-0 podman[414602]: 2025-10-07 14:50:42.984087154 +0000 UTC m=+0.036722664 container create 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:50:43 compute-0 systemd[1]: Started libpod-conmon-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope.
Oct 07 14:50:43 compute-0 podman[414602]: 2025-10-07 14:50:42.968831231 +0000 UTC m=+0.021466771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:43 compute-0 podman[414602]: 2025-10-07 14:50:43.080238848 +0000 UTC m=+0.132874378 container init 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:50:43 compute-0 podman[414602]: 2025-10-07 14:50:43.089405405 +0000 UTC m=+0.142040915 container start 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:50:43 compute-0 podman[414602]: 2025-10-07 14:50:43.093024162 +0000 UTC m=+0.145659702 container attach 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:50:43 compute-0 recursing_torvalds[414618]: 167 167
Oct 07 14:50:43 compute-0 systemd[1]: libpod-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope: Deactivated successfully.
Oct 07 14:50:43 compute-0 podman[414623]: 2025-10-07 14:50:43.13504889 +0000 UTC m=+0.025172259 container died 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-160997dc104eaf9e71a547fa1d0eff32dff80b957537c57d5c4b7ff08b52e2a0-merged.mount: Deactivated successfully.
Oct 07 14:50:43 compute-0 podman[414623]: 2025-10-07 14:50:43.16745929 +0000 UTC m=+0.057582619 container remove 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:50:43 compute-0 systemd[1]: libpod-conmon-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope: Deactivated successfully.
Oct 07 14:50:43 compute-0 nova_compute[259550]: 2025-10-07 14:50:43.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:43 compute-0 podman[414644]: 2025-10-07 14:50:43.333785791 +0000 UTC m=+0.042809368 container create 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 14:50:43 compute-0 systemd[1]: Started libpod-conmon-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope.
Oct 07 14:50:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:50:43 compute-0 podman[414644]: 2025-10-07 14:50:43.317662368 +0000 UTC m=+0.026685975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:50:43 compute-0 podman[414644]: 2025-10-07 14:50:43.414916309 +0000 UTC m=+0.123939906 container init 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 14:50:43 compute-0 podman[414644]: 2025-10-07 14:50:43.423242127 +0000 UTC m=+0.132265704 container start 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:50:43 compute-0 podman[414644]: 2025-10-07 14:50:43.427445827 +0000 UTC m=+0.136469434 container attach 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:50:44 compute-0 ceph-mon[74295]: pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:44 compute-0 nova_compute[259550]: 2025-10-07 14:50:44.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:44 compute-0 jovial_kepler[414661]: {
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_id": 2,
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "type": "bluestore"
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     },
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_id": 1,
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "type": "bluestore"
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     },
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_id": 0,
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:         "type": "bluestore"
Oct 07 14:50:44 compute-0 jovial_kepler[414661]:     }
Oct 07 14:50:44 compute-0 jovial_kepler[414661]: }
Oct 07 14:50:44 compute-0 systemd[1]: libpod-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Deactivated successfully.
Oct 07 14:50:44 compute-0 systemd[1]: libpod-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Consumed 1.085s CPU time.
Oct 07 14:50:44 compute-0 podman[414694]: 2025-10-07 14:50:44.54690128 +0000 UTC m=+0.029327647 container died 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0-merged.mount: Deactivated successfully.
Oct 07 14:50:44 compute-0 podman[414694]: 2025-10-07 14:50:44.608138225 +0000 UTC m=+0.090564512 container remove 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:50:44 compute-0 systemd[1]: libpod-conmon-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Deactivated successfully.
Oct 07 14:50:44 compute-0 sudo[414538]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:50:44 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:50:44 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:44 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 903fd8e5-7cb8-4c48-8bf8-f30e21a4b7ec does not exist
Oct 07 14:50:44 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 508c81a4-a460-4720-ad59-e857c21d5f56 does not exist
Oct 07 14:50:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:44 compute-0 sudo[414708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:50:44 compute-0 sudo[414708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:44 compute-0 sudo[414708]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:44 compute-0 sudo[414733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:50:44 compute-0 sudo[414733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:50:44 compute-0 sudo[414733]: pam_unix(sudo:session): session closed for user root
Oct 07 14:50:45 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:45 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:50:45 compute-0 ceph-mon[74295]: pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:47 compute-0 ceph-mon[74295]: pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:48 compute-0 podman[414758]: 2025-10-07 14:50:48.081665994 +0000 UTC m=+0.066189003 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 07 14:50:48 compute-0 podman[414759]: 2025-10-07 14:50:48.08320034 +0000 UTC m=+0.066725006 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:50:48 compute-0 nova_compute[259550]: 2025-10-07 14:50:48.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:49 compute-0 nova_compute[259550]: 2025-10-07 14:50:49.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:49 compute-0 ceph-mon[74295]: pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:51 compute-0 ceph-mon[74295]: pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:50:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.827 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.827 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.847 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.924 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.925 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.935 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:50:52 compute-0 nova_compute[259550]: 2025-10-07 14:50:52.935 2 INFO nova.compute.claims [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.033 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:50:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881193533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.516 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.523 2 DEBUG nova.compute.provider_tree [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.548 2 DEBUG nova.scheduler.client.report [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.576 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.578 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.651 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.652 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.678 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.726 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:50:53 compute-0 ceph-mon[74295]: pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/881193533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.819 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.821 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.821 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating image(s)
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.843 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.867 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.888 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.893 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.934 2 DEBUG nova.policy [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.972 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.974 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.975 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.975 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:53 compute-0 nova_compute[259550]: 2025-10-07 14:50:53.999 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.003 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.367 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.433 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.524 2 DEBUG nova.objects.instance [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.578 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.579 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Ensure instance console log exists: /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.579 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.580 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:50:54 compute-0 nova_compute[259550]: 2025-10-07 14:50:54.580 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:50:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:55 compute-0 ceph-mon[74295]: pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:50:56 compute-0 nova_compute[259550]: 2025-10-07 14:50:56.133 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Successfully created port: 144643c5-91ec-4bd7-a646-3d64339b6691 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:50:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:50:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 57 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 572 KiB/s wr, 24 op/s
Oct 07 14:50:57 compute-0 ceph-mon[74295]: pgmap v2661: 305 pgs: 305 active+clean; 57 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 572 KiB/s wr, 24 op/s
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.268 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Successfully updated port: 144643c5-91ec-4bd7-a646-3d64339b6691 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG nova.compute.manager [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG nova.compute.manager [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:50:58 compute-0 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:50:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.015 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.660 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.686 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.686 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance network_info: |[{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.687 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.687 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.691 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start _get_guest_xml network_info=[{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.695 2 WARNING nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.702 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.702 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.706 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.706 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:50:59 compute-0 nova_compute[259550]: 2025-10-07 14:50:59.712 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:50:59 compute-0 ceph-mon[74295]: pgmap v2662: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.085 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:51:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726827845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.183 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.213 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.220 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:51:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938573712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.682 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.684 2 DEBUG nova.virt.libvirt.vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:50:53Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.685 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.685 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.686 2 DEBUG nova.objects.instance [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.704 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <uuid>9b571b54-001b-4da1-8b91-2659b1fbaac6</uuid>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <name>instance-0000008e</name>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859</nova:name>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:50:59</nova:creationTime>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <nova:port uuid="144643c5-91ec-4bd7-a646-3d64339b6691">
Oct 07 14:51:00 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <system>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="serial">9b571b54-001b-4da1-8b91-2659b1fbaac6</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="uuid">9b571b54-001b-4da1-8b91-2659b1fbaac6</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </system>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <os>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </os>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <features>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </features>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9b571b54-001b-4da1-8b91-2659b1fbaac6_disk">
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </source>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config">
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </source>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:51:00 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:24:52:85"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <target dev="tap144643c5-91"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/console.log" append="off"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <video>
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </video>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:51:00 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:51:00 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:51:00 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:51:00 compute-0 nova_compute[259550]: </domain>
Oct 07 14:51:00 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:51:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.704 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Preparing to wait for external event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.705 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.705 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.706 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.706 2 DEBUG nova.virt.libvirt.vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:50:53Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.707 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.707 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.708 2 DEBUG os_vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap144643c5-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap144643c5-91, col_values=(('external_ids', {'iface-id': '144643c5-91ec-4bd7-a646-3d64339b6691', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:52:85', 'vm-uuid': '9b571b54-001b-4da1-8b91-2659b1fbaac6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:00 compute-0 NetworkManager[44949]: <info>  [1759848660.7154] manager: (tap144643c5-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.722 2 INFO os_vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91')
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.791 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.792 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.792 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:24:52:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.793 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Using config drive
Oct 07 14:51:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2726827845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3938573712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.827 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:00 compute-0 podman[415049]: 2025-10-07 14:51:00.839327919 +0000 UTC m=+0.078717552 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 14:51:00 compute-0 podman[415051]: 2025-10-07 14:51:00.854184121 +0000 UTC m=+0.092618201 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.863 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.864 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.945 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:51:00 compute-0 nova_compute[259550]: 2025-10-07 14:51:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.129 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating config drive at /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.135 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnip4at91 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.285 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnip4at91" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.311 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.315 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.487 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.488 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deleting local config drive /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config because it was imported into RBD.
Oct 07 14:51:01 compute-0 kernel: tap144643c5-91: entered promiscuous mode
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.5525] manager: (tap144643c5-91): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Oct 07 14:51:01 compute-0 ovn_controller[151684]: 2025-10-07T14:51:01Z|01558|binding|INFO|Claiming lport 144643c5-91ec-4bd7-a646-3d64339b6691 for this chassis.
Oct 07 14:51:01 compute-0 ovn_controller[151684]: 2025-10-07T14:51:01Z|01559|binding|INFO|144643c5-91ec-4bd7-a646-3d64339b6691: Claiming fa:16:3e:24:52:85 10.100.0.7
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.568 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:52:85 10.100.0.7'], port_security=['fa:16:3e:24:52:85 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b571b54-001b-4da1-8b91-2659b1fbaac6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d6dba35-f647-4d4e-a538-87de70ead701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5665608a-5872-470d-ac7c-b6ba91a59eac e6cb9e17-5a87-48e7-beaf-ada7a8c060a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=459b9ec0-d62a-47af-8167-94e59b8e15bc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=144643c5-91ec-4bd7-a646-3d64339b6691) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.570 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 144643c5-91ec-4bd7-a646-3d64339b6691 in datapath 7d6dba35-f647-4d4e-a538-87de70ead701 bound to our chassis
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7d6dba35-f647-4d4e-a538-87de70ead701
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5768da1-4d40-41e7-874f-c198261c0aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.594 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7d6dba35-f1 in ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.598 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7d6dba35-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[074ee520-53ae-48cf-b170-ed88cf8b5445]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 systemd-machined[214580]: New machine qemu-176-instance-0000008e.
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1683de15-df04-4cd2-9d84-2c3c34b207c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 systemd-udevd[415163]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.615 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ad766a94-922d-437a-8ba7-bc22beaffd2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.6179] device (tap144643c5-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.6193] device (tap144643c5-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Oct 07 14:51:01 compute-0 ovn_controller[151684]: 2025-10-07T14:51:01Z|01560|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 ovn-installed in OVS
Oct 07 14:51:01 compute-0 ovn_controller[151684]: 2025-10-07T14:51:01Z|01561|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 up in Southbound
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53ee17e1-7d48-457b-9f8d-0510984ec249]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8851b64-c3f6-41fa-b462-6da69acfdb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.6713] manager: (tap7d6dba35-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/625)
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5aca7b83-1162-4e68-b3dc-14d53a5c15e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.706 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d20e6b19-6a9e-4f27-b505-4e59dca7c05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.710 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a8ea29-f930-4d9e-a3b9-0d92480443c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.7397] device (tap7d6dba35-f0): carrier: link connected
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.746 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[683f301d-ad14-4f05-8d9a-bb56e8c14827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7853c0d-1976-4cfc-8f59-abc1bdafc232]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d6dba35-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:31:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924530, 'reachable_time': 28816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415195, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1398a94-f0fa-4555-8142-5121e8901ce9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:31a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 924530, 'tstamp': 924530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415196, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd32287-f1ba-4f0d-a710-1b2a9fa13810]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d6dba35-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:31:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924530, 'reachable_time': 28816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415197, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ceph-mon[74295]: pgmap v2663: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.821 2 DEBUG nova.compute.manager [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG nova.compute.manager [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Processing event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c0ab09-2aa3-42c0-852a-f6c9acac90d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3e676c-3938-41d0-b0f7-918eac41bd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.916 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d6dba35-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d6dba35-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 NetworkManager[44949]: <info>  [1759848661.9193] manager: (tap7d6dba35-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 07 14:51:01 compute-0 kernel: tap7d6dba35-f0: entered promiscuous mode
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.923 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7d6dba35-f0, col_values=(('external_ids', {'iface-id': '271db248-560d-4b3a-bd65-947c8ea27522'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:01 compute-0 ovn_controller[151684]: 2025-10-07T14:51:01Z|01562|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.926 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.930 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e175ed51-1056-4df3-87fd-a8631d36ea0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.931 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-7d6dba35-f647-4d4e-a538-87de70ead701
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 7d6dba35-f647-4d4e-a538-87de70ead701
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:51:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.932 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'env', 'PROCESS_TAG=haproxy-7d6dba35-f647-4d4e-a538-87de70ead701', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7d6dba35-f647-4d4e-a538-87de70ead701.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:51:01 compute-0 nova_compute[259550]: 2025-10-07 14:51:01.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:02 compute-0 podman[415271]: 2025-10-07 14:51:02.331420375 +0000 UTC m=+0.049422535 container create 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 14:51:02 compute-0 systemd[1]: Started libpod-conmon-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope.
Oct 07 14:51:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c990b5ffee34bb0ee8d8fa81835ad0deb9249f2ad983045d67c79c75ff5af772/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:02 compute-0 podman[415271]: 2025-10-07 14:51:02.302674323 +0000 UTC m=+0.020676503 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:51:02 compute-0 podman[415271]: 2025-10-07 14:51:02.400494727 +0000 UTC m=+0.118496917 container init 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:51:02 compute-0 podman[415271]: 2025-10-07 14:51:02.40608419 +0000 UTC m=+0.124086350 container start 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 14:51:02 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : New worker (415292) forked
Oct 07 14:51:02 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : Loading success.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.462 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.463 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.462222, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.464 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Started (Lifecycle Event)
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.466 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.470 2 INFO nova.virt.libvirt.driver [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance spawned successfully.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.470 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.494 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.501 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.501 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.503 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.508 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.544 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.4624236, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.544 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Paused (Lifecycle Event)
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.584 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.588 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.4658854, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.588 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Resumed (Lifecycle Event)
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.601 2 INFO nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 8.78 seconds to spawn the instance on the hypervisor.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.602 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.614 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.618 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.656 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.683 2 INFO nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 9.79 seconds to build instance.
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.704 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:02 compute-0 nova_compute[259550]: 2025-10-07 14:51:02.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:03 compute-0 ceph-mon[74295]: pgmap v2664: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.932 2 DEBUG nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.939 2 DEBUG nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:51:03 compute-0 nova_compute[259550]: 2025-10-07 14:51:03.939 2 WARNING nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received unexpected event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with vm_state active and task_state None.
Oct 07 14:51:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 07 14:51:04 compute-0 nova_compute[259550]: 2025-10-07 14:51:04.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:05 compute-0 nova_compute[259550]: 2025-10-07 14:51:05.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:05 compute-0 ceph-mon[74295]: pgmap v2665: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 07 14:51:05 compute-0 nova_compute[259550]: 2025-10-07 14:51:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:05 compute-0 nova_compute[259550]: 2025-10-07 14:51:05.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:51:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 07 14:51:06 compute-0 ovn_controller[151684]: 2025-10-07T14:51:06Z|01563|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct 07 14:51:06 compute-0 nova_compute[259550]: 2025-10-07 14:51:06.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:06 compute-0 NetworkManager[44949]: <info>  [1759848666.8179] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct 07 14:51:06 compute-0 NetworkManager[44949]: <info>  [1759848666.8186] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct 07 14:51:06 compute-0 ovn_controller[151684]: 2025-10-07T14:51:06Z|01564|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct 07 14:51:06 compute-0 nova_compute[259550]: 2025-10-07 14:51:06.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:06 compute-0 nova_compute[259550]: 2025-10-07 14:51:06.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:06 compute-0 nova_compute[259550]: 2025-10-07 14:51:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:06 compute-0 nova_compute[259550]: 2025-10-07 14:51:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:07 compute-0 ceph-mon[74295]: pgmap v2666: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.571 2 DEBUG nova.compute.manager [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG nova.compute.manager [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:51:08 compute-0 nova_compute[259550]: 2025-10-07 14:51:08.573 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:51:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct 07 14:51:09 compute-0 ceph-mon[74295]: pgmap v2667: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct 07 14:51:09 compute-0 nova_compute[259550]: 2025-10-07 14:51:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:51:10 compute-0 nova_compute[259550]: 2025-10-07 14:51:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:11 compute-0 ceph-mon[74295]: pgmap v2668: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:51:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:51:12 compute-0 nova_compute[259550]: 2025-10-07 14:51:12.994 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:51:12 compute-0 nova_compute[259550]: 2025-10-07 14:51:12.995 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:51:13 compute-0 nova_compute[259550]: 2025-10-07 14:51:13.149 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:51:13 compute-0 nova_compute[259550]: 2025-10-07 14:51:13.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:13 compute-0 ceph-mon[74295]: pgmap v2669: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:51:13 compute-0 nova_compute[259550]: 2025-10-07 14:51:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.053 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:14 compute-0 ovn_controller[151684]: 2025-10-07T14:51:14Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:52:85 10.100.0.7
Oct 07 14:51:14 compute-0 ovn_controller[151684]: 2025-10-07T14:51:14Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:52:85 10.100.0.7
Oct 07 14:51:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:51:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842761179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.551 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 91 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 455 KiB/s wr, 80 op/s
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:51:14 compute-0 nova_compute[259550]: 2025-10-07 14:51:14.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:51:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1842761179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.012 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.159 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b571b54-001b-4da1-8b91-2659b1fbaac6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.160 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.160 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.197 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:51:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763728664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.676 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.682 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:15 compute-0 nova_compute[259550]: 2025-10-07 14:51:15.768 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:51:15 compute-0 ceph-mon[74295]: pgmap v2670: 305 pgs: 305 active+clean; 91 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 455 KiB/s wr, 80 op/s
Oct 07 14:51:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1763728664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:16 compute-0 nova_compute[259550]: 2025-10-07 14:51:16.057 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:51:16 compute-0 nova_compute[259550]: 2025-10-07 14:51:16.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 95 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 970 KiB/s rd, 746 KiB/s wr, 64 op/s
Oct 07 14:51:17 compute-0 ceph-mon[74295]: pgmap v2671: 305 pgs: 305 active+clean; 95 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 970 KiB/s rd, 746 KiB/s wr, 64 op/s
Oct 07 14:51:18 compute-0 nova_compute[259550]: 2025-10-07 14:51:18.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:51:19 compute-0 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:19 compute-0 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:51:19 compute-0 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:51:19 compute-0 podman[415350]: 2025-10-07 14:51:19.076709723 +0000 UTC m=+0.058733136 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:51:19 compute-0 podman[415349]: 2025-10-07 14:51:19.082055891 +0000 UTC m=+0.066702086 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 07 14:51:19 compute-0 ceph-mon[74295]: pgmap v2672: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:51:20 compute-0 nova_compute[259550]: 2025-10-07 14:51:20.062 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:51:20 compute-0 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:51:20 compute-0 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:51:20 compute-0 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:51:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:51:20 compute-0 nova_compute[259550]: 2025-10-07 14:51:20.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:22 compute-0 ceph-mon[74295]: pgmap v2673: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:51:22 compute-0 nova_compute[259550]: 2025-10-07 14:51:22.668 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:51:22
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Oct 07 14:51:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:51:22 compute-0 nova_compute[259550]: 2025-10-07 14:51:22.817 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:51:22 compute-0 nova_compute[259550]: 2025-10-07 14:51:22.818 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:51:22 compute-0 nova_compute[259550]: 2025-10-07 14:51:22.818 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:51:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:51:23 compute-0 nova_compute[259550]: 2025-10-07 14:51:23.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:24 compute-0 ceph-mon[74295]: pgmap v2674: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:51:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:51:25 compute-0 nova_compute[259550]: 2025-10-07 14:51:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:25 compute-0 nova_compute[259550]: 2025-10-07 14:51:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:26 compute-0 ceph-mon[74295]: pgmap v2675: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:51:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.7 MiB/s wr, 56 op/s
Oct 07 14:51:28 compute-0 ceph-mon[74295]: pgmap v2676: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.7 MiB/s wr, 56 op/s
Oct 07 14:51:28 compute-0 nova_compute[259550]: 2025-10-07 14:51:28.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Oct 07 14:51:30 compute-0 ceph-mon[74295]: pgmap v2677: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Oct 07 14:51:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:30 compute-0 nova_compute[259550]: 2025-10-07 14:51:30.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:31 compute-0 podman[415387]: 2025-10-07 14:51:31.071320213 +0000 UTC m=+0.057846026 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 14:51:31 compute-0 podman[415388]: 2025-10-07 14:51:31.094883223 +0000 UTC m=+0.078684371 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:51:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:32 compute-0 ceph-mon[74295]: pgmap v2678: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:51:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:51:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:51:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075901868854594 of space, bias 1.0, pg target 0.227705606563782 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:51:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:51:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:51:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:51:33 compute-0 nova_compute[259550]: 2025-10-07 14:51:33.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:34 compute-0 ceph-mon[74295]: pgmap v2679: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:35 compute-0 nova_compute[259550]: 2025-10-07 14:51:35.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:36 compute-0 ceph-mon[74295]: pgmap v2680: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct 07 14:51:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 07 14:51:37 compute-0 nova_compute[259550]: 2025-10-07 14:51:37.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:38 compute-0 ceph-mon[74295]: pgmap v2681: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 07 14:51:38 compute-0 nova_compute[259550]: 2025-10-07 14:51:38.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:40 compute-0 ceph-mon[74295]: pgmap v2682: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:40 compute-0 nova_compute[259550]: 2025-10-07 14:51:40.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:40.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:51:40 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:40.273 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:51:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:40 compute-0 nova_compute[259550]: 2025-10-07 14:51:40.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:42 compute-0 ceph-mon[74295]: pgmap v2683: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:43 compute-0 nova_compute[259550]: 2025-10-07 14:51:43.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:44 compute-0 ceph-mon[74295]: pgmap v2684: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:44 compute-0 sudo[415433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:44 compute-0 sudo[415433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:44 compute-0 sudo[415433]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:44 compute-0 sudo[415458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:51:44 compute-0 sudo[415458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:44 compute-0 sudo[415458]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 sudo[415483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:45 compute-0 sudo[415483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:45 compute-0 sudo[415483]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 sudo[415508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:51:45 compute-0 sudo[415508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:45 compute-0 sudo[415508]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 23b38127-8c77-4021-8ac6-4c1eba8d7e52 does not exist
Oct 07 14:51:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 56564b26-f2b0-402b-ab5c-7011145e9b57 does not exist
Oct 07 14:51:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ca1acce2-3704-462a-a24e-ae3103fed068 does not exist
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:51:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:51:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.799 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.799 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:45 compute-0 sudo[415562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:45 compute-0 sudo[415562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:45 compute-0 sudo[415562]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.841 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:51:45 compute-0 sudo[415587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:51:45 compute-0 sudo[415587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:45 compute-0 sudo[415587]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 sudo[415612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:45 compute-0 sudo[415612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:45 compute-0 sudo[415612]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.966 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.967 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.978 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:51:45 compute-0 nova_compute[259550]: 2025-10-07 14:51:45.979 2 INFO nova.compute.claims [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:51:46 compute-0 sudo[415637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:51:46 compute-0 sudo[415637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:46 compute-0 nova_compute[259550]: 2025-10-07 14:51:46.262 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:46 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:46.275 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:46 compute-0 ceph-mon[74295]: pgmap v2685: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:51:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.428210817 +0000 UTC m=+0.101697228 container create b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.352970629 +0000 UTC m=+0.026457070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:46 compute-0 systemd[1]: Started libpod-conmon-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope.
Oct 07 14:51:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.696187463 +0000 UTC m=+0.369673904 container init b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.705600806 +0000 UTC m=+0.379087207 container start b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:51:46 compute-0 festive_williamson[415735]: 167 167
Oct 07 14:51:46 compute-0 systemd[1]: libpod-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope: Deactivated successfully.
Oct 07 14:51:46 compute-0 conmon[415735]: conmon b37509a12264de23ee8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope/container/memory.events
Oct 07 14:51:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:51:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132995398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.777020313 +0000 UTC m=+0.450506734 container attach b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.777450003 +0000 UTC m=+0.450936444 container died b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:51:46 compute-0 nova_compute[259550]: 2025-10-07 14:51:46.780 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:46 compute-0 nova_compute[259550]: 2025-10-07 14:51:46.790 2 DEBUG nova.compute.provider_tree [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b93e86c5e0702438d9fb1d06871ce67d37e05cfd56ea222b83c5201d68fe325e-merged.mount: Deactivated successfully.
Oct 07 14:51:46 compute-0 nova_compute[259550]: 2025-10-07 14:51:46.843 2 DEBUG nova.scheduler.client.report [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:51:46 compute-0 podman[415700]: 2025-10-07 14:51:46.866161211 +0000 UTC m=+0.539647632 container remove b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:51:46 compute-0 systemd[1]: libpod-conmon-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope: Deactivated successfully.
Oct 07 14:51:47 compute-0 podman[415761]: 2025-10-07 14:51:47.039278003 +0000 UTC m=+0.024703497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:47 compute-0 podman[415761]: 2025-10-07 14:51:47.137327412 +0000 UTC m=+0.122752886 container create 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.149 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.150 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:51:47 compute-0 systemd[1]: Started libpod-conmon-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope.
Oct 07 14:51:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:47 compute-0 podman[415761]: 2025-10-07 14:51:47.269413421 +0000 UTC m=+0.254838885 container init 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:51:47 compute-0 podman[415761]: 2025-10-07 14:51:47.280528575 +0000 UTC m=+0.265954049 container start 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 14:51:47 compute-0 podman[415761]: 2025-10-07 14:51:47.302981338 +0000 UTC m=+0.288406802 container attach 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.373 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.374 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:51:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/132995398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.523 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.722 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.931 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.933 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.934 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating image(s)
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.954 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.975 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:47 compute-0 nova_compute[259550]: 2025-10-07 14:51:47.998 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.002 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.090 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.091 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.092 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.092 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.113 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.119 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 89da30f4-cbe5-4f70-8597-b215d57427e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.164 2 DEBUG nova.policy [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f756e2b18f7246a48f99aaa3bb77a5c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '969793770e3f43a48c49fbd115a24ce2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:48 compute-0 magical_euler[415778]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:51:48 compute-0 magical_euler[415778]: --> relative data size: 1.0
Oct 07 14:51:48 compute-0 magical_euler[415778]: --> All data devices are unavailable
Oct 07 14:51:48 compute-0 systemd[1]: libpod-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Deactivated successfully.
Oct 07 14:51:48 compute-0 systemd[1]: libpod-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Consumed 1.059s CPU time.
Oct 07 14:51:48 compute-0 podman[415761]: 2025-10-07 14:51:48.417444844 +0000 UTC m=+1.402870338 container died 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:51:48 compute-0 ceph-mon[74295]: pgmap v2686: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4-merged.mount: Deactivated successfully.
Oct 07 14:51:48 compute-0 podman[415761]: 2025-10-07 14:51:48.604411255 +0000 UTC m=+1.589836729 container remove 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:51:48 compute-0 systemd[1]: libpod-conmon-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Deactivated successfully.
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.625 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 89da30f4-cbe5-4f70-8597-b215d57427e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:48 compute-0 sudo[415637]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:48 compute-0 sudo[415925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.702 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] resizing rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:51:48 compute-0 sudo[415925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:48 compute-0 sudo[415925]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 143 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 560 KiB/s wr, 14 op/s
Oct 07 14:51:48 compute-0 sudo[415988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:51:48 compute-0 sudo[415988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:48 compute-0 sudo[415988]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:48 compute-0 sudo[416016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:48 compute-0 sudo[416016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:48 compute-0 sudo[416016]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:48 compute-0 nova_compute[259550]: 2025-10-07 14:51:48.898 2 DEBUG nova.objects.instance [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'migration_context' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:51:48 compute-0 sudo[416041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:51:48 compute-0 sudo[416041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:49 compute-0 nova_compute[259550]: 2025-10-07 14:51:49.164 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:51:49 compute-0 nova_compute[259550]: 2025-10-07 14:51:49.164 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Ensure instance console log exists: /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:51:49 compute-0 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:49 compute-0 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:49 compute-0 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.245857384 +0000 UTC m=+0.037599115 container create 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:51:49 compute-0 systemd[1]: Started libpod-conmon-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope.
Oct 07 14:51:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.318733765 +0000 UTC m=+0.110475526 container init 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.229544597 +0000 UTC m=+0.021286348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.327482013 +0000 UTC m=+0.119223754 container start 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.332277587 +0000 UTC m=+0.124019318 container attach 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:51:49 compute-0 romantic_gagarin[416141]: 167 167
Oct 07 14:51:49 compute-0 systemd[1]: libpod-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope: Deactivated successfully.
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.336969998 +0000 UTC m=+0.128711739 container died 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:51:49 compute-0 podman[416140]: 2025-10-07 14:51:49.348098232 +0000 UTC m=+0.065390373 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:51:49 compute-0 podman[416137]: 2025-10-07 14:51:49.350801487 +0000 UTC m=+0.068371636 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a380edfa45d7f4dd9ce1a931a7a0cec048f4db85680172a96cba3284d96f4ff5-merged.mount: Deactivated successfully.
Oct 07 14:51:49 compute-0 podman[416123]: 2025-10-07 14:51:49.45153721 +0000 UTC m=+0.243278941 container remove 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:51:49 compute-0 systemd[1]: libpod-conmon-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope: Deactivated successfully.
Oct 07 14:51:49 compute-0 podman[416201]: 2025-10-07 14:51:49.624335556 +0000 UTC m=+0.049556419 container create b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:51:49 compute-0 systemd[1]: Started libpod-conmon-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope.
Oct 07 14:51:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:49 compute-0 podman[416201]: 2025-10-07 14:51:49.604814182 +0000 UTC m=+0.030035075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:49 compute-0 podman[416201]: 2025-10-07 14:51:49.709953309 +0000 UTC m=+0.135174202 container init b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:51:49 compute-0 podman[416201]: 2025-10-07 14:51:49.718962903 +0000 UTC m=+0.144183766 container start b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 14:51:49 compute-0 podman[416201]: 2025-10-07 14:51:49.72304665 +0000 UTC m=+0.148267513 container attach b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 07 14:51:50 compute-0 nova_compute[259550]: 2025-10-07 14:51:50.151 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Successfully created port: 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]: {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     "0": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "devices": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "/dev/loop3"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             ],
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_name": "ceph_lv0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_size": "21470642176",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "name": "ceph_lv0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "tags": {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_name": "ceph",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.crush_device_class": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.encrypted": "0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_id": "0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.vdo": "0"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             },
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "vg_name": "ceph_vg0"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         }
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     ],
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     "1": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "devices": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "/dev/loop4"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             ],
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_name": "ceph_lv1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_size": "21470642176",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "name": "ceph_lv1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "tags": {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_name": "ceph",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.crush_device_class": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.encrypted": "0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_id": "1",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.vdo": "0"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             },
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "vg_name": "ceph_vg1"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         }
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     ],
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     "2": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "devices": [
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "/dev/loop5"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             ],
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_name": "ceph_lv2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_size": "21470642176",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "name": "ceph_lv2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "tags": {
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.cluster_name": "ceph",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.crush_device_class": "",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.encrypted": "0",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osd_id": "2",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:                 "ceph.vdo": "0"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             },
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "type": "block",
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:             "vg_name": "ceph_vg2"
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:         }
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]:     ]
Oct 07 14:51:50 compute-0 sharp_torvalds[416218]: }
Oct 07 14:51:50 compute-0 podman[416201]: 2025-10-07 14:51:50.50933063 +0000 UTC m=+0.934551503 container died b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 14:51:50 compute-0 systemd[1]: libpod-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope: Deactivated successfully.
Oct 07 14:51:50 compute-0 ceph-mon[74295]: pgmap v2687: 305 pgs: 305 active+clean; 143 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 560 KiB/s wr, 14 op/s
Oct 07 14:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499-merged.mount: Deactivated successfully.
Oct 07 14:51:50 compute-0 podman[416201]: 2025-10-07 14:51:50.574738053 +0000 UTC m=+0.999958916 container remove b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 14:51:50 compute-0 systemd[1]: libpod-conmon-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope: Deactivated successfully.
Oct 07 14:51:50 compute-0 sudo[416041]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:50 compute-0 sudo[416239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:50 compute-0 sudo[416239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:50 compute-0 sudo[416239]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct 07 14:51:50 compute-0 sudo[416264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:51:50 compute-0 sudo[416264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:50 compute-0 sudo[416264]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:50 compute-0 nova_compute[259550]: 2025-10-07 14:51:50.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:50 compute-0 sudo[416289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:50 compute-0 sudo[416289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:50 compute-0 sudo[416289]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:50 compute-0 sudo[416314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:51:50 compute-0 sudo[416314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.189498137 +0000 UTC m=+0.040163054 container create d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:51:51 compute-0 systemd[1]: Started libpod-conmon-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope.
Oct 07 14:51:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.173999699 +0000 UTC m=+0.024664636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.271848434 +0000 UTC m=+0.122513381 container init d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.279153378 +0000 UTC m=+0.129818295 container start d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.283118911 +0000 UTC m=+0.133783848 container attach d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:51:51 compute-0 dazzling_thompson[416395]: 167 167
Oct 07 14:51:51 compute-0 systemd[1]: libpod-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope: Deactivated successfully.
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.286594645 +0000 UTC m=+0.137259592 container died d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b7adf8e52656e9c1301aaa4ed37cb2cd20574e1758dde5b37566df63f487a17-merged.mount: Deactivated successfully.
Oct 07 14:51:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:51 compute-0 podman[416379]: 2025-10-07 14:51:51.326760249 +0000 UTC m=+0.177425166 container remove d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:51:51 compute-0 systemd[1]: libpod-conmon-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope: Deactivated successfully.
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.384 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Successfully updated port: 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.430 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.431 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.431 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:51:51 compute-0 podman[416418]: 2025-10-07 14:51:51.501175062 +0000 UTC m=+0.040741389 container create b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:51:51 compute-0 systemd[1]: Started libpod-conmon-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope.
Oct 07 14:51:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:51 compute-0 podman[416418]: 2025-10-07 14:51:51.484368143 +0000 UTC m=+0.023934480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:51:51 compute-0 podman[416418]: 2025-10-07 14:51:51.589142872 +0000 UTC m=+0.128709229 container init b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 07 14:51:51 compute-0 ceph-mon[74295]: pgmap v2688: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct 07 14:51:51 compute-0 podman[416418]: 2025-10-07 14:51:51.59747898 +0000 UTC m=+0.137045317 container start b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:51:51 compute-0 podman[416418]: 2025-10-07 14:51:51.603702067 +0000 UTC m=+0.143268404 container attach b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.633 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG nova.compute.manager [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG nova.compute.manager [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:51:51 compute-0 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:51:52 compute-0 determined_satoshi[416434]: {
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_id": 2,
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "type": "bluestore"
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     },
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_id": 1,
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "type": "bluestore"
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     },
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_id": 0,
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:         "type": "bluestore"
Oct 07 14:51:52 compute-0 determined_satoshi[416434]:     }
Oct 07 14:51:52 compute-0 determined_satoshi[416434]: }
Oct 07 14:51:52 compute-0 systemd[1]: libpod-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope: Deactivated successfully.
Oct 07 14:51:52 compute-0 podman[416418]: 2025-10-07 14:51:52.560718983 +0000 UTC m=+1.100285320 container died b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 14:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72-merged.mount: Deactivated successfully.
Oct 07 14:51:52 compute-0 podman[416418]: 2025-10-07 14:51:52.619409808 +0000 UTC m=+1.158976145 container remove b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:51:52 compute-0 systemd[1]: libpod-conmon-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope: Deactivated successfully.
Oct 07 14:51:52 compute-0 sudo[416314]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:51:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:51:52 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e36d1196-efed-4b03-8ce2-99ba3b4e2113 does not exist
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0b4aba54-208b-4412-a8f8-b6a889c554e5 does not exist
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:51:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct 07 14:51:52 compute-0 sudo[416479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:51:52 compute-0 sudo[416479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:52 compute-0 sudo[416479]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:52 compute-0 sudo[416504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:51:52 compute-0 sudo[416504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:51:52 compute-0 sudo[416504]: pam_unix(sudo:session): session closed for user root
Oct 07 14:51:53 compute-0 nova_compute[259550]: 2025-10-07 14:51:53.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:53 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:51:53 compute-0 ceph-mon[74295]: pgmap v2689: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct 07 14:51:53 compute-0 nova_compute[259550]: 2025-10-07 14:51:53.964 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.034 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance network_info: |[{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.039 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start _get_guest_xml network_info=[{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.043 2 WARNING nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.050 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.051 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.055 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.061 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:51:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298750681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.535 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.560 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:54 compute-0 nova_compute[259550]: 2025-10-07 14:51:54.564 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2298750681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:51:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626967430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.008 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.010 2 DEBUG nova.virt.libvirt.vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:51:47Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.011 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.011 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.013 2 DEBUG nova.objects.instance [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.103 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <uuid>89da30f4-cbe5-4f70-8597-b215d57427e0</uuid>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <name>instance-0000008f</name>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505</nova:name>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:51:54</nova:creationTime>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:user uuid="f756e2b18f7246a48f99aaa3bb77a5c3">tempest-TestSecurityGroupsBasicOps-987945970-project-member</nova:user>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:project uuid="969793770e3f43a48c49fbd115a24ce2">tempest-TestSecurityGroupsBasicOps-987945970</nova:project>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <nova:port uuid="5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a">
Oct 07 14:51:55 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <system>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="serial">89da30f4-cbe5-4f70-8597-b215d57427e0</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="uuid">89da30f4-cbe5-4f70-8597-b215d57427e0</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </system>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <os>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </os>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <features>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </features>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/89da30f4-cbe5-4f70-8597-b215d57427e0_disk">
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config">
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </source>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:51:55 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:a4:e9:5b"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <target dev="tap5ad3e5c8-f8"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/console.log" append="off"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <video>
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </video>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:51:55 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:51:55 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:51:55 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:51:55 compute-0 nova_compute[259550]: </domain>
Oct 07 14:51:55 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.104 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Preparing to wait for external event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.104 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG nova.virt.libvirt.vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:51:47Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.106 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.106 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.107 2 DEBUG os_vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ad3e5c8-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ad3e5c8-f8, col_values=(('external_ids', {'iface-id': '5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:e9:5b', 'vm-uuid': '89da30f4-cbe5-4f70-8597-b215d57427e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:55 compute-0 NetworkManager[44949]: <info>  [1759848715.1171] manager: (tap5ad3e5c8-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.126 2 INFO os_vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8')
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No VIF found with MAC fa:16:3e:a4:e9:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.218 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Using config drive
Oct 07 14:51:55 compute-0 nova_compute[259550]: 2025-10-07 14:51:55.240 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:55 compute-0 ceph-mon[74295]: pgmap v2690: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1626967430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:51:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:51:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.790652) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716790734, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2063, "num_deletes": 251, "total_data_size": 3430374, "memory_usage": 3490128, "flush_reason": "Manual Compaction"}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716810880, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3363784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54420, "largest_seqno": 56482, "table_properties": {"data_size": 3354361, "index_size": 5980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19058, "raw_average_key_size": 20, "raw_value_size": 3335654, "raw_average_value_size": 3533, "num_data_blocks": 265, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848488, "oldest_key_time": 1759848488, "file_creation_time": 1759848716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 20613 microseconds, and 7300 cpu microseconds.
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.811273) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3363784 bytes OK
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.811303) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813956) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813971) EVENT_LOG_v1 {"time_micros": 1759848716813966, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813986) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3421706, prev total WAL file size 3421706, number of live WAL files 2.
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.814854) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3284KB)], [128(8122KB)]
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716814886, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11681275, "oldest_snapshot_seqno": -1}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7736 keys, 10006781 bytes, temperature: kUnknown
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716873631, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10006781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9956386, "index_size": 29930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 200978, "raw_average_key_size": 25, "raw_value_size": 9819570, "raw_average_value_size": 1269, "num_data_blocks": 1169, "num_entries": 7736, "num_filter_entries": 7736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.873853) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10006781 bytes
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.876748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.6 rd, 170.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8250, records dropped: 514 output_compression: NoCompression
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.876767) EVENT_LOG_v1 {"time_micros": 1759848716876758, "job": 78, "event": "compaction_finished", "compaction_time_micros": 58807, "compaction_time_cpu_micros": 24269, "output_level": 6, "num_output_files": 1, "total_output_size": 10006781, "num_input_records": 8250, "num_output_records": 7736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716877504, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716879292, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.814764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:51:56 compute-0 nova_compute[259550]: 2025-10-07 14:51:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:51:56 compute-0 nova_compute[259550]: 2025-10-07 14:51:56.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.144 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating config drive at /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.149 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92zjtbvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.299 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92zjtbvh" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.328 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.332 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.377 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.513 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.514 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deleting local config drive /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config because it was imported into RBD.
Oct 07 14:51:57 compute-0 kernel: tap5ad3e5c8-f8: entered promiscuous mode
Oct 07 14:51:57 compute-0 NetworkManager[44949]: <info>  [1759848717.5702] manager: (tap5ad3e5c8-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct 07 14:51:57 compute-0 ovn_controller[151684]: 2025-10-07T14:51:57Z|01565|binding|INFO|Claiming lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for this chassis.
Oct 07 14:51:57 compute-0 ovn_controller[151684]: 2025-10-07T14:51:57Z|01566|binding|INFO|5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a: Claiming fa:16:3e:a4:e9:5b 10.100.0.9
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:57 compute-0 ovn_controller[151684]: 2025-10-07T14:51:57Z|01567|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a ovn-installed in OVS
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:57 compute-0 nova_compute[259550]: 2025-10-07 14:51:57.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:57 compute-0 systemd-udevd[416661]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:51:57 compute-0 NetworkManager[44949]: <info>  [1759848717.6167] device (tap5ad3e5c8-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:51:57 compute-0 NetworkManager[44949]: <info>  [1759848717.6180] device (tap5ad3e5c8-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:51:57 compute-0 systemd-machined[214580]: New machine qemu-177-instance-0000008f.
Oct 07 14:51:57 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Oct 07 14:51:57 compute-0 ovn_controller[151684]: 2025-10-07T14:51:57Z|01568|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a up in Southbound
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.762 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:5b 10.100.0.9'], port_security=['fa:16:3e:a4:e9:5b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '89da30f4-cbe5-4f70-8597-b215d57427e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0572f71-e177-44ba-b973-868f94fd5edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '969793770e3f43a48c49fbd115a24ce2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '246aff8e-3934-45b5-b52a-548706ef8690 bc0fccda-f204-449b-a489-9db51a7ff6e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9fa489b-3f19-42ed-a2d8-cef4ea357478, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.763 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a in datapath d0572f71-e177-44ba-b973-868f94fd5edc bound to our chassis
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.765 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0572f71-e177-44ba-b973-868f94fd5edc
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.778 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30acecf5-2b0b-4ec1-a2b8-9f9de41925d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.779 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0572f71-e1 in ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.784 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0572f71-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e057764a-50c3-443d-9296-937c28148720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdd5891-6350-44ff-8c38-88f0fb9e5432]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ceph-mon[74295]: pgmap v2691: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.800 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1c24016b-d18f-44f9-9ef8-090044e462f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5358ed78-0356-47fb-89b1-e680af184fe4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.855 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa63b11-9511-4463-bbb1-dbfdd9a82a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee891db6-03a4-4458-9a14-29ae37776fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 NetworkManager[44949]: <info>  [1759848717.8622] manager: (tapd0572f71-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Oct 07 14:51:57 compute-0 systemd-udevd[416665]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.896 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[559acc35-3756-472e-ae84-a4b4ec036c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.899 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eca288f1-2a23-4617-b335-7181683837bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 NetworkManager[44949]: <info>  [1759848717.9293] device (tapd0572f71-e0): carrier: link connected
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.936 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d67d8f70-2b5c-4b96-8542-ed42e62710b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.958 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cae99da-1cfd-4521-b832-4eeb4b7d0094]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0572f71-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:76:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930149, 'reachable_time': 19606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416697, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[761f0934-4264-47ba-9770-63d782bd78d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:768d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 930149, 'tstamp': 930149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416698, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:57 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.993 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7883cbc9-52ff-4b17-8b76-cf4a7978254a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0572f71-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:76:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930149, 'reachable_time': 19606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416699, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eab527-beca-446b-90df-d2e9ed3e7a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd46ac78-c38a-4d13-8667-9d1ade8e99cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0572f71-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0572f71-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:58 compute-0 NetworkManager[44949]: <info>  [1759848718.1036] manager: (tapd0572f71-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Oct 07 14:51:58 compute-0 kernel: tapd0572f71-e0: entered promiscuous mode
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.105 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0572f71-e0, col_values=(('external_ids', {'iface-id': 'd9a91367-648e-44ca-a4fb-a9159c3c71a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:51:58 compute-0 ovn_controller[151684]: 2025-10-07T14:51:58Z|01569|binding|INFO|Releasing lport d9a91367-648e-44ca-a4fb-a9159c3c71a6 from this chassis (sb_readonly=0)
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.121 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.122 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51a0b905-cfa9-4bbd-8150-5779a096a8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.123 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d0572f71-e177-44ba-b973-868f94fd5edc
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d0572f71-e177-44ba-b973-868f94fd5edc
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:51:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.123 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'env', 'PROCESS_TAG=haproxy-d0572f71-e177-44ba-b973-868f94fd5edc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0572f71-e177-44ba-b973-868f94fd5edc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.335 2 DEBUG nova.compute.manager [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.335 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG nova.compute.manager [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Processing event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:51:58 compute-0 podman[416772]: 2025-10-07 14:51:58.553418568 +0000 UTC m=+0.099876703 container create a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 14:51:58 compute-0 podman[416772]: 2025-10-07 14:51:58.476753497 +0000 UTC m=+0.023211652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:51:58 compute-0 systemd[1]: Started libpod-conmon-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope.
Oct 07 14:51:58 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1bb2847302638c83cc4ed2dbe5103686edd071cff651ce463463f585fcfd30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:51:58 compute-0 podman[416772]: 2025-10-07 14:51:58.642914525 +0000 UTC m=+0.189372700 container init a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:51:58 compute-0 podman[416772]: 2025-10-07 14:51:58.651600641 +0000 UTC m=+0.198058776 container start a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 07 14:51:58 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : New worker (416793) forked
Oct 07 14:51:58 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : Loading success.
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.677 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.6778185, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.678 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Started (Lifecycle Event)
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.686 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.693 2 INFO nova.virt.libvirt.driver [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance spawned successfully.
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.693 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:51:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.893 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.897 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.969 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.971 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:58 compute-0 nova_compute[259550]: 2025-10-07 14:51:58.971 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.046 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.047 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.67813, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.047 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Paused (Lifecycle Event)
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.099 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.100 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.512 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.6808052, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.515 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Resumed (Lifecycle Event)
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.537 2 INFO nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 11.61 seconds to spawn the instance on the hypervisor.
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.538 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.575 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.578 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.622 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.641 2 INFO nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 13.71 seconds to build instance.
Oct 07 14:51:59 compute-0 nova_compute[259550]: 2025-10-07 14:51:59.692 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:51:59 compute-0 ceph-mon[74295]: pgmap v2692: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 07 14:52:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.087 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.088 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.465 2 DEBUG nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.466 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.466 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.467 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.467 2 DEBUG nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:52:00 compute-0 nova_compute[259550]: 2025-10-07 14:52:00.468 2 WARNING nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received unexpected event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with vm_state active and task_state None.
Oct 07 14:52:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 14:52:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:01 compute-0 nova_compute[259550]: 2025-10-07 14:52:01.377 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:01 compute-0 ceph-mon[74295]: pgmap v2693: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 14:52:02 compute-0 podman[416802]: 2025-10-07 14:52:02.072809707 +0000 UTC m=+0.060500149 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 07 14:52:02 compute-0 podman[416803]: 2025-10-07 14:52:02.098303613 +0000 UTC m=+0.083100716 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:52:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 62 op/s
Oct 07 14:52:02 compute-0 nova_compute[259550]: 2025-10-07 14:52:02.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:03 compute-0 nova_compute[259550]: 2025-10-07 14:52:03.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:03 compute-0 ceph-mon[74295]: pgmap v2694: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 62 op/s
Oct 07 14:52:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 07 14:52:05 compute-0 nova_compute[259550]: 2025-10-07 14:52:05.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:05 compute-0 nova_compute[259550]: 2025-10-07 14:52:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:05 compute-0 nova_compute[259550]: 2025-10-07 14:52:05.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:05 compute-0 nova_compute[259550]: 2025-10-07 14:52:05.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:52:06 compute-0 ceph-mon[74295]: pgmap v2695: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 07 14:52:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 07 14:52:07 compute-0 nova_compute[259550]: 2025-10-07 14:52:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.021 2 DEBUG nova.compute.manager [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG nova.compute.manager [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.023 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:52:08 compute-0 ceph-mon[74295]: pgmap v2696: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 07 14:52:08 compute-0 nova_compute[259550]: 2025-10-07 14:52:08.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.083 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.344 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.345 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:09 compute-0 nova_compute[259550]: 2025-10-07 14:52:09.426 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:10 compute-0 nova_compute[259550]: 2025-10-07 14:52:10.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:10 compute-0 ceph-mon[74295]: pgmap v2697: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 07 14:52:10 compute-0 nova_compute[259550]: 2025-10-07 14:52:10.496 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:52:10 compute-0 nova_compute[259550]: 2025-10-07 14:52:10.497 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 930 KiB/s wr, 77 op/s
Oct 07 14:52:10 compute-0 nova_compute[259550]: 2025-10-07 14:52:10.769 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:52:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:11 compute-0 ovn_controller[151684]: 2025-10-07T14:52:11Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:e9:5b 10.100.0.9
Oct 07 14:52:11 compute-0 ovn_controller[151684]: 2025-10-07T14:52:11Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:e9:5b 10.100.0.9
Oct 07 14:52:12 compute-0 ceph-mon[74295]: pgmap v2698: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 930 KiB/s wr, 77 op/s
Oct 07 14:52:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 930 KiB/s wr, 35 op/s
Oct 07 14:52:13 compute-0 nova_compute[259550]: 2025-10-07 14:52:13.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:52:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 41K writes, 167K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 14K syncs, 2.80 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5090 writes, 20K keys, 5090 commit groups, 1.0 writes per commit group, ingest: 24.09 MB, 0.04 MB/s
                                           Interval WAL: 5090 writes, 1904 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:52:14 compute-0 ceph-mon[74295]: pgmap v2699: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 930 KiB/s wr, 35 op/s
Oct 07 14:52:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 198 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 07 14:52:15 compute-0 nova_compute[259550]: 2025-10-07 14:52:15.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:15 compute-0 nova_compute[259550]: 2025-10-07 14:52:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.082 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.082 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:52:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:16 compute-0 ceph-mon[74295]: pgmap v2700: 305 pgs: 305 active+clean; 198 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 07 14:52:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:52:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413451029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.555 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.727 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.727 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.731 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.732 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:52:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.903 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3235MB free_disk=59.89751052856445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:16 compute-0 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b571b54-001b-4da1-8b91-2659b1fbaac6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 89da30f4-cbe5-4f70-8597-b215d57427e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.317 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:52:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3413451029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:52:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371424317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.886 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.893 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.918 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.944 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.944 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.945 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:17 compute-0 nova_compute[259550]: 2025-10-07 14:52:17.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:52:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:52:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4220 writes, 19K keys, 4220 commit groups, 1.0 writes per commit group, ingest: 23.74 MB, 0.04 MB/s
                                           Interval WAL: 4220 writes, 1544 syncs, 2.73 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:52:18 compute-0 nova_compute[259550]: 2025-10-07 14:52:18.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:18 compute-0 ceph-mon[74295]: pgmap v2701: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:52:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/371424317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:52:19 compute-0 nova_compute[259550]: 2025-10-07 14:52:19.962 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:19 compute-0 nova_compute[259550]: 2025-10-07 14:52:19.962 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:52:19 compute-0 nova_compute[259550]: 2025-10-07 14:52:19.963 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:52:20 compute-0 podman[416893]: 2025-10-07 14:52:20.074566916 +0000 UTC m=+0.058577943 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 07 14:52:20 compute-0 podman[416892]: 2025-10-07 14:52:20.109949326 +0000 UTC m=+0.096737139 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 14:52:20 compute-0 nova_compute[259550]: 2025-10-07 14:52:20.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:20 compute-0 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:52:20 compute-0 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:52:20 compute-0 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:52:20 compute-0 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:52:20 compute-0 ceph-mon[74295]: pgmap v2702: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:52:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:52:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:22 compute-0 nova_compute[259550]: 2025-10-07 14:52:22.199 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:22 compute-0 nova_compute[259550]: 2025-10-07 14:52:22.228 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:52:22 compute-0 nova_compute[259550]: 2025-10-07 14:52:22.228 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:52:22 compute-0 nova_compute[259550]: 2025-10-07 14:52:22.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 14:52:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 33K writes, 134K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.82 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2785 writes, 10K keys, 2785 commit groups, 1.0 writes per commit group, ingest: 11.72 MB, 0.02 MB/s
                                           Interval WAL: 2785 writes, 1097 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 14:52:22 compute-0 ceph-mon[74295]: pgmap v2703: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:52:22
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'vms']
Oct 07 14:52:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:52:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:52:23 compute-0 nova_compute[259550]: 2025-10-07 14:52:23.245 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:23 compute-0 nova_compute[259550]: 2025-10-07 14:52:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 14:52:24 compute-0 ceph-mon[74295]: pgmap v2704: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct 07 14:52:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct 07 14:52:25 compute-0 nova_compute[259550]: 2025-10-07 14:52:25.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:26.442 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:52:26 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:26.444 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:52:26 compute-0 nova_compute[259550]: 2025-10-07 14:52:26.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:26 compute-0 ceph-mon[74295]: pgmap v2705: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct 07 14:52:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 28 KiB/s wr, 6 op/s
Oct 07 14:52:27 compute-0 ceph-mon[74295]: pgmap v2706: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 28 KiB/s wr, 6 op/s
Oct 07 14:52:28 compute-0 nova_compute[259550]: 2025-10-07 14:52:28.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG nova.compute.manager [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG nova.compute.manager [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.291 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.291 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.358 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.358 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.361 2 INFO nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Terminating instance
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.362 2 DEBUG nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:52:29 compute-0 kernel: tap5ad3e5c8-f8 (unregistering): left promiscuous mode
Oct 07 14:52:29 compute-0 NetworkManager[44949]: <info>  [1759848749.4837] device (tap5ad3e5c8-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:52:29 compute-0 ovn_controller[151684]: 2025-10-07T14:52:29Z|01570|binding|INFO|Releasing lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a from this chassis (sb_readonly=0)
Oct 07 14:52:29 compute-0 ovn_controller[151684]: 2025-10-07T14:52:29Z|01571|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a down in Southbound
Oct 07 14:52:29 compute-0 ovn_controller[151684]: 2025-10-07T14:52:29Z|01572|binding|INFO|Removing iface tap5ad3e5c8-f8 ovn-installed in OVS
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.505 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:5b 10.100.0.9'], port_security=['fa:16:3e:a4:e9:5b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '89da30f4-cbe5-4f70-8597-b215d57427e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0572f71-e177-44ba-b973-868f94fd5edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '969793770e3f43a48c49fbd115a24ce2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '246aff8e-3934-45b5-b52a-548706ef8690 bc0fccda-f204-449b-a489-9db51a7ff6e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9fa489b-3f19-42ed-a2d8-cef4ea357478, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:52:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.506 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a in datapath d0572f71-e177-44ba-b973-868f94fd5edc unbound from our chassis
Oct 07 14:52:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.507 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0572f71-e177-44ba-b973-868f94fd5edc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:52:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c82d11cd-877d-43d5-9661-5773a588010b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:29 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.510 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc namespace which is not needed anymore
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:29 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 07 14:52:29 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 14.840s CPU time.
Oct 07 14:52:29 compute-0 systemd-machined[214580]: Machine qemu-177-instance-0000008f terminated.
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.598 2 INFO nova.virt.libvirt.driver [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance destroyed successfully.
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.600 2 DEBUG nova.objects.instance [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'resources' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.669 2 DEBUG nova.virt.libvirt.vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:51:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:51:59Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.671 2 DEBUG nova.network.os_vif_util [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.672 2 DEBUG nova.network.os_vif_util [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.673 2 DEBUG os_vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ad3e5c8-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:52:29 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : haproxy version is 2.8.14-c23fe91
Oct 07 14:52:29 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : path to executable is /usr/sbin/haproxy
Oct 07 14:52:29 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [WARNING]  (416791) : Exiting Master process...
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.682 2 INFO os_vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8')
Oct 07 14:52:29 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [ALERT]    (416791) : Current worker (416793) exited with code 143 (Terminated)
Oct 07 14:52:29 compute-0 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [WARNING]  (416791) : All workers exited. Exiting... (0)
Oct 07 14:52:29 compute-0 systemd[1]: libpod-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope: Deactivated successfully.
Oct 07 14:52:29 compute-0 podman[416965]: 2025-10-07 14:52:29.692856634 +0000 UTC m=+0.063461848 container died a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620-userdata-shm.mount: Deactivated successfully.
Oct 07 14:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1bb2847302638c83cc4ed2dbe5103686edd071cff651ce463463f585fcfd30-merged.mount: Deactivated successfully.
Oct 07 14:52:29 compute-0 podman[416965]: 2025-10-07 14:52:29.849350362 +0000 UTC m=+0.219955576 container cleanup a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:52:29 compute-0 systemd[1]: libpod-conmon-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope: Deactivated successfully.
Oct 07 14:52:29 compute-0 ceph-mon[74295]: pgmap v2707: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 07 14:52:29 compute-0 nova_compute[259550]: 2025-10-07 14:52:29.936 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:52:30 compute-0 podman[417015]: 2025-10-07 14:52:30.087742375 +0000 UTC m=+0.213366950 container remove a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d53713b1-e943-48d1-b264-0b4469d1ab96]: (4, ('Tue Oct  7 02:52:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc (a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620)\na7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620\nTue Oct  7 02:52:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc (a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620)\na7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[511d8bc8-285e-4b87-aea4-3336e7b35342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.096 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0572f71-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:52:30 compute-0 kernel: tapd0572f71-e0: left promiscuous mode
Oct 07 14:52:30 compute-0 nova_compute[259550]: 2025-10-07 14:52:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:30 compute-0 nova_compute[259550]: 2025-10-07 14:52:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6757687-6570-4173-99c4-6dd2755c2d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.152 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cba6f9e7-11ff-44c9-9e2c-9c81cd2506f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.155 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be6dd664-ed3a-441d-a1f7-03c7738a01bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d41de374-d22f-4c89-95fb-e51defefb6c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930141, 'reachable_time': 18977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417029, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.183 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:52:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.183 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6aa528-408e-4bee-bdef-92b2d444a032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0572f71\x2de177\x2d44ba\x2db973\x2d868f94fd5edc.mount: Deactivated successfully.
Oct 07 14:52:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.1 KiB/s wr, 1 op/s
Oct 07 14:52:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.335105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751335150, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 510, "num_deletes": 257, "total_data_size": 491971, "memory_usage": 501880, "flush_reason": "Manual Compaction"}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751343517, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 487662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56483, "largest_seqno": 56992, "table_properties": {"data_size": 484835, "index_size": 862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6463, "raw_average_key_size": 18, "raw_value_size": 479217, "raw_average_value_size": 1346, "num_data_blocks": 39, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848717, "oldest_key_time": 1759848717, "file_creation_time": 1759848751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 8477 microseconds, and 2275 cpu microseconds.
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.343574) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 487662 bytes OK
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.343603) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351837) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351888) EVENT_LOG_v1 {"time_micros": 1759848751351877, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351919) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 489008, prev total WAL file size 489008, number of live WAL files 2.
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.352523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(476KB)], [131(9772KB)]
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751352586, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10494443, "oldest_snapshot_seqno": -1}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7570 keys, 10391651 bytes, temperature: kUnknown
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751422885, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10391651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10341355, "index_size": 30279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198447, "raw_average_key_size": 26, "raw_value_size": 10206364, "raw_average_value_size": 1348, "num_data_blocks": 1181, "num_entries": 7570, "num_filter_entries": 7570, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.423273) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10391651 bytes
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.428358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.9 rd, 147.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.8) write-amplify(21.3) OK, records in: 8092, records dropped: 522 output_compression: NoCompression
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.428421) EVENT_LOG_v1 {"time_micros": 1759848751428406, "job": 80, "event": "compaction_finished", "compaction_time_micros": 70457, "compaction_time_cpu_micros": 27363, "output_level": 6, "num_output_files": 1, "total_output_size": 10391651, "num_input_records": 8092, "num_output_records": 7570, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751428710, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751430942, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.352413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.519 2 INFO nova.virt.libvirt.driver [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deleting instance files /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0_del
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.520 2 INFO nova.virt.libvirt.driver [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deletion of /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0_del complete
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.584 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.584 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.587 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.587 2 WARNING nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received unexpected event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with vm_state active and task_state deleting.
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.790 2 INFO nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 2.43 seconds to destroy the instance on the hypervisor.
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG oslo.service.loopingcall [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:52:31 compute-0 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG nova.network.neutron [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:52:32 compute-0 ceph-mon[74295]: pgmap v2708: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.1 KiB/s wr, 1 op/s
Oct 07 14:52:32 compute-0 nova_compute[259550]: 2025-10-07 14:52:32.679 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:52:32 compute-0 nova_compute[259550]: 2025-10-07 14:52:32.680 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:32 compute-0 nova_compute[259550]: 2025-10-07 14:52:32.708 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:52:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:52:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:52:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:52:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.1 KiB/s wr, 1 op/s
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:52:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:52:32 compute-0 nova_compute[259550]: 2025-10-07 14:52:32.940 2 DEBUG nova.network.neutron [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:32 compute-0 nova_compute[259550]: 2025-10-07 14:52:32.958 2 INFO nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 1.17 seconds to deallocate network for instance.
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.017 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.017 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.020 2 DEBUG nova.compute.manager [req-6342257f-f45e-4a31-bafa-ae8775dcea85 req-740aa6e3-4905-4643-8fb2-b073de7df6b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-deleted-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:33 compute-0 podman[417032]: 2025-10-07 14:52:33.075125234 +0000 UTC m=+0.061243116 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.094 2 DEBUG oslo_concurrency.processutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:52:33 compute-0 podman[417033]: 2025-10-07 14:52:33.112714338 +0000 UTC m=+0.092081069 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:52:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:52:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:52:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172131990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.595 2 DEBUG oslo_concurrency.processutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.602 2 DEBUG nova.compute.provider_tree [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.620 2 DEBUG nova.scheduler.client.report [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.654 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.679 2 INFO nova.scheduler.client.report [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Deleted allocations for instance 89da30f4-cbe5-4f70-8597-b215d57427e0
Oct 07 14:52:33 compute-0 nova_compute[259550]: 2025-10-07 14:52:33.762 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:34 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:34.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:52:34 compute-0 ceph-mon[74295]: pgmap v2709: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.1 KiB/s wr, 1 op/s
Oct 07 14:52:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3172131990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:34 compute-0 nova_compute[259550]: 2025-10-07 14:52:34.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 154 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Oct 07 14:52:35 compute-0 ceph-mon[74295]: pgmap v2710: 305 pgs: 305 active+clean; 154 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Oct 07 14:52:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Oct 07 14:52:38 compute-0 ceph-mon[74295]: pgmap v2711: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Oct 07 14:52:38 compute-0 nova_compute[259550]: 2025-10-07 14:52:38.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 07 14:52:39 compute-0 nova_compute[259550]: 2025-10-07 14:52:39.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:40 compute-0 ceph-mon[74295]: pgmap v2712: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 07 14:52:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 07 14:52:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:41 compute-0 ceph-mon[74295]: pgmap v2713: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 07 14:52:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 07 14:52:43 compute-0 nova_compute[259550]: 2025-10-07 14:52:43.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:44 compute-0 ovn_controller[151684]: 2025-10-07T14:52:44Z|01573|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct 07 14:52:44 compute-0 nova_compute[259550]: 2025-10-07 14:52:44.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:44 compute-0 ceph-mon[74295]: pgmap v2714: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 07 14:52:44 compute-0 nova_compute[259550]: 2025-10-07 14:52:44.597 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848749.5961843, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:52:44 compute-0 nova_compute[259550]: 2025-10-07 14:52:44.598 2 INFO nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Stopped (Lifecycle Event)
Oct 07 14:52:44 compute-0 nova_compute[259550]: 2025-10-07 14:52:44.634 2 DEBUG nova.compute.manager [None req-6ba61f49-eb71-4e9e-893d-44f269675c91 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:52:44 compute-0 nova_compute[259550]: 2025-10-07 14:52:44.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 07 14:52:46 compute-0 ceph-mon[74295]: pgmap v2715: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 07 14:52:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Oct 07 14:52:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.370 2 DEBUG nova.compute.manager [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.370 2 DEBUG nova.compute.manager [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.427 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.428 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.428 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.431 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.431 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.433 2 INFO nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Terminating instance
Oct 07 14:52:47 compute-0 nova_compute[259550]: 2025-10-07 14:52:47.434 2 DEBUG nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:52:47 compute-0 ceph-mon[74295]: pgmap v2716: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Oct 07 14:52:48 compute-0 kernel: tap144643c5-91 (unregistering): left promiscuous mode
Oct 07 14:52:48 compute-0 NetworkManager[44949]: <info>  [1759848768.0449] device (tap144643c5-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:52:48 compute-0 ovn_controller[151684]: 2025-10-07T14:52:48Z|01574|binding|INFO|Releasing lport 144643c5-91ec-4bd7-a646-3d64339b6691 from this chassis (sb_readonly=0)
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 ovn_controller[151684]: 2025-10-07T14:52:48Z|01575|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 down in Southbound
Oct 07 14:52:48 compute-0 ovn_controller[151684]: 2025-10-07T14:52:48Z|01576|binding|INFO|Removing iface tap144643c5-91 ovn-installed in OVS
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.071 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:52:85 10.100.0.7'], port_security=['fa:16:3e:24:52:85 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b571b54-001b-4da1-8b91-2659b1fbaac6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d6dba35-f647-4d4e-a538-87de70ead701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5665608a-5872-470d-ac7c-b6ba91a59eac e6cb9e17-5a87-48e7-beaf-ada7a8c060a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=459b9ec0-d62a-47af-8167-94e59b8e15bc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=144643c5-91ec-4bd7-a646-3d64339b6691) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.073 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 144643c5-91ec-4bd7-a646-3d64339b6691 in datapath 7d6dba35-f647-4d4e-a538-87de70ead701 unbound from our chassis
Oct 07 14:52:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.075 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d6dba35-f647-4d4e-a538-87de70ead701, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:52:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.076 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc8b623-c2bc-4847-b245-e6f23ee63509]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.077 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 namespace which is not needed anymore
Oct 07 14:52:48 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 07 14:52:48 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 16.856s CPU time.
Oct 07 14:52:48 compute-0 systemd-machined[214580]: Machine qemu-176-instance-0000008e terminated.
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.274 2 INFO nova.virt.libvirt.driver [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance destroyed successfully.
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.275 2 DEBUG nova.objects.instance [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.294 2 DEBUG nova.virt.libvirt.vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:51:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:51:02Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.295 2 DEBUG nova.network.os_vif_util [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.296 2 DEBUG nova.network.os_vif_util [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.296 2 DEBUG os_vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap144643c5-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.304 2 INFO os_vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91')
Oct 07 14:52:48 compute-0 nova_compute[259550]: 2025-10-07 14:52:48.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:48 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : haproxy version is 2.8.14-c23fe91
Oct 07 14:52:48 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : path to executable is /usr/sbin/haproxy
Oct 07 14:52:48 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [WARNING]  (415290) : Exiting Master process...
Oct 07 14:52:48 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [ALERT]    (415290) : Current worker (415292) exited with code 143 (Terminated)
Oct 07 14:52:48 compute-0 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [WARNING]  (415290) : All workers exited. Exiting... (0)
Oct 07 14:52:48 compute-0 systemd[1]: libpod-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope: Deactivated successfully.
Oct 07 14:52:48 compute-0 podman[417123]: 2025-10-07 14:52:48.45866551 +0000 UTC m=+0.293507084 container died 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:52:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 14:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b-userdata-shm.mount: Deactivated successfully.
Oct 07 14:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c990b5ffee34bb0ee8d8fa81835ad0deb9249f2ad983045d67c79c75ff5af772-merged.mount: Deactivated successfully.
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.569 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.569 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:52:49 compute-0 nova_compute[259550]: 2025-10-07 14:52:49.572 2 WARNING nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received unexpected event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with vm_state active and task_state deleting.
Oct 07 14:52:49 compute-0 ceph-mon[74295]: pgmap v2717: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 14:52:49 compute-0 podman[417123]: 2025-10-07 14:52:49.889803149 +0000 UTC m=+1.724644743 container cleanup 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:52:49 compute-0 systemd[1]: libpod-conmon-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope: Deactivated successfully.
Oct 07 14:52:50 compute-0 podman[417182]: 2025-10-07 14:52:50.197687114 +0000 UTC m=+0.282429121 container remove 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f54a677-92b7-4d16-93f2-45686c6ccec8]: (4, ('Tue Oct  7 02:52:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 (9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b)\n9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b\nTue Oct  7 02:52:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 (9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b)\n9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.206 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e89be9b-c580-4bad-8bd5-1315e130bb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.207 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d6dba35-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:52:50 compute-0 nova_compute[259550]: 2025-10-07 14:52:50.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:50 compute-0 kernel: tap7d6dba35-f0: left promiscuous mode
Oct 07 14:52:50 compute-0 nova_compute[259550]: 2025-10-07 14:52:50.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.225 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e39cbe-50d3-47ba-ba11-63b3bdaa3447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.258 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bde41a-36fa-4d12-babc-7b07da383077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.260 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecb8acf-366b-4f2b-a0dc-20355d098fff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2a6604-4253-4456-85c2-68df8442f21d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924522, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417220, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.278 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:52:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.278 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c80e73d8-14ef-4255-97f2-f2f69e08de9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:52:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d7d6dba35\x2df647\x2d4d4e\x2da538\x2d87de70ead701.mount: Deactivated successfully.
Oct 07 14:52:50 compute-0 podman[417196]: 2025-10-07 14:52:50.305814162 +0000 UTC m=+0.062329171 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 14:52:50 compute-0 podman[417195]: 2025-10-07 14:52:50.310023542 +0000 UTC m=+0.067814291 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:52:50 compute-0 nova_compute[259550]: 2025-10-07 14:52:50.401 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:52:50 compute-0 nova_compute[259550]: 2025-10-07 14:52:50.402 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:50 compute-0 nova_compute[259550]: 2025-10-07 14:52:50.431 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:52:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct 07 14:52:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:52 compute-0 ceph-mon[74295]: pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:52:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct 07 14:52:52 compute-0 sudo[417239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:52 compute-0 sudo[417239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:52 compute-0 sudo[417239]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:52 compute-0 sudo[417264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:52:52 compute-0 sudo[417264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:52 compute-0 sudo[417264]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:52 compute-0 nova_compute[259550]: 2025-10-07 14:52:52.992 2 INFO nova.virt.libvirt.driver [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deleting instance files /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6_del
Oct 07 14:52:52 compute-0 nova_compute[259550]: 2025-10-07 14:52:52.993 2 INFO nova.virt.libvirt.driver [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deletion of /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6_del complete
Oct 07 14:52:53 compute-0 sudo[417289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:53 compute-0 sudo[417289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:53 compute-0 sudo[417289]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.100 2 INFO nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 5.67 seconds to destroy the instance on the hypervisor.
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG oslo.service.loopingcall [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG nova.network.neutron [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:52:53 compute-0 sudo[417314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:52:53 compute-0 sudo[417314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:53 compute-0 nova_compute[259550]: 2025-10-07 14:52:53.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:53 compute-0 sudo[417314]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:52:53 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 20dfece1-da1b-4b5d-ad94-19368b28688e does not exist
Oct 07 14:52:53 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 728a550a-dc26-4184-96fe-0fc44982ea0e does not exist
Oct 07 14:52:53 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 11b0b58d-2bcc-465e-b0f2-a45d61ee376f does not exist
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:52:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:52:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:52:53 compute-0 sudo[417372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:53 compute-0 sudo[417372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:53 compute-0 sudo[417372]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:53 compute-0 sudo[417397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:52:53 compute-0 sudo[417397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:53 compute-0 sudo[417397]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:53 compute-0 sudo[417422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:53 compute-0 sudo[417422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:53 compute-0 sudo[417422]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:54 compute-0 sudo[417447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:52:54 compute-0 sudo[417447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:54 compute-0 ceph-mon[74295]: pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:52:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.472885887 +0000 UTC m=+0.101063551 container create d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.402004784 +0000 UTC m=+0.030182468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:54 compute-0 systemd[1]: Started libpod-conmon-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope.
Oct 07 14:52:54 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.702833901 +0000 UTC m=+0.331011615 container init d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.713396341 +0000 UTC m=+0.341574005 container start d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.722350915 +0000 UTC m=+0.350528599 container attach d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:52:54 compute-0 gallant_liskov[417528]: 167 167
Oct 07 14:52:54 compute-0 systemd[1]: libpod-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope: Deactivated successfully.
Oct 07 14:52:54 compute-0 conmon[417528]: conmon d044e40c74198f23a3e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope/container/memory.events
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.72471257 +0000 UTC m=+0.352890244 container died d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bb9ab64c6327eae4b24f894f32cbab2fbf2f45303ca0ff98a302e57fd411d58-merged.mount: Deactivated successfully.
Oct 07 14:52:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 61 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 511 B/s wr, 19 op/s
Oct 07 14:52:54 compute-0 podman[417512]: 2025-10-07 14:52:54.766337629 +0000 UTC m=+0.394515303 container remove d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:52:54 compute-0 systemd[1]: libpod-conmon-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope: Deactivated successfully.
Oct 07 14:52:54 compute-0 podman[417551]: 2025-10-07 14:52:54.94777807 +0000 UTC m=+0.052245033 container create 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:54 compute-0 systemd[1]: Started libpod-conmon-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope.
Oct 07 14:52:55 compute-0 podman[417551]: 2025-10-07 14:52:54.923885692 +0000 UTC m=+0.028352685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:55 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:55 compute-0 podman[417551]: 2025-10-07 14:52:55.044180529 +0000 UTC m=+0.148647522 container init 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 14:52:55 compute-0 podman[417551]: 2025-10-07 14:52:55.050904199 +0000 UTC m=+0.155371152 container start 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:52:55 compute-0 podman[417551]: 2025-10-07 14:52:55.054186068 +0000 UTC m=+0.158653031 container attach 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.274 2 DEBUG nova.network.neutron [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.309 2 INFO nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 2.21 seconds to deallocate network for instance.
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.394 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.394 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.396 2 DEBUG nova.compute.manager [req-e983b98f-a03e-4bee-9a75-1c7e7543272c req-59b8671b-a94c-4e16-ac40-4d1959cde7b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-deleted-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.451 2 DEBUG oslo_concurrency.processutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:52:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:52:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705558032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.974 2 DEBUG oslo_concurrency.processutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:52:55 compute-0 nova_compute[259550]: 2025-10-07 14:52:55.982 2 DEBUG nova.compute.provider_tree [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:52:56 compute-0 nova_compute[259550]: 2025-10-07 14:52:56.000 2 DEBUG nova.scheduler.client.report [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:52:56 compute-0 nova_compute[259550]: 2025-10-07 14:52:56.021 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:56 compute-0 nova_compute[259550]: 2025-10-07 14:52:56.066 2 INFO nova.scheduler.client.report [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 9b571b54-001b-4da1-8b91-2659b1fbaac6
Oct 07 14:52:56 compute-0 nova_compute[259550]: 2025-10-07 14:52:56.152 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:52:56 compute-0 priceless_aryabhata[417568]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:52:56 compute-0 priceless_aryabhata[417568]: --> relative data size: 1.0
Oct 07 14:52:56 compute-0 priceless_aryabhata[417568]: --> All data devices are unavailable
Oct 07 14:52:56 compute-0 systemd[1]: libpod-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Deactivated successfully.
Oct 07 14:52:56 compute-0 podman[417551]: 2025-10-07 14:52:56.244446514 +0000 UTC m=+1.348913487 container died 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:52:56 compute-0 systemd[1]: libpod-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Consumed 1.133s CPU time.
Oct 07 14:52:56 compute-0 ceph-mon[74295]: pgmap v2720: 305 pgs: 305 active+clean; 61 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 511 B/s wr, 19 op/s
Oct 07 14:52:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3705558032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54-merged.mount: Deactivated successfully.
Oct 07 14:52:56 compute-0 podman[417551]: 2025-10-07 14:52:56.525922471 +0000 UTC m=+1.630389434 container remove 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:52:56 compute-0 systemd[1]: libpod-conmon-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Deactivated successfully.
Oct 07 14:52:56 compute-0 sudo[417447]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:56 compute-0 sudo[417632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:56 compute-0 sudo[417632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:56 compute-0 sudo[417632]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:56 compute-0 sudo[417657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:52:56 compute-0 sudo[417657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:56 compute-0 sudo[417657]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:52:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:52:56 compute-0 sudo[417682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:56 compute-0 sudo[417682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:56 compute-0 sudo[417682]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:56 compute-0 sudo[417707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:52:56 compute-0 sudo[417707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.173147536 +0000 UTC m=+0.046323851 container create 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:52:57 compute-0 systemd[1]: Started libpod-conmon-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope.
Oct 07 14:52:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.152807112 +0000 UTC m=+0.025983407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.268326167 +0000 UTC m=+0.141502462 container init 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.276728347 +0000 UTC m=+0.149904652 container start 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:57 compute-0 peaceful_joliot[417789]: 167 167
Oct 07 14:52:57 compute-0 systemd[1]: libpod-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope: Deactivated successfully.
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.290149456 +0000 UTC m=+0.163325771 container attach 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.291667982 +0000 UTC m=+0.164844277 container died 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-551965e319eb2bb7eae6de6ddd54c5de3843457b5bada0b088bd6a6502cd73d4-merged.mount: Deactivated successfully.
Oct 07 14:52:57 compute-0 podman[417773]: 2025-10-07 14:52:57.399720739 +0000 UTC m=+0.272897024 container remove 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:52:57 compute-0 systemd[1]: libpod-conmon-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope: Deactivated successfully.
Oct 07 14:52:57 compute-0 podman[417815]: 2025-10-07 14:52:57.558677905 +0000 UTC m=+0.036969079 container create 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:52:57 compute-0 systemd[1]: Started libpod-conmon-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope.
Oct 07 14:52:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:57 compute-0 podman[417815]: 2025-10-07 14:52:57.543180267 +0000 UTC m=+0.021471461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:57 compute-0 podman[417815]: 2025-10-07 14:52:57.658173508 +0000 UTC m=+0.136464692 container init 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:52:57 compute-0 podman[417815]: 2025-10-07 14:52:57.665021621 +0000 UTC m=+0.143312795 container start 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:52:57 compute-0 podman[417815]: 2025-10-07 14:52:57.66874498 +0000 UTC m=+0.147036164 container attach 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:52:58 compute-0 nova_compute[259550]: 2025-10-07 14:52:58.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:58 compute-0 nova_compute[259550]: 2025-10-07 14:52:58.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:52:58 compute-0 ceph-mon[74295]: pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]: {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     "0": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "devices": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "/dev/loop3"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             ],
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_name": "ceph_lv0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_size": "21470642176",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "name": "ceph_lv0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "tags": {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_name": "ceph",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.crush_device_class": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.encrypted": "0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_id": "0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.vdo": "0"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             },
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "vg_name": "ceph_vg0"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         }
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     ],
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     "1": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "devices": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "/dev/loop4"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             ],
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_name": "ceph_lv1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_size": "21470642176",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "name": "ceph_lv1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "tags": {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_name": "ceph",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.crush_device_class": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.encrypted": "0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_id": "1",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.vdo": "0"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             },
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "vg_name": "ceph_vg1"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         }
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     ],
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     "2": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "devices": [
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "/dev/loop5"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             ],
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_name": "ceph_lv2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_size": "21470642176",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "name": "ceph_lv2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "tags": {
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.cluster_name": "ceph",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.crush_device_class": "",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.encrypted": "0",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osd_id": "2",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:                 "ceph.vdo": "0"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             },
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "type": "block",
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:             "vg_name": "ceph_vg2"
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:         }
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]:     ]
Oct 07 14:52:58 compute-0 optimistic_lichterman[417830]: }
Oct 07 14:52:58 compute-0 systemd[1]: libpod-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope: Deactivated successfully.
Oct 07 14:52:58 compute-0 podman[417815]: 2025-10-07 14:52:58.576846823 +0000 UTC m=+1.055138037 container died 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad-merged.mount: Deactivated successfully.
Oct 07 14:52:58 compute-0 podman[417815]: 2025-10-07 14:52:58.652831508 +0000 UTC m=+1.131122682 container remove 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:52:58 compute-0 systemd[1]: libpod-conmon-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope: Deactivated successfully.
Oct 07 14:52:58 compute-0 sudo[417707]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:58 compute-0 sudo[417850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:58 compute-0 sudo[417850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:58 compute-0 sudo[417850]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:52:58 compute-0 sudo[417875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:52:58 compute-0 sudo[417875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:58 compute-0 sudo[417875]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:58 compute-0 sudo[417900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:52:58 compute-0 sudo[417900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:58 compute-0 sudo[417900]: pam_unix(sudo:session): session closed for user root
Oct 07 14:52:58 compute-0 sudo[417925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:52:58 compute-0 sudo[417925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.273912272 +0000 UTC m=+0.057628690 container create df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:59 compute-0 systemd[1]: Started libpod-conmon-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope.
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.240675504 +0000 UTC m=+0.024391962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.374591074 +0000 UTC m=+0.158307502 container init df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.383434335 +0000 UTC m=+0.167150753 container start df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:52:59 compute-0 youthful_meninsky[418004]: 167 167
Oct 07 14:52:59 compute-0 systemd[1]: libpod-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope: Deactivated successfully.
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.395091291 +0000 UTC m=+0.178807739 container attach df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.396418283 +0000 UTC m=+0.180134711 container died df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 14:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee07a0a313637955428a4c59c1d559b99f989d9a9ad07865dc0445082d4e64f-merged.mount: Deactivated successfully.
Oct 07 14:52:59 compute-0 podman[417988]: 2025-10-07 14:52:59.439690371 +0000 UTC m=+0.223406789 container remove df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:52:59 compute-0 systemd[1]: libpod-conmon-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope: Deactivated successfully.
Oct 07 14:52:59 compute-0 podman[418028]: 2025-10-07 14:52:59.585654489 +0000 UTC m=+0.035750631 container create 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:52:59 compute-0 systemd[1]: Started libpod-conmon-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope.
Oct 07 14:52:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:52:59 compute-0 podman[418028]: 2025-10-07 14:52:59.65979783 +0000 UTC m=+0.109893982 container init 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:52:59 compute-0 podman[418028]: 2025-10-07 14:52:59.570330724 +0000 UTC m=+0.020426886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:52:59 compute-0 podman[418028]: 2025-10-07 14:52:59.667041872 +0000 UTC m=+0.117138014 container start 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:52:59 compute-0 podman[418028]: 2025-10-07 14:52:59.670482584 +0000 UTC m=+0.120578736 container attach 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:53:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.089 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:00 compute-0 nova_compute[259550]: 2025-10-07 14:53:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:00 compute-0 nova_compute[259550]: 2025-10-07 14:53:00.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:00 compute-0 ceph-mon[74295]: pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]: {
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_id": 2,
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "type": "bluestore"
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     },
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_id": 1,
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "type": "bluestore"
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     },
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_id": 0,
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:         "type": "bluestore"
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]:     }
Oct 07 14:53:00 compute-0 pensive_dewdney[418044]: }
Oct 07 14:53:00 compute-0 systemd[1]: libpod-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Deactivated successfully.
Oct 07 14:53:00 compute-0 systemd[1]: libpod-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Consumed 1.044s CPU time.
Oct 07 14:53:00 compute-0 podman[418028]: 2025-10-07 14:53:00.710844409 +0000 UTC m=+1.160940551 container died 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:53:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564-merged.mount: Deactivated successfully.
Oct 07 14:53:00 compute-0 podman[418028]: 2025-10-07 14:53:00.836406442 +0000 UTC m=+1.286502584 container remove 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:53:00 compute-0 systemd[1]: libpod-conmon-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Deactivated successfully.
Oct 07 14:53:00 compute-0 sudo[417925]: pam_unix(sudo:session): session closed for user root
Oct 07 14:53:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:53:00 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:53:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:53:00 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:53:00 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ed5d41c6-30e5-40e1-9e5a-357ed82c489a does not exist
Oct 07 14:53:00 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0197eda3-e98c-4439-802b-6d05cf2f4c78 does not exist
Oct 07 14:53:01 compute-0 sudo[418092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:53:01 compute-0 sudo[418092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:53:01 compute-0 sudo[418092]: pam_unix(sudo:session): session closed for user root
Oct 07 14:53:01 compute-0 sudo[418117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:53:01 compute-0 sudo[418117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:53:01 compute-0 sudo[418117]: pam_unix(sudo:session): session closed for user root
Oct 07 14:53:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:01 compute-0 ceph-mon[74295]: pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:53:01 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:53:01 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:53:02 compute-0 nova_compute[259550]: 2025-10-07 14:53:02.012 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.273 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848768.2726712, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.274 2 INFO nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Stopped (Lifecycle Event)
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.446 2 DEBUG nova.compute.manager [None req-202c3388-2b9d-43ae-b9d8-b14cde7f7dab - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:53:03 compute-0 nova_compute[259550]: 2025-10-07 14:53:03.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:04 compute-0 ceph-mon[74295]: pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 07 14:53:04 compute-0 podman[418142]: 2025-10-07 14:53:04.094343678 +0000 UTC m=+0.079504800 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:53:04 compute-0 podman[418143]: 2025-10-07 14:53:04.108961796 +0000 UTC m=+0.094531708 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:53:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 07 14:53:05 compute-0 nova_compute[259550]: 2025-10-07 14:53:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:06 compute-0 ceph-mon[74295]: pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 07 14:53:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct 07 14:53:07 compute-0 nova_compute[259550]: 2025-10-07 14:53:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:07 compute-0 nova_compute[259550]: 2025-10-07 14:53:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:07 compute-0 nova_compute[259550]: 2025-10-07 14:53:07.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:53:08 compute-0 ceph-mon[74295]: pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct 07 14:53:08 compute-0 nova_compute[259550]: 2025-10-07 14:53:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:08 compute-0 nova_compute[259550]: 2025-10-07 14:53:08.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:08 compute-0 nova_compute[259550]: 2025-10-07 14:53:08.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:10 compute-0 ceph-mon[74295]: pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:12 compute-0 ceph-mon[74295]: pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:13 compute-0 nova_compute[259550]: 2025-10-07 14:53:13.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:13 compute-0 nova_compute[259550]: 2025-10-07 14:53:13.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:14 compute-0 ceph-mon[74295]: pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.790 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.791 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.836 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.971 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.972 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.983 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:53:14 compute-0 nova_compute[259550]: 2025-10-07 14:53:14.984 2 INFO nova.compute.claims [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.264 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:53:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864059452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.711 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.718 2 DEBUG nova.compute.provider_tree [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.758 2 DEBUG nova.scheduler.client.report [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.787 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.788 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.871 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.873 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.899 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.918 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:53:15 compute-0 nova_compute[259550]: 2025-10-07 14:53:15.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.035 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.085 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.087 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.087 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating image(s)
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.112 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.135 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.162 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.167 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.234 2 DEBUG nova.policy [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.248 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.249 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.250 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.250 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.276 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.280 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5360ad21-c174-468a-9be2-d82d672f2911_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:53:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455657705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.531 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:16 compute-0 ceph-mon[74295]: pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1864059452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2455657705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.708 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3635MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5360ad21-c174-468a-9be2-d82d672f2911 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:53:16 compute-0 nova_compute[259550]: 2025-10-07 14:53:16.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.004 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5360ad21-c174-468a-9be2-d82d672f2911_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.066 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:53:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:53:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142625864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.273 2 DEBUG nova.objects.instance [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.288 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.289 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Ensure instance console log exists: /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.289 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.290 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.290 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.291 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.297 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.314 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.339 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:53:17 compute-0 nova_compute[259550]: 2025-10-07 14:53:17.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1142625864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:18 compute-0 nova_compute[259550]: 2025-10-07 14:53:18.242 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Successfully created port: 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:53:18 compute-0 nova_compute[259550]: 2025-10-07 14:53:18.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:18 compute-0 nova_compute[259550]: 2025-10-07 14:53:18.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:18 compute-0 ceph-mon[74295]: pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:53:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 72 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.359 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Successfully updated port: 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG nova.compute.manager [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG nova.compute.manager [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:53:19 compute-0 nova_compute[259550]: 2025-10-07 14:53:19.575 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:53:19 compute-0 ceph-mon[74295]: pgmap v2732: 305 pgs: 305 active+clean; 72 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Oct 07 14:53:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:21 compute-0 podman[418422]: 2025-10-07 14:53:21.079753868 +0000 UTC m=+0.061973573 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Oct 07 14:53:21 compute-0 podman[418423]: 2025-10-07 14:53:21.102201121 +0000 UTC m=+0.082335426 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:53:21 compute-0 nova_compute[259550]: 2025-10-07 14:53:21.342 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:21 compute-0 nova_compute[259550]: 2025-10-07 14:53:21.342 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:53:21 compute-0 nova_compute[259550]: 2025-10-07 14:53:21.503 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:53:21 compute-0 nova_compute[259550]: 2025-10-07 14:53:21.504 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:53:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:21 compute-0 ceph-mon[74295]: pgmap v2733: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.215 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.409 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.410 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance network_info: |[{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.411 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.411 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.419 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start _get_guest_xml network_info=[{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.424 2 WARNING nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.430 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.430 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.434 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.441 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:53:22
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:53:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:53:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403038772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.929 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.970 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:22 compute-0 nova_compute[259550]: 2025-10-07 14:53:22.975 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3403038772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:53:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:53:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891966138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.439 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.440 2 DEBUG nova.virt.libvirt.vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:15Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.441 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.442 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.443 2 DEBUG nova.objects.instance [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.468 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <uuid>5360ad21-c174-468a-9be2-d82d672f2911</uuid>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <name>instance-00000090</name>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091</nova:name>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:53:22</nova:creationTime>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <nova:port uuid="78d82e05-f97f-4cd4-aed0-b65c80e18ec5">
Oct 07 14:53:23 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <system>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="serial">5360ad21-c174-468a-9be2-d82d672f2911</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="uuid">5360ad21-c174-468a-9be2-d82d672f2911</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </system>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <os>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </os>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <features>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </features>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5360ad21-c174-468a-9be2-d82d672f2911_disk">
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </source>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5360ad21-c174-468a-9be2-d82d672f2911_disk.config">
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </source>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:53:23 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:7f:b1:b3"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <target dev="tap78d82e05-f9"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/console.log" append="off"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <video>
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </video>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:53:23 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:53:23 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:53:23 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:53:23 compute-0 nova_compute[259550]: </domain>
Oct 07 14:53:23 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.469 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Preparing to wait for external event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.469 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.470 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.470 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.471 2 DEBUG nova.virt.libvirt.vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:15Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.471 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG os_vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78d82e05-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78d82e05-f9, col_values=(('external_ids', {'iface-id': '78d82e05-f97f-4cd4-aed0-b65c80e18ec5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:b1:b3', 'vm-uuid': '5360ad21-c174-468a-9be2-d82d672f2911'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 NetworkManager[44949]: <info>  [1759848803.4813] manager: (tap78d82e05-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.489 2 INFO os_vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9')
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.541 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.541 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.542 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:7f:b1:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.542 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Using config drive
Oct 07 14:53:23 compute-0 nova_compute[259550]: 2025-10-07 14:53:23.566 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:24 compute-0 ceph-mon[74295]: pgmap v2734: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/891966138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.190 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating config drive at /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.196 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0e626y2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.350 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0e626y2h" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.378 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.382 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config 5360ad21-c174-468a-9be2-d82d672f2911_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.427 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.427 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.443 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.558 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config 5360ad21-c174-468a-9be2-d82d672f2911_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.558 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deleting local config drive /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config because it was imported into RBD.
Oct 07 14:53:24 compute-0 kernel: tap78d82e05-f9: entered promiscuous mode
Oct 07 14:53:24 compute-0 NetworkManager[44949]: <info>  [1759848804.6051] manager: (tap78d82e05-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:24 compute-0 ovn_controller[151684]: 2025-10-07T14:53:24Z|01577|binding|INFO|Claiming lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for this chassis.
Oct 07 14:53:24 compute-0 ovn_controller[151684]: 2025-10-07T14:53:24Z|01578|binding|INFO|78d82e05-f97f-4cd4-aed0-b65c80e18ec5: Claiming fa:16:3e:7f:b1:b3 10.100.0.6
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:b1:b3 10.100.0.6'], port_security=['fa:16:3e:7f:b1:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5360ad21-c174-468a-9be2-d82d672f2911', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d1e85a4-dffc-45c3-a3d8-ec6178189662 fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78d82e05-f97f-4cd4-aed0-b65c80e18ec5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.623 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 bound to our chassis
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct 07 14:53:24 compute-0 systemd-udevd[418595]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43e753f2-df80-4f68-acce-6f5c67bf71d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.641 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7b9d978-a1 in ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.643 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7b9d978-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff57bad-1245-49fe-9988-45a89a2cdd65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90847988-6af7-490c-8f8d-e1dcf2db716a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 systemd-machined[214580]: New machine qemu-178-instance-00000090.
Oct 07 14:53:24 compute-0 NetworkManager[44949]: <info>  [1759848804.6534] device (tap78d82e05-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:53:24 compute-0 NetworkManager[44949]: <info>  [1759848804.6551] device (tap78d82e05-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.661 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8165ea-c041-4d2c-8ddb-4390a8d07c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:24 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Oct 07 14:53:24 compute-0 ovn_controller[151684]: 2025-10-07T14:53:24Z|01579|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 ovn-installed in OVS
Oct 07 14:53:24 compute-0 ovn_controller[151684]: 2025-10-07T14:53:24Z|01580|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 up in Southbound
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7968a7-1973-4044-b333-a6775cacfc66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.719 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebdaef3-cfd0-4268-a2f2-5e4be6400c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e92d98-8c63-4dc5-a8a7-48441ec0f62b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 NetworkManager[44949]: <info>  [1759848804.7258] manager: (tapb7b9d978-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/635)
Oct 07 14:53:24 compute-0 systemd-udevd[418599]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.760 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a28d9d99-a9c7-4014-aa3a-1f0d3f71f508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.766 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71935adc-cd62-40cc-a40d-a36ba8a7dfe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:24 compute-0 NetworkManager[44949]: <info>  [1759848804.7916] device (tapb7b9d978-a0): carrier: link connected
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b3febb33-764f-475f-9904-0b26e1ac2c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df25c65b-5dae-4285-96df-00f6547872d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 42170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418628, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95e1b576-cc48-4479-bb1f-9d71fdbfce15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:471e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938835, 'tstamp': 938835}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418629, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f8ec7-73cf-4e3a-acbd-b8da7330289f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 42170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418630, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9904787b-2056-49cc-8445-e0296a7ba0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.931 2 DEBUG nova.compute.manager [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:24 compute-0 nova_compute[259550]: 2025-10-07 14:53:24.933 2 DEBUG nova.compute.manager [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Processing event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.970 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a328995-f97a-4369-b041-6b8536d9e5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:53:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:25 compute-0 NetworkManager[44949]: <info>  [1759848805.0686] manager: (tapb7b9d978-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Oct 07 14:53:25 compute-0 kernel: tapb7b9d978-a0: entered promiscuous mode
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:25 compute-0 ovn_controller[151684]: 2025-10-07T14:53:25Z|01581|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.086 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.087 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb8872f-858e-41f2-8aa1-db0165f8455a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.087 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:53:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.088 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'env', 'PROCESS_TAG=haproxy-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:53:25 compute-0 podman[418703]: 2025-10-07 14:53:25.476356067 +0000 UTC m=+0.063907289 container create b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 07 14:53:25 compute-0 systemd[1]: Started libpod-conmon-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope.
Oct 07 14:53:25 compute-0 podman[418703]: 2025-10-07 14:53:25.439328447 +0000 UTC m=+0.026879689 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:53:25 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5dd1aff1b8cd3c8452ea0fb4c0158258e892a5ae20c9c27856c33d2c1bdf8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:53:25 compute-0 podman[418703]: 2025-10-07 14:53:25.580626694 +0000 UTC m=+0.168177916 container init b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:53:25 compute-0 podman[418703]: 2025-10-07 14:53:25.588807729 +0000 UTC m=+0.176358951 container start b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:53:25 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : New worker (418725) forked
Oct 07 14:53:25 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : Loading success.
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.918 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.9184005, 5360ad21-c174-468a-9be2-d82d672f2911 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.919 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Started (Lifecycle Event)
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.921 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.926 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.930 2 INFO nova.virt.libvirt.driver [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance spawned successfully.
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.930 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.948 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.956 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.960 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.960 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.997 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.998 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.921185, 5360ad21-c174-468a-9be2-d82d672f2911 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:53:25 compute-0 nova_compute[259550]: 2025-10-07 14:53:25.998 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Paused (Lifecycle Event)
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.033 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.036 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.9243731, 5360ad21-c174-468a-9be2-d82d672f2911 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.037 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Resumed (Lifecycle Event)
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.040 2 INFO nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 9.95 seconds to spawn the instance on the hypervisor.
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.041 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.053 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.056 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:53:26 compute-0 ceph-mon[74295]: pgmap v2735: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.115 2 INFO nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 11.18 seconds to build instance.
Oct 07 14:53:26 compute-0 nova_compute[259550]: 2025-10-07 14:53:26.132 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.021 2 DEBUG nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.021 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:53:27 compute-0 nova_compute[259550]: 2025-10-07 14:53:27.023 2 WARNING nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received unexpected event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with vm_state active and task_state None.
Oct 07 14:53:28 compute-0 ceph-mon[74295]: pgmap v2736: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 07 14:53:28 compute-0 nova_compute[259550]: 2025-10-07 14:53:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:28 compute-0 nova_compute[259550]: 2025-10-07 14:53:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 07 14:53:29 compute-0 NetworkManager[44949]: <info>  [1759848809.5035] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct 07 14:53:29 compute-0 NetworkManager[44949]: <info>  [1759848809.5045] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Oct 07 14:53:29 compute-0 nova_compute[259550]: 2025-10-07 14:53:29.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:29 compute-0 ovn_controller[151684]: 2025-10-07T14:53:29Z|01582|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct 07 14:53:29 compute-0 nova_compute[259550]: 2025-10-07 14:53:29.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:29 compute-0 ovn_controller[151684]: 2025-10-07T14:53:29Z|01583|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct 07 14:53:29 compute-0 nova_compute[259550]: 2025-10-07 14:53:29.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG nova.compute.manager [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG nova.compute.manager [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.015 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.015 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:53:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:30.038 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:53:30 compute-0 nova_compute[259550]: 2025-10-07 14:53:30.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:30 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:30.040 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:53:30 compute-0 ceph-mon[74295]: pgmap v2737: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 07 14:53:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 379 KiB/s wr, 76 op/s
Oct 07 14:53:31 compute-0 nova_compute[259550]: 2025-10-07 14:53:31.463 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:53:31 compute-0 nova_compute[259550]: 2025-10-07 14:53:31.464 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:53:31 compute-0 nova_compute[259550]: 2025-10-07 14:53:31.645 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:53:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:53:32.042 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:32 compute-0 ceph-mon[74295]: pgmap v2738: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 379 KiB/s wr, 76 op/s
Oct 07 14:53:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:53:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:53:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:53:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:53:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:53:33 compute-0 nova_compute[259550]: 2025-10-07 14:53:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:33 compute-0 nova_compute[259550]: 2025-10-07 14:53:33.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:53:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:53:34 compute-0 ceph-mon[74295]: pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 07 14:53:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct 07 14:53:35 compute-0 podman[418736]: 2025-10-07 14:53:35.071881084 +0000 UTC m=+0.061927972 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 14:53:35 compute-0 podman[418737]: 2025-10-07 14:53:35.117556458 +0000 UTC m=+0.102263110 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:53:35 compute-0 ceph-mon[74295]: pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct 07 14:53:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 92 op/s
Oct 07 14:53:37 compute-0 ceph-mon[74295]: pgmap v2741: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 92 op/s
Oct 07 14:53:38 compute-0 nova_compute[259550]: 2025-10-07 14:53:38.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:38 compute-0 nova_compute[259550]: 2025-10-07 14:53:38.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 96 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 931 KiB/s wr, 111 op/s
Oct 07 14:53:40 compute-0 ceph-mon[74295]: pgmap v2742: 305 pgs: 305 active+clean; 96 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 931 KiB/s wr, 111 op/s
Oct 07 14:53:40 compute-0 ovn_controller[151684]: 2025-10-07T14:53:40Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:b1:b3 10.100.0.6
Oct 07 14:53:40 compute-0 ovn_controller[151684]: 2025-10-07T14:53:40Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:b1:b3 10.100.0.6
Oct 07 14:53:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 483 KiB/s rd, 1.6 MiB/s wr, 71 op/s
Oct 07 14:53:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:42 compute-0 ceph-mon[74295]: pgmap v2743: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 483 KiB/s rd, 1.6 MiB/s wr, 71 op/s
Oct 07 14:53:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:53:43 compute-0 nova_compute[259550]: 2025-10-07 14:53:43.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:43 compute-0 nova_compute[259550]: 2025-10-07 14:53:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:44 compute-0 ceph-mon[74295]: pgmap v2744: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:53:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 117 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 07 14:53:46 compute-0 ceph-mon[74295]: pgmap v2745: 305 pgs: 305 active+clean; 117 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 07 14:53:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 121 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 07 14:53:47 compute-0 ceph-mon[74295]: pgmap v2746: 305 pgs: 305 active+clean; 121 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 07 14:53:48 compute-0 nova_compute[259550]: 2025-10-07 14:53:48.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:48 compute-0 nova_compute[259550]: 2025-10-07 14:53:48.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:53:49 compute-0 ceph-mon[74295]: pgmap v2747: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.512 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.513 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.533 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.657 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.657 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.672 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.672 2 INFO nova.compute.claims [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:53:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Oct 07 14:53:50 compute-0 nova_compute[259550]: 2025-10-07 14:53:50.785 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:53:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043452326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.246 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.255 2 DEBUG nova.compute.provider_tree [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.297 2 DEBUG nova.scheduler.client.report [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.444 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.444 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.643 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.643 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.693 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.737 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:53:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.862 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.863 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.864 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating image(s)
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.889 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.926 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.948 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:51 compute-0 nova_compute[259550]: 2025-10-07 14:53:51.951 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.033 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.034 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.035 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.035 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.061 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:52 compute-0 ceph-mon[74295]: pgmap v2748: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Oct 07 14:53:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2043452326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.068 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d0450f4-efd0-4508-8aca-28b5d858c397_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:52 compute-0 podman[418861]: 2025-10-07 14:53:52.084607246 +0000 UTC m=+0.067031203 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 07 14:53:52 compute-0 podman[418862]: 2025-10-07 14:53:52.113120803 +0000 UTC m=+0.093813659 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 07 14:53:52 compute-0 nova_compute[259550]: 2025-10-07 14:53:52.210 2 DEBUG nova.policy [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:53:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 599 KiB/s wr, 68 op/s
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.125 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d0450f4-efd0-4508-8aca-28b5d858c397_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.184 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.299 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Successfully created port: ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.864 2 DEBUG nova.objects.instance [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.945 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.946 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Ensure instance console log exists: /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:53 compute-0 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:54 compute-0 ceph-mon[74295]: pgmap v2749: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 599 KiB/s wr, 68 op/s
Oct 07 14:53:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 142 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 07 14:53:56 compute-0 ceph-mon[74295]: pgmap v2750: 305 pgs: 305 active+clean; 142 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.408 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Successfully updated port: ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:53:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:53:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.823 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.824 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.824 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.853 2 DEBUG nova.compute.manager [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-changed-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.854 2 DEBUG nova.compute.manager [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Refreshing instance network info cache due to event network-changed-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:53:56 compute-0 nova_compute[259550]: 2025-10-07 14:53:56.854 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:53:57 compute-0 nova_compute[259550]: 2025-10-07 14:53:57.182 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.187 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:53:58 compute-0 ceph-mon[74295]: pgmap v2751: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.455 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance network_info: |[{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Refreshing network info cache for port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.460 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start _get_guest_xml network_info=[{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.466 2 WARNING nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.474 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.476 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.481 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.481 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.482 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.483 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.483 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.484 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.484 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.485 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.485 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.486 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.486 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.487 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.487 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.488 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.493 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:53:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:53:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310903720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:58 compute-0 nova_compute[259550]: 2025-10-07 14:53:58.979 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.004 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.008 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:53:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2310903720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:53:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387551868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.455 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.457 2 DEBUG nova.virt.libvirt.vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:51Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.457 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.458 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.459 2 DEBUG nova.objects.instance [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.564 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <uuid>0d0450f4-efd0-4508-8aca-28b5d858c397</uuid>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <name>instance-00000091</name>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577</nova:name>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:53:58</nova:creationTime>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <nova:port uuid="ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57">
Oct 07 14:53:59 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <system>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="serial">0d0450f4-efd0-4508-8aca-28b5d858c397</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="uuid">0d0450f4-efd0-4508-8aca-28b5d858c397</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </system>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <os>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </os>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <features>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </features>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0d0450f4-efd0-4508-8aca-28b5d858c397_disk">
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config">
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </source>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:53:59 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:8a:bd:28"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <target dev="tapba7ffd6d-6a"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/console.log" append="off"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <video>
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </video>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:53:59 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:53:59 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:53:59 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:53:59 compute-0 nova_compute[259550]: </domain>
Oct 07 14:53:59 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.566 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Preparing to wait for external event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.568 2 DEBUG nova.virt.libvirt.vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:51Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.568 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.569 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.570 2 DEBUG os_vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba7ffd6d-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba7ffd6d-6a, col_values=(('external_ids', {'iface-id': 'ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:bd:28', 'vm-uuid': '0d0450f4-efd0-4508-8aca-28b5d858c397'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:53:59 compute-0 NetworkManager[44949]: <info>  [1759848839.5805] manager: (tapba7ffd6d-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.586 2 INFO os_vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a')
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.948 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updated VIF entry in instance network info cache for port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:53:59 compute-0 nova_compute[259550]: 2025-10-07 14:53:59.949 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.092 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.118 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.273 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.273 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.274 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:8a:bd:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.275 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Using config drive
Oct 07 14:54:00 compute-0 nova_compute[259550]: 2025-10-07 14:54:00.305 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:54:00 compute-0 ceph-mon[74295]: pgmap v2752: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:54:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2387551868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:54:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:54:01 compute-0 sudo[419092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:01 compute-0 sudo[419092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:01 compute-0 sudo[419092]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:01 compute-0 sudo[419117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:54:01 compute-0 sudo[419117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:01 compute-0 sudo[419117]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.311 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating config drive at /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config
Oct 07 14:54:01 compute-0 sudo[419142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:01 compute-0 sudo[419142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.316 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjlzgu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:01 compute-0 sudo[419142]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:01 compute-0 sudo[419167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:54:01 compute-0 sudo[419167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.458 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjlzgu0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.490 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.496 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.803 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.805 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deleting local config drive /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config because it was imported into RBD.
Oct 07 14:54:01 compute-0 NetworkManager[44949]: <info>  [1759848841.8627] manager: (tapba7ffd6d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Oct 07 14:54:01 compute-0 kernel: tapba7ffd6d-6a: entered promiscuous mode
Oct 07 14:54:01 compute-0 sudo[419167]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:01 compute-0 ovn_controller[151684]: 2025-10-07T14:54:01Z|01584|binding|INFO|Claiming lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for this chassis.
Oct 07 14:54:01 compute-0 ovn_controller[151684]: 2025-10-07T14:54:01Z|01585|binding|INFO|ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57: Claiming fa:16:3e:8a:bd:28 10.100.0.8
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:01 compute-0 ovn_controller[151684]: 2025-10-07T14:54:01Z|01586|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 ovn-installed in OVS
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:01 compute-0 nova_compute[259550]: 2025-10-07 14:54:01.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:01 compute-0 systemd-machined[214580]: New machine qemu-179-instance-00000091.
Oct 07 14:54:01 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Oct 07 14:54:01 compute-0 systemd-udevd[419278]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:54:01 compute-0 NetworkManager[44949]: <info>  [1759848841.9307] device (tapba7ffd6d-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:54:01 compute-0 NetworkManager[44949]: <info>  [1759848841.9315] device (tapba7ffd6d-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:01 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cb020fe9-7f22-4ebb-812f-141189117c6b does not exist
Oct 07 14:54:01 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 6d1f556c-d670-4b7d-872f-f9eb687b22be does not exist
Oct 07 14:54:01 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 6f33e89c-6fac-43c5-adab-dde97e6dadfe does not exist
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:54:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:54:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:54:02 compute-0 sudo[419286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:02 compute-0 sudo[419286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:02 compute-0 sudo[419286]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:02 compute-0 sudo[419312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:54:02 compute-0 sudo[419312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:02 compute-0 sudo[419312]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:02 compute-0 sudo[419337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:02 compute-0 sudo[419337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:02 compute-0 sudo[419337]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:02 compute-0 ovn_controller[151684]: 2025-10-07T14:54:02Z|01587|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 up in Southbound
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.191 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:bd:28 10.100.0.8'], port_security=['fa:16:3e:8a:bd:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d0450f4-efd0-4508-8aca-28b5d858c397', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.193 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 bound to our chassis
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.194 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d4a0ce-7376-48cd-9936-c52b5ec4f91a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 sudo[419362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:54:02 compute-0 sudo[419362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.242 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31fb85e9-526b-4282-a811-fa8358404e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.245 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5af994a5-d43a-47ec-bf87-3bf918c6f29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.273 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5631bc03-d1c2-4c8b-a156-8f97afb0cae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07354c73-177f-4b3e-bf84-222408f7a0c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 44639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419392, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.315 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43e7c066-a140-40af-9bf5-346bfff99733]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938849, 'tstamp': 938849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419393, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938852, 'tstamp': 938852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419393, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.317 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:02 compute-0 nova_compute[259550]: 2025-10-07 14:54:02.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.320 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:02 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:54:02 compute-0 ceph-mon[74295]: pgmap v2753: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:54:02 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.525172296 +0000 UTC m=+0.021847870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.676161363 +0000 UTC m=+0.172836927 container create d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 07 14:54:02 compute-0 systemd[1]: Started libpod-conmon-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope.
Oct 07 14:54:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.818841003 +0000 UTC m=+0.315516577 container init d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.826266959 +0000 UTC m=+0.322942513 container start d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:54:02 compute-0 elegant_mayer[419487]: 167 167
Oct 07 14:54:02 compute-0 systemd[1]: libpod-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope: Deactivated successfully.
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.842952915 +0000 UTC m=+0.339628489 container attach d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.843324095 +0000 UTC m=+0.339999649 container died d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-85db81374a0c3de4464ff227b48d241810a9bf53fba1032719a273f2cfa98ed9-merged.mount: Deactivated successfully.
Oct 07 14:54:02 compute-0 podman[419435]: 2025-10-07 14:54:02.919026053 +0000 UTC m=+0.415701607 container remove d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:54:02 compute-0 systemd[1]: libpod-conmon-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope: Deactivated successfully.
Oct 07 14:54:02 compute-0 nova_compute[259550]: 2025-10-07 14:54:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:03 compute-0 podman[419517]: 2025-10-07 14:54:03.116716009 +0000 UTC m=+0.055499639 container create 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:54:03 compute-0 systemd[1]: Started libpod-conmon-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope.
Oct 07 14:54:03 compute-0 podman[419517]: 2025-10-07 14:54:03.088452318 +0000 UTC m=+0.027235948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:03 compute-0 podman[419517]: 2025-10-07 14:54:03.286056122 +0000 UTC m=+0.224839782 container init 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 14:54:03 compute-0 podman[419517]: 2025-10-07 14:54:03.296102951 +0000 UTC m=+0.234886581 container start 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:54:03 compute-0 podman[419517]: 2025-10-07 14:54:03.326050032 +0000 UTC m=+0.264833692 container attach 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.353 2 DEBUG nova.compute.manager [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.355 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.355 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.356 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.356 2 DEBUG nova.compute.manager [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Processing event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.378 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3777244, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Started (Lifecycle Event)
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.380 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.390 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.394 2 INFO nova.virt.libvirt.driver [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance spawned successfully.
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.394 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.412 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.417 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.631 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.632 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3778427, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Paused (Lifecycle Event)
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.640 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.642 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.642 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.643 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.643 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.644 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.943 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.945 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3844037, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:54:03 compute-0 nova_compute[259550]: 2025-10-07 14:54:03.946 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Resumed (Lifecycle Event)
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.094 2 INFO nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 12.23 seconds to spawn the instance on the hypervisor.
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.095 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.103 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.110 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:54:04 compute-0 blissful_lumiere[419533]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:54:04 compute-0 blissful_lumiere[419533]: --> relative data size: 1.0
Oct 07 14:54:04 compute-0 blissful_lumiere[419533]: --> All data devices are unavailable
Oct 07 14:54:04 compute-0 systemd[1]: libpod-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Deactivated successfully.
Oct 07 14:54:04 compute-0 podman[419517]: 2025-10-07 14:54:04.491666733 +0000 UTC m=+1.430450373 container died 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 14:54:04 compute-0 systemd[1]: libpod-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Consumed 1.111s CPU time.
Oct 07 14:54:04 compute-0 ceph-mon[74295]: pgmap v2754: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.537 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:54:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5-merged.mount: Deactivated successfully.
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:04 compute-0 podman[419517]: 2025-10-07 14:54:04.591128786 +0000 UTC m=+1.529912416 container remove 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 14:54:04 compute-0 systemd[1]: libpod-conmon-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Deactivated successfully.
Oct 07 14:54:04 compute-0 sudo[419362]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:04 compute-0 sudo[419572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:04 compute-0 sudo[419572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:04 compute-0 sudo[419572]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:04 compute-0 sudo[419597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:54:04 compute-0 sudo[419597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:04 compute-0 sudo[419597]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 07 14:54:04 compute-0 sudo[419622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:04 compute-0 sudo[419622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:04 compute-0 sudo[419622]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:04 compute-0 sudo[419647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:54:04 compute-0 sudo[419647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.890 2 INFO nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 14.31 seconds to build instance.
Oct 07 14:54:04 compute-0 nova_compute[259550]: 2025-10-07 14:54:04.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.135 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.226056169 +0000 UTC m=+0.053593524 container create e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:54:05 compute-0 systemd[1]: Started libpod-conmon-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope.
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.195818931 +0000 UTC m=+0.023356296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.333207375 +0000 UTC m=+0.160744760 container init e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.341666176 +0000 UTC m=+0.169203531 container start e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:54:05 compute-0 modest_ellis[419748]: 167 167
Oct 07 14:54:05 compute-0 systemd[1]: libpod-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope: Deactivated successfully.
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.365563494 +0000 UTC m=+0.193100849 container attach e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.366596968 +0000 UTC m=+0.194134323 container died e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:54:05 compute-0 podman[419730]: 2025-10-07 14:54:05.367211683 +0000 UTC m=+0.101790830 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 07 14:54:05 compute-0 podman[419727]: 2025-10-07 14:54:05.390130148 +0000 UTC m=+0.124845088 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:54:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-86474028c34c60b1a022316c5d7a5d4e3550e08a00a079251a43017268332cd2-merged.mount: Deactivated successfully.
Oct 07 14:54:05 compute-0 podman[419713]: 2025-10-07 14:54:05.453997624 +0000 UTC m=+0.281534989 container remove e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:54:05 compute-0 systemd[1]: libpod-conmon-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope: Deactivated successfully.
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.465 2 DEBUG nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.468 2 DEBUG nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:54:05 compute-0 nova_compute[259550]: 2025-10-07 14:54:05.468 2 WARNING nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received unexpected event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with vm_state active and task_state None.
Oct 07 14:54:05 compute-0 podman[419795]: 2025-10-07 14:54:05.698910063 +0000 UTC m=+0.080183646 container create 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:54:05 compute-0 podman[419795]: 2025-10-07 14:54:05.643171448 +0000 UTC m=+0.024445051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:05 compute-0 systemd[1]: Started libpod-conmon-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope.
Oct 07 14:54:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:05 compute-0 podman[419795]: 2025-10-07 14:54:05.878078219 +0000 UTC m=+0.259351822 container init 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 14:54:05 compute-0 podman[419795]: 2025-10-07 14:54:05.884819509 +0000 UTC m=+0.266093092 container start 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:54:05 compute-0 podman[419795]: 2025-10-07 14:54:05.924160114 +0000 UTC m=+0.305433697 container attach 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:54:06 compute-0 ceph-mon[74295]: pgmap v2755: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]: {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     "0": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "devices": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "/dev/loop3"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             ],
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_name": "ceph_lv0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_size": "21470642176",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "name": "ceph_lv0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "tags": {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_name": "ceph",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.crush_device_class": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.encrypted": "0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_id": "0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.vdo": "0"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             },
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "vg_name": "ceph_vg0"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         }
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     ],
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     "1": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "devices": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "/dev/loop4"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             ],
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_name": "ceph_lv1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_size": "21470642176",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "name": "ceph_lv1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "tags": {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_name": "ceph",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.crush_device_class": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.encrypted": "0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_id": "1",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.vdo": "0"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             },
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "vg_name": "ceph_vg1"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         }
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     ],
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     "2": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "devices": [
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "/dev/loop5"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             ],
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_name": "ceph_lv2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_size": "21470642176",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "name": "ceph_lv2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "tags": {
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.cluster_name": "ceph",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.crush_device_class": "",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.encrypted": "0",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osd_id": "2",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:                 "ceph.vdo": "0"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             },
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "type": "block",
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:             "vg_name": "ceph_vg2"
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:         }
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]:     ]
Oct 07 14:54:06 compute-0 admiring_bhaskara[419811]: }
Oct 07 14:54:06 compute-0 systemd[1]: libpod-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope: Deactivated successfully.
Oct 07 14:54:06 compute-0 podman[419795]: 2025-10-07 14:54:06.71912481 +0000 UTC m=+1.100398403 container died 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41-merged.mount: Deactivated successfully.
Oct 07 14:54:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 556 KiB/s wr, 45 op/s
Oct 07 14:54:06 compute-0 podman[419795]: 2025-10-07 14:54:06.801394277 +0000 UTC m=+1.182667860 container remove 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:54:06 compute-0 systemd[1]: libpod-conmon-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope: Deactivated successfully.
Oct 07 14:54:06 compute-0 sudo[419647]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:06 compute-0 sudo[419832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:06 compute-0 sudo[419832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:06 compute-0 sudo[419832]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:06 compute-0 sudo[419857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:54:06 compute-0 sudo[419857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:06 compute-0 sudo[419857]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:06 compute-0 nova_compute[259550]: 2025-10-07 14:54:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:07 compute-0 sudo[419882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:07 compute-0 sudo[419882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:07 compute-0 sudo[419882]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:07 compute-0 sudo[419907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:54:07 compute-0 sudo[419907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.441762655 +0000 UTC m=+0.075125760 container create bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:54:07 compute-0 systemd[1]: Started libpod-conmon-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope.
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.387853248 +0000 UTC m=+0.021216373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.54133829 +0000 UTC m=+0.174701415 container init bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.549666221 +0000 UTC m=+0.183029326 container start bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 14:54:07 compute-0 nervous_wu[419988]: 167 167
Oct 07 14:54:07 compute-0 systemd[1]: libpod-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope: Deactivated successfully.
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.5587132 +0000 UTC m=+0.192076325 container attach bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.559378008 +0000 UTC m=+0.192741143 container died bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18319f23850e9963f6d70c560db6198039b407c617d3fd315f8fb6293d8f698-merged.mount: Deactivated successfully.
Oct 07 14:54:07 compute-0 podman[419971]: 2025-10-07 14:54:07.670854377 +0000 UTC m=+0.304217482 container remove bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:54:07 compute-0 systemd[1]: libpod-conmon-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope: Deactivated successfully.
Oct 07 14:54:07 compute-0 podman[420013]: 2025-10-07 14:54:07.895609746 +0000 UTC m=+0.082332180 container create d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:54:07 compute-0 podman[420013]: 2025-10-07 14:54:07.835802253 +0000 UTC m=+0.022524717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:54:07 compute-0 systemd[1]: Started libpod-conmon-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope.
Oct 07 14:54:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:54:07 compute-0 nova_compute[259550]: 2025-10-07 14:54:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:08 compute-0 podman[420013]: 2025-10-07 14:54:08.012685275 +0000 UTC m=+0.199407729 container init d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 14:54:08 compute-0 podman[420013]: 2025-10-07 14:54:08.021016915 +0000 UTC m=+0.207739349 container start d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:54:08 compute-0 podman[420013]: 2025-10-07 14:54:08.029870299 +0000 UTC m=+0.216592743 container attach d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:54:08 compute-0 nova_compute[259550]: 2025-10-07 14:54:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:08 compute-0 ceph-mon[74295]: pgmap v2756: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 556 KiB/s wr, 45 op/s
Oct 07 14:54:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 07 14:54:09 compute-0 adoring_mayer[420030]: {
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_id": 2,
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "type": "bluestore"
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     },
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_id": 1,
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "type": "bluestore"
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     },
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_id": 0,
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:         "type": "bluestore"
Oct 07 14:54:09 compute-0 adoring_mayer[420030]:     }
Oct 07 14:54:09 compute-0 adoring_mayer[420030]: }
Oct 07 14:54:09 compute-0 systemd[1]: libpod-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Deactivated successfully.
Oct 07 14:54:09 compute-0 systemd[1]: libpod-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Consumed 1.011s CPU time.
Oct 07 14:54:09 compute-0 podman[420013]: 2025-10-07 14:54:09.040016632 +0000 UTC m=+1.226739076 container died d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 14:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e-merged.mount: Deactivated successfully.
Oct 07 14:54:09 compute-0 podman[420013]: 2025-10-07 14:54:09.168288197 +0000 UTC m=+1.355010631 container remove d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:54:09 compute-0 systemd[1]: libpod-conmon-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Deactivated successfully.
Oct 07 14:54:09 compute-0 sudo[419907]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:54:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:54:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 2a655268-3c0c-4254-bd7c-bc48764b8e01 does not exist
Oct 07 14:54:09 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 85d0db88-df97-4cde-96c7-02b195fcd014 does not exist
Oct 07 14:54:09 compute-0 sudo[420074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:54:09 compute-0 sudo[420074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:09 compute-0 sudo[420074]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:09 compute-0 sudo[420099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:54:09 compute-0 sudo[420099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:54:09 compute-0 sudo[420099]: pam_unix(sudo:session): session closed for user root
Oct 07 14:54:09 compute-0 nova_compute[259550]: 2025-10-07 14:54:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:09 compute-0 nova_compute[259550]: 2025-10-07 14:54:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:09 compute-0 nova_compute[259550]: 2025-10-07 14:54:09.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:54:10 compute-0 ceph-mon[74295]: pgmap v2757: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 07 14:54:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:54:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:54:10 compute-0 nova_compute[259550]: 2025-10-07 14:54:10.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:12 compute-0 ceph-mon[74295]: pgmap v2758: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:54:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:54:13 compute-0 nova_compute[259550]: 2025-10-07 14:54:13.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:14 compute-0 ceph-mon[74295]: pgmap v2759: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:54:14 compute-0 nova_compute[259550]: 2025-10-07 14:54:14.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 07 14:54:15 compute-0 nova_compute[259550]: 2025-10-07 14:54:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.098 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:16 compute-0 ceph-mon[74295]: pgmap v2760: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 07 14:54:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:54:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612550605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.566 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 169 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 473 KiB/s wr, 76 op/s
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.837 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.839 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:54:16 compute-0 nova_compute[259550]: 2025-10-07 14:54:16.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.042 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3244MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:17 compute-0 sshd-session[420125]: banner exchange: Connection from 177.94.225.168 port 47923: invalid format
Oct 07 14:54:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3612550605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.528 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5360ad21-c174-468a-9be2-d82d672f2911 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 0d0450f4-efd0-4508-8aca-28b5d858c397 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.585 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.656 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.656 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.671 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.691 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:54:17 compute-0 nova_compute[259550]: 2025-10-07 14:54:17.753 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:17 compute-0 ovn_controller[151684]: 2025-10-07T14:54:17Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:bd:28 10.100.0.8
Oct 07 14:54:17 compute-0 ovn_controller[151684]: 2025-10-07T14:54:17Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:bd:28 10.100.0.8
Oct 07 14:54:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:54:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417912071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.215 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.223 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.256 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:54:18 compute-0 ceph-mon[74295]: pgmap v2761: 305 pgs: 305 active+clean; 169 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 473 KiB/s wr, 76 op/s
Oct 07 14:54:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3417912071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.447 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.448 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:18 compute-0 nova_compute[259550]: 2025-10-07 14:54:18.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct 07 14:54:19 compute-0 nova_compute[259550]: 2025-10-07 14:54:19.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:20 compute-0 ceph-mon[74295]: pgmap v2762: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct 07 14:54:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:22 compute-0 ceph-mon[74295]: pgmap v2763: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:54:22
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:54:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:23 compute-0 podman[420171]: 2025-10-07 14:54:23.084521319 +0000 UTC m=+0.068477943 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:54:23 compute-0 podman[420172]: 2025-10-07 14:54:23.084828267 +0000 UTC m=+0.068728609 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:54:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.444 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.484 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.484 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.485 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.822 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:54:23 compute-0 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.304 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.306 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.307 2 INFO nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Terminating instance
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.308 2 DEBUG nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:54:24 compute-0 kernel: tapba7ffd6d-6a (unregistering): left promiscuous mode
Oct 07 14:54:24 compute-0 NetworkManager[44949]: <info>  [1759848864.4356] device (tapba7ffd6d-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 ovn_controller[151684]: 2025-10-07T14:54:24Z|01588|binding|INFO|Releasing lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 from this chassis (sb_readonly=0)
Oct 07 14:54:24 compute-0 ovn_controller[151684]: 2025-10-07T14:54:24Z|01589|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 down in Southbound
Oct 07 14:54:24 compute-0 ovn_controller[151684]: 2025-10-07T14:54:24Z|01590|binding|INFO|Removing iface tapba7ffd6d-6a ovn-installed in OVS
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.453 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:bd:28 10.100.0.8'], port_security=['fa:16:3e:8a:bd:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d0450f4-efd0-4508-8aca-28b5d858c397', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.455 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 unbound from our chassis
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.456 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.478 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee11dfb-557c-4e86-bd3e-750248cb5401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 07 14:54:24 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 14.369s CPU time.
Oct 07 14:54:24 compute-0 ceph-mon[74295]: pgmap v2764: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:24 compute-0 systemd-machined[214580]: Machine qemu-179-instance-00000091 terminated.
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.511 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5ca538-de8b-437a-9418-68c1a29fe952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.515 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2d32a9-52cc-4689-9e5e-a9a1500f3e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.549 2 INFO nova.virt.libvirt.driver [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance destroyed successfully.
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.549 2 DEBUG nova.objects.instance [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.550 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a797fb3-5df8-44f4-b5a6-8257d05d6cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.565 2 DEBUG nova.virt.libvirt.vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:54:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:54:04Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.566 2 DEBUG nova.network.os_vif_util [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.567 2 DEBUG nova.network.os_vif_util [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.567 2 DEBUG os_vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba7ffd6d-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.575 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b790696-f45c-47fb-ad24-a1f5127718c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 44639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420231, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.578 2 INFO os_vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a')
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a10ff121-bf5a-490d-a8de-a9b1746c3ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938849, 'tstamp': 938849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420232, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938852, 'tstamp': 938852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420232, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.593 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.597 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.702 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:54:24 compute-0 nova_compute[259550]: 2025-10-07 14:54:24.702 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:54:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.214 2 INFO nova.virt.libvirt.driver [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deleting instance files /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397_del
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.215 2 INFO nova.virt.libvirt.driver [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deletion of /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397_del complete
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.244 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.359 2 INFO nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 2.05 seconds to destroy the instance on the hypervisor.
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.359 2 DEBUG oslo.service.loopingcall [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.360 2 DEBUG nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.360 2 DEBUG nova.network.neutron [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.362 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.362 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.363 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:54:26 compute-0 ceph-mon[74295]: pgmap v2765: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 07 14:54:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 165 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.826 2 DEBUG nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:54:26 compute-0 nova_compute[259550]: 2025-10-07 14:54:26.826 2 WARNING nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received unexpected event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with vm_state active and task_state deleting.
Oct 07 14:54:27 compute-0 ceph-mon[74295]: pgmap v2766: 305 pgs: 305 active+clean; 165 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 07 14:54:27 compute-0 nova_compute[259550]: 2025-10-07 14:54:27.967 2 DEBUG nova.network.neutron [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.104 2 INFO nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 1.74 seconds to deallocate network for instance.
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.114 2 DEBUG nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-deleted-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.114 2 INFO nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Neutron deleted interface ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57; detaching it from the instance and deleting it from the info cache
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.114 2 DEBUG nova.network.neutron [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.339 2 DEBUG nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Detach interface failed, port_id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57, reason: Instance 0d0450f4-efd0-4508-8aca-28b5d858c397 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.457 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.458 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:28 compute-0 nova_compute[259550]: 2025-10-07 14:54:28.568 2 DEBUG oslo_concurrency.processutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 1.7 MiB/s wr, 83 op/s
Oct 07 14:54:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:54:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272273109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.039 2 DEBUG oslo_concurrency.processutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.046 2 DEBUG nova.compute.provider_tree [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.162 2 DEBUG nova.scheduler.client.report [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.372 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.586 2 INFO nova.scheduler.client.report [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 0d0450f4-efd0-4508-8aca-28b5d858c397
Oct 07 14:54:29 compute-0 nova_compute[259550]: 2025-10-07 14:54:29.738 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:29 compute-0 ceph-mon[74295]: pgmap v2767: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 1.7 MiB/s wr, 83 op/s
Oct 07 14:54:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2272273109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 172 KiB/s wr, 51 op/s
Oct 07 14:54:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:31 compute-0 ceph-mon[74295]: pgmap v2768: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 172 KiB/s wr, 51 op/s
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.418 2 DEBUG nova.compute.manager [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.418 2 DEBUG nova.compute.manager [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.562 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.562 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.563 2 INFO nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Terminating instance
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.564 2 DEBUG nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:54:32 compute-0 kernel: tap78d82e05-f9 (unregistering): left promiscuous mode
Oct 07 14:54:32 compute-0 NetworkManager[44949]: <info>  [1759848872.6310] device (tap78d82e05-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:54:32 compute-0 ovn_controller[151684]: 2025-10-07T14:54:32Z|01591|binding|INFO|Releasing lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 from this chassis (sb_readonly=0)
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:32 compute-0 ovn_controller[151684]: 2025-10-07T14:54:32Z|01592|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 down in Southbound
Oct 07 14:54:32 compute-0 ovn_controller[151684]: 2025-10-07T14:54:32Z|01593|binding|INFO|Removing iface tap78d82e05-f9 ovn-installed in OVS
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:32 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 07 14:54:32 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 16.091s CPU time.
Oct 07 14:54:32 compute-0 systemd-machined[214580]: Machine qemu-178-instance-00000090 terminated.
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.757 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:b1:b3 10.100.0.6'], port_security=['fa:16:3e:7f:b1:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5360ad21-c174-468a-9be2-d82d672f2911', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d1e85a4-dffc-45c3-a3d8-ec6178189662 fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78d82e05-f97f-4cd4-aed0-b65c80e18ec5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.758 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 unbound from our chassis
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.759 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8b4125-2048-4521-8ea5-f987883e1397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.761 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 namespace which is not needed anymore
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.807 2 INFO nova.virt.libvirt.driver [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance destroyed successfully.
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.808 2 DEBUG nova.objects.instance [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:54:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.884 2 DEBUG nova.virt.libvirt.vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:53:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:53:26Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.885 2 DEBUG nova.network.os_vif_util [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.886 2 DEBUG nova.network.os_vif_util [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.887 2 DEBUG os_vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78d82e05-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:32 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.909 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:54:32 compute-0 nova_compute[259550]: 2025-10-07 14:54:32.942 2 INFO os_vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9')
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : haproxy version is 2.8.14-c23fe91
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : path to executable is /usr/sbin/haproxy
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [WARNING]  (418723) : Exiting Master process...
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [WARNING]  (418723) : Exiting Master process...
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [ALERT]    (418723) : Current worker (418725) exited with code 143 (Terminated)
Oct 07 14:54:32 compute-0 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [WARNING]  (418723) : All workers exited. Exiting... (0)
Oct 07 14:54:32 compute-0 systemd[1]: libpod-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope: Deactivated successfully.
Oct 07 14:54:32 compute-0 podman[420310]: 2025-10-07 14:54:32.98668017 +0000 UTC m=+0.127381212 container died b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047-userdata-shm.mount: Deactivated successfully.
Oct 07 14:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced5dd1aff1b8cd3c8452ea0fb4c0158258e892a5ae20c9c27856c33d2c1bdf8-merged.mount: Deactivated successfully.
Oct 07 14:54:33 compute-0 podman[420310]: 2025-10-07 14:54:33.210577055 +0000 UTC m=+0.351278087 container cleanup b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:54:33 compute-0 systemd[1]: libpod-conmon-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope: Deactivated successfully.
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.248 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.249 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.249 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:54:33 compute-0 podman[420355]: 2025-10-07 14:54:33.344142021 +0000 UTC m=+0.109096239 container remove b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1a52b9-c873-44f6-9014-2a11d0a46c96]: (4, ('Tue Oct  7 02:54:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 (b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047)\nb699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047\nTue Oct  7 02:54:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 (b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047)\nb699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.355 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff48cae1-045e-4e39-8922-565b1897bfb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.357 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:33 compute-0 kernel: tapb7b9d978-a0: left promiscuous mode
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.380 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5986fb-4a51-4c2b-8e61-97f19370929f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.408 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[539ec8b5-5b67-4ba8-aca2-67808e2ab5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[48672143-568c-4617-a358-9959d4a28cca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9427f58-47c6-4549-a2dd-67052d883f14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938827, 'reachable_time': 30503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420371, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 systemd[1]: run-netns-ovnmeta\x2db7b9d978\x2da319\x2d4f4b\x2db8e1\x2d4891fcd559d0.mount: Deactivated successfully.
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.429 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5d622e86-c635-46fd-aaf5-0b902689239b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:54:33 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.431 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:54:33 compute-0 nova_compute[259550]: 2025-10-07 14:54:33.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:33 compute-0 ceph-mon[74295]: pgmap v2769: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct 07 14:54:34 compute-0 nova_compute[259550]: 2025-10-07 14:54:34.094 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:54:34 compute-0 nova_compute[259550]: 2025-10-07 14:54:34.094 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:34 compute-0 nova_compute[259550]: 2025-10-07 14:54:34.285 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:54:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 113 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.111 2 INFO nova.virt.libvirt.driver [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deleting instance files /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911_del
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.111 2 INFO nova.virt.libvirt.driver [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deletion of /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911_del complete
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.240 2 INFO nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 2.68 seconds to destroy the instance on the hypervisor.
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.241 2 DEBUG oslo.service.loopingcall [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.243 2 DEBUG nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.243 2 DEBUG nova.network.neutron [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:54:35 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:54:35.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.913 2 DEBUG nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.915 2 DEBUG nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:54:35 compute-0 nova_compute[259550]: 2025-10-07 14:54:35.915 2 WARNING nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received unexpected event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with vm_state active and task_state deleting.
Oct 07 14:54:36 compute-0 ceph-mon[74295]: pgmap v2770: 305 pgs: 305 active+clean; 113 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Oct 07 14:54:36 compute-0 podman[420375]: 2025-10-07 14:54:36.067045672 +0000 UTC m=+0.058596922 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 14:54:36 compute-0 podman[420376]: 2025-10-07 14:54:36.099983134 +0000 UTC m=+0.091478441 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.330 2 DEBUG nova.network.neutron [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.369 2 DEBUG nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-deleted-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.370 2 INFO nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Neutron deleted interface 78d82e05-f97f-4cd4-aed0-b65c80e18ec5; detaching it from the instance and deleting it from the info cache
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.370 2 DEBUG nova.network.neutron [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.392 2 INFO nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 1.15 seconds to deallocate network for instance.
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.461 2 DEBUG nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Detach interface failed, port_id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5, reason: Instance 5360ad21-c174-468a-9be2-d82d672f2911 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.697 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.698 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:54:36 compute-0 nova_compute[259550]: 2025-10-07 14:54:36.754 2 DEBUG oslo_concurrency.processutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:54:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 49 op/s
Oct 07 14:54:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:54:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549538094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.224 2 DEBUG oslo_concurrency.processutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.232 2 DEBUG nova.compute.provider_tree [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.283 2 DEBUG nova.scheduler.client.report [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.366 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.434 2 INFO nova.scheduler.client.report [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 5360ad21-c174-468a-9be2-d82d672f2911
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.806 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:54:37 compute-0 nova_compute[259550]: 2025-10-07 14:54:37.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:38 compute-0 ceph-mon[74295]: pgmap v2771: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 49 op/s
Oct 07 14:54:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3549538094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:54:38 compute-0 nova_compute[259550]: 2025-10-07 14:54:38.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 11 KiB/s wr, 51 op/s
Oct 07 14:54:39 compute-0 nova_compute[259550]: 2025-10-07 14:54:39.547 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848864.5441399, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:54:39 compute-0 nova_compute[259550]: 2025-10-07 14:54:39.548 2 INFO nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Stopped (Lifecycle Event)
Oct 07 14:54:39 compute-0 nova_compute[259550]: 2025-10-07 14:54:39.581 2 DEBUG nova.compute.manager [None req-b81f6eb5-4aa7-45e3-ba4d-c1aaa61e0060 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:40 compute-0 ceph-mon[74295]: pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 11 KiB/s wr, 51 op/s
Oct 07 14:54:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 07 14:54:41 compute-0 nova_compute[259550]: 2025-10-07 14:54:41.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:41 compute-0 nova_compute[259550]: 2025-10-07 14:54:41.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:42 compute-0 ceph-mon[74295]: pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 07 14:54:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:54:42 compute-0 nova_compute[259550]: 2025-10-07 14:54:42.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:43 compute-0 nova_compute[259550]: 2025-10-07 14:54:43.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:44 compute-0 ceph-mon[74295]: pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:54:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:54:46 compute-0 ceph-mon[74295]: pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:54:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 17 op/s
Oct 07 14:54:47 compute-0 nova_compute[259550]: 2025-10-07 14:54:47.807 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848872.805726, 5360ad21-c174-468a-9be2-d82d672f2911 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:54:47 compute-0 nova_compute[259550]: 2025-10-07 14:54:47.808 2 INFO nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Stopped (Lifecycle Event)
Oct 07 14:54:47 compute-0 nova_compute[259550]: 2025-10-07 14:54:47.925 2 DEBUG nova.compute.manager [None req-6790ce36-407e-4acc-810d-3cbb694b78c9 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:54:47 compute-0 nova_compute[259550]: 2025-10-07 14:54:47.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:48 compute-0 ceph-mon[74295]: pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 17 op/s
Oct 07 14:54:48 compute-0 nova_compute[259550]: 2025-10-07 14:54:48.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 938 B/s wr, 7 op/s
Oct 07 14:54:50 compute-0 ceph-mon[74295]: pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 938 B/s wr, 7 op/s
Oct 07 14:54:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:51 compute-0 ceph-mon[74295]: pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:54:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:52 compute-0 nova_compute[259550]: 2025-10-07 14:54:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:53 compute-0 nova_compute[259550]: 2025-10-07 14:54:53.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:53 compute-0 ceph-mon[74295]: pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:54 compute-0 podman[420441]: 2025-10-07 14:54:54.071099097 +0000 UTC m=+0.061009886 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:54:54 compute-0 podman[420440]: 2025-10-07 14:54:54.075118314 +0000 UTC m=+0.065627948 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:54:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:56 compute-0 ceph-mon[74295]: pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:54:57 compute-0 nova_compute[259550]: 2025-10-07 14:54:57.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:58 compute-0 ceph-mon[74295]: pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:54:58 compute-0 nova_compute[259550]: 2025-10-07 14:54:58.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:54:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:00 compute-0 ceph-mon[74295]: pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:01 compute-0 nova_compute[259550]: 2025-10-07 14:55:01.921 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:01 compute-0 nova_compute[259550]: 2025-10-07 14:55:01.921 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.064 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:55:02 compute-0 ceph-mon[74295]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.200 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.201 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.211 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.212 2 INFO nova.compute.claims [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.501 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:55:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902939224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.975 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:02 compute-0 nova_compute[259550]: 2025-10-07 14:55:02.984 2 DEBUG nova.compute.provider_tree [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.011 2 DEBUG nova.scheduler.client.report [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:55:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2902939224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.174 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.175 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.302 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.302 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.326 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.344 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.436 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.437 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.437 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating image(s)
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.462 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.492 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.516 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.520 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.560 2 DEBUG nova.policy [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.596 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.597 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.598 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.598 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.621 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:03 compute-0 nova_compute[259550]: 2025-10-07 14:55:03.625 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:04 compute-0 ceph-mon[74295]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:04 compute-0 nova_compute[259550]: 2025-10-07 14:55:04.495 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Successfully created port: 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:55:04 compute-0 nova_compute[259550]: 2025-10-07 14:55:04.579 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:04 compute-0 nova_compute[259550]: 2025-10-07 14:55:04.649 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:55:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.121 2 DEBUG nova.objects.instance [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.274 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.275 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Ensure instance console log exists: /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.276 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.277 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.277 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.589 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Successfully updated port: 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.735 2 DEBUG nova.compute.manager [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.735 2 DEBUG nova.compute.manager [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.736 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:55:05 compute-0 nova_compute[259550]: 2025-10-07 14:55:05.909 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:55:06 compute-0 ceph-mon[74295]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:55:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 68 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.820 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.919 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance network_info: |[{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.923 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start _get_guest_xml network_info=[{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.927 2 WARNING nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.932 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.933 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.936 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.937 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.939 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.944 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:06 compute-0 nova_compute[259550]: 2025-10-07 14:55:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:07 compute-0 podman[420668]: 2025-10-07 14:55:07.06985089 +0000 UTC m=+0.054117944 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:55:07 compute-0 podman[420669]: 2025-10-07 14:55:07.106951441 +0000 UTC m=+0.092651033 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 14:55:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:55:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2642530535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.426 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.451 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.455 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2642530535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:55:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:55:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521135202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.901 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.904 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:55:03Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.904 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.905 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.906 2 DEBUG nova.objects.instance [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.952 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <uuid>b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</uuid>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <name>instance-00000092</name>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502</nova:name>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:55:06</nova:creationTime>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <nova:port uuid="0f10619b-f7b9-4cfb-a093-06e296ce6a63">
Oct 07 14:55:07 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <system>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="serial">b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="uuid">b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </system>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <os>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </os>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <features>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </features>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk">
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </source>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config">
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </source>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:55:07 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:e0:c5:45"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <target dev="tap0f10619b-f7"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/console.log" append="off"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <video>
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </video>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:55:07 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:55:07 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:55:07 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:55:07 compute-0 nova_compute[259550]: </domain>
Oct 07 14:55:07 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Preparing to wait for external event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.955 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:55:03Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.955 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.956 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.956 2 DEBUG os_vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f10619b-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f10619b-f7, col_values=(('external_ids', {'iface-id': '0f10619b-f7b9-4cfb-a093-06e296ce6a63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:c5:45', 'vm-uuid': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:07 compute-0 NetworkManager[44949]: <info>  [1759848907.9659] manager: (tap0f10619b-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:07 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.973 2 INFO os_vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7')
Oct 07 14:55:07 compute-0 nova_compute[259550]: 2025-10-07 14:55:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:07 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.062 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.063 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.147 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:e0:c5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.153 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Using config drive
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.176 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:08 compute-0 ceph-mon[74295]: pgmap v2786: 305 pgs: 305 active+clean; 68 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct 07 14:55:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1521135202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:55:08 compute-0 nova_compute[259550]: 2025-10-07 14:55:08.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:09 compute-0 sudo[420794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:09 compute-0 sudo[420794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:09 compute-0 sudo[420794]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:09 compute-0 sudo[420819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:55:09 compute-0 sudo[420819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:09 compute-0 sudo[420819]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:09 compute-0 sudo[420844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:09 compute-0 sudo[420844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:09 compute-0 sudo[420844]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:09 compute-0 sudo[420869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 14:55:09 compute-0 sudo[420869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:09 compute-0 sudo[420869]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:55:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:55:09 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:09 compute-0 sudo[420913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:09 compute-0 sudo[420913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:09 compute-0 sudo[420913]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 sudo[420938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:55:10 compute-0 sudo[420938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:10 compute-0 sudo[420938]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 sudo[420963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:10 compute-0 sudo[420963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:10 compute-0 sudo[420963]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 sudo[420988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:55:10 compute-0 sudo[420988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.212 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating config drive at /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.219 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hbmpblw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.380 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hbmpblw" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.403 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.406 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:10 compute-0 ceph-mon[74295]: pgmap v2787: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:10 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.583 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.585 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deleting local config drive /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config because it was imported into RBD.
Oct 07 14:55:10 compute-0 sudo[420988]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:55:10 compute-0 kernel: tap0f10619b-f7: entered promiscuous mode
Oct 07 14:55:10 compute-0 NetworkManager[44949]: <info>  [1759848910.6787] manager: (tap0f10619b-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Oct 07 14:55:10 compute-0 ovn_controller[151684]: 2025-10-07T14:55:10Z|01594|binding|INFO|Claiming lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 for this chassis.
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:55:10 compute-0 ovn_controller[151684]: 2025-10-07T14:55:10Z|01595|binding|INFO|0f10619b-f7b9-4cfb-a093-06e296ce6a63: Claiming fa:16:3e:e0:c5:45 10.100.0.6
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.733 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:c5:45 10.100.0.6'], port_security=['fa:16:3e:e0:c5:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7162764-2824-4b84-93f3-e59b2b2d0365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1fd27387-7887-47bf-8a95-e87828fb4a63 9aa4f30d-9ce8-4452-a71d-172dedad2762', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c238a0b0-d3a1-4d4e-9703-74187cbb70ac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0f10619b-f7b9-4cfb-a093-06e296ce6a63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 in datapath d7162764-2824-4b84-93f3-e59b2b2d0365 bound to our chassis
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.735 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7162764-2824-4b84-93f3-e59b2b2d0365
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:10 compute-0 systemd-udevd[421097]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:55:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9d87e256-a54a-413d-85b5-9396f7b09200 does not exist
Oct 07 14:55:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d6c3a39-01b8-4eab-8b75-27c71f89d804 does not exist
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2a29c1-edaa-4f01-bda4-95b649b44f44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.750 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7162764-21 in ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:55:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7e26ee2b-a2f2-41c7-9e2c-aa6fa318f8f7 does not exist
Oct 07 14:55:10 compute-0 systemd-machined[214580]: New machine qemu-180-instance-00000092.
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.752 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7162764-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.753 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[087bc853-f573-4ca8-adb5-899a1b73f853]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97112972-756b-46f8-8648-0012c666803d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:55:10 compute-0 NetworkManager[44949]: <info>  [1759848910.7650] device (tap0f10619b-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:55:10 compute-0 NetworkManager[44949]: <info>  [1759848910.7663] device (tap0f10619b-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.766 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3b00a56a-0e75-4988-b617-c65f08a0cd73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:55:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:55:10 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[807673f2-3277-4220-b5bb-40e8f42e5fe9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ovn_controller[151684]: 2025-10-07T14:55:10Z|01596|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 ovn-installed in OVS
Oct 07 14:55:10 compute-0 ovn_controller[151684]: 2025-10-07T14:55:10Z|01597|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 up in Southbound
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.827 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4059cc30-431a-4f5d-89ca-06cdf703878b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 systemd-udevd[421101]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:55:10 compute-0 NetworkManager[44949]: <info>  [1759848910.8341] manager: (tapd7162764-20): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.833 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8700d2e6-7b17-471d-bb56-2442098b0494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 sudo[421103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:10 compute-0 sudo[421103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:10 compute-0 sudo[421103]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.871 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f314d3b9-9739-436b-8802-16d6029c992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.877 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15ddeb0b-e17c-4ad9-9e6f-4b3bc84dea35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 NetworkManager[44949]: <info>  [1759848910.9230] device (tapd7162764-20): carrier: link connected
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.929 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4fe259-8a3c-49b2-965e-a0272406abe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 sudo[421152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:55:10 compute-0 sudo[421152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:10 compute-0 sudo[421152]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.955 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54c3370e-be8b-4a0b-ae76-8eb581016a28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7162764-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949448, 'reachable_time': 31641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421180, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6b2af2-7ea2-4143-b6ab-8a9733ddbf37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:68d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 949448, 'tstamp': 949448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421187, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:10 compute-0 nova_compute[259550]: 2025-10-07 14:55:10.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:55:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.995 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84846f98-e8dc-4ce6-98a4-561e60436e15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7162764-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949448, 'reachable_time': 31641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421203, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:11 compute-0 sudo[421181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:11 compute-0 sudo[421181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:11 compute-0 sudo[421181]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.034 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fcbe32-ae6e-4320-9549-672f04a76867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:11 compute-0 sudo[421210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:55:11 compute-0 sudo[421210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.100 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adfe4a55-1ea9-482d-b441-87d6eee5ffcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7162764-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7162764-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:11 compute-0 NetworkManager[44949]: <info>  [1759848911.1057] manager: (tapd7162764-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 07 14:55:11 compute-0 kernel: tapd7162764-20: entered promiscuous mode
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.108 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7162764-20, col_values=(('external_ids', {'iface-id': 'b1fef780-f55f-4980-93e7-c37cceea4385'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:11 compute-0 ovn_controller[151684]: 2025-10-07T14:55:11Z|01598|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.132 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[546e3c02-329d-4046-9289-0d1d52a75257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.134 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-d7162764-2824-4b84-93f3-e59b2b2d0365
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID d7162764-2824-4b84-93f3-e59b2b2d0365
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:55:11 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.136 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'env', 'PROCESS_TAG=haproxy-d7162764-2824-4b84-93f3-e59b2b2d0365', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7162764-2824-4b84-93f3-e59b2b2d0365.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.45337468 +0000 UTC m=+0.054583846 container create ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.484 2 DEBUG nova.compute.manager [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.487 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.487 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.488 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.488 2 DEBUG nova.compute.manager [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Processing event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:55:11 compute-0 systemd[1]: Started libpod-conmon-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope.
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.432289711 +0000 UTC m=+0.033498917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:55:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.559325133 +0000 UTC m=+0.160534349 container init ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.567234092 +0000 UTC m=+0.168443278 container start ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 07 14:55:11 compute-0 podman[421357]: 2025-10-07 14:55:11.569431051 +0000 UTC m=+0.068649028 container create 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:55:11 compute-0 inspiring_yonath[421364]: 167 167
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.575385998 +0000 UTC m=+0.176595184 container attach ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 14:55:11 compute-0 systemd[1]: libpod-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope: Deactivated successfully.
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.581777917 +0000 UTC m=+0.182987093 container died ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 14:55:11 compute-0 systemd[1]: Started libpod-conmon-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope.
Oct 07 14:55:11 compute-0 podman[421357]: 2025-10-07 14:55:11.534397564 +0000 UTC m=+0.033615561 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3a6c09898b294b675b805c74e6ee2d718cbf7403086fb24883c8713ed47c8e-merged.mount: Deactivated successfully.
Oct 07 14:55:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:11 compute-0 podman[421321]: 2025-10-07 14:55:11.653383943 +0000 UTC m=+0.254593129 container remove ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980dd10139229014c16cb6749bcc4d9b0aa60136ed3872e78879cb5d137c8d02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 systemd[1]: libpod-conmon-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope: Deactivated successfully.
Oct 07 14:55:11 compute-0 podman[421357]: 2025-10-07 14:55:11.671633706 +0000 UTC m=+0.170851713 container init 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 07 14:55:11 compute-0 podman[421357]: 2025-10-07 14:55:11.678265001 +0000 UTC m=+0.177482988 container start 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:55:11 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : New worker (421398) forked
Oct 07 14:55:11 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : Loading success.
Oct 07 14:55:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:11 compute-0 podman[421412]: 2025-10-07 14:55:11.83881681 +0000 UTC m=+0.044918400 container create 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.846 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.848 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8475258, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.848 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Started (Lifecycle Event)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.851 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.855 2 INFO nova.virt.libvirt.driver [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance spawned successfully.
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.856 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:55:11 compute-0 systemd[1]: Started libpod-conmon-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope.
Oct 07 14:55:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:11 compute-0 podman[421412]: 2025-10-07 14:55:11.819741215 +0000 UTC m=+0.025842825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:11 compute-0 podman[421412]: 2025-10-07 14:55:11.933830244 +0000 UTC m=+0.139931854 container init 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.939 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.945 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:55:11 compute-0 podman[421412]: 2025-10-07 14:55:11.946382937 +0000 UTC m=+0.152484527 container start 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.950 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.951 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 podman[421412]: 2025-10-07 14:55:11.951358228 +0000 UTC m=+0.157459818 container attach 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.951 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.952 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.953 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.953 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.991 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.992 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8476467, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:55:11 compute-0 nova_compute[259550]: 2025-10-07 14:55:11.992 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Paused (Lifecycle Event)
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.050 2 INFO nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 8.61 seconds to spawn the instance on the hypervisor.
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.050 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.058 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.062 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8507066, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.062 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Resumed (Lifecycle Event)
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.110 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.114 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.228 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.287 2 INFO nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 10.12 seconds to build instance.
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.348 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:12 compute-0 ceph-mon[74295]: pgmap v2788: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:12 compute-0 nova_compute[259550]: 2025-10-07 14:55:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:13 compute-0 gifted_jepsen[421429]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:55:13 compute-0 gifted_jepsen[421429]: --> relative data size: 1.0
Oct 07 14:55:13 compute-0 gifted_jepsen[421429]: --> All data devices are unavailable
Oct 07 14:55:13 compute-0 systemd[1]: libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Deactivated successfully.
Oct 07 14:55:13 compute-0 systemd[1]: libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Consumed 1.058s CPU time.
Oct 07 14:55:13 compute-0 conmon[421429]: conmon 0cdda4a018b220ce555c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope/container/memory.events
Oct 07 14:55:13 compute-0 podman[421412]: 2025-10-07 14:55:13.134577762 +0000 UTC m=+1.340679382 container died 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 14:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269-merged.mount: Deactivated successfully.
Oct 07 14:55:13 compute-0 podman[421412]: 2025-10-07 14:55:13.199198103 +0000 UTC m=+1.405299693 container remove 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 14:55:13 compute-0 systemd[1]: libpod-conmon-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Deactivated successfully.
Oct 07 14:55:13 compute-0 sudo[421210]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:13 compute-0 sudo[421470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:13 compute-0 sudo[421470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:13 compute-0 sudo[421470]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:13 compute-0 sudo[421495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:55:13 compute-0 sudo[421495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:13 compute-0 sudo[421495]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:13 compute-0 sudo[421520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:13 compute-0 sudo[421520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:13 compute-0 sudo[421520]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:13 compute-0 sudo[421545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:55:13 compute-0 sudo[421545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.618 2 DEBUG nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.619 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:55:13 compute-0 nova_compute[259550]: 2025-10-07 14:55:13.621 2 WARNING nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state active and task_state None.
Oct 07 14:55:13 compute-0 ceph-mon[74295]: pgmap v2789: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.799958592 +0000 UTC m=+0.050824627 container create a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:55:13 compute-0 systemd[1]: Started libpod-conmon-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope.
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.777366934 +0000 UTC m=+0.028232989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.913176628 +0000 UTC m=+0.164042853 container init a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.920571264 +0000 UTC m=+0.171437299 container start a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:55:13 compute-0 serene_johnson[421626]: 167 167
Oct 07 14:55:13 compute-0 systemd[1]: libpod-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope: Deactivated successfully.
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.969322394 +0000 UTC m=+0.220188429 container attach a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:55:13 compute-0 podman[421609]: 2025-10-07 14:55:13.969766285 +0000 UTC m=+0.220632320 container died a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c020eb36c4a1e0be63906847e3f187419bf70ce928a1ca37e6cad4def9b18d10-merged.mount: Deactivated successfully.
Oct 07 14:55:14 compute-0 podman[421609]: 2025-10-07 14:55:14.23746636 +0000 UTC m=+0.488332385 container remove a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 07 14:55:14 compute-0 systemd[1]: libpod-conmon-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope: Deactivated successfully.
Oct 07 14:55:14 compute-0 podman[421649]: 2025-10-07 14:55:14.416849957 +0000 UTC m=+0.049034189 container create 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:55:14 compute-0 systemd[1]: Started libpod-conmon-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope.
Oct 07 14:55:14 compute-0 podman[421649]: 2025-10-07 14:55:14.391881826 +0000 UTC m=+0.024066078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:14 compute-0 podman[421649]: 2025-10-07 14:55:14.525558414 +0000 UTC m=+0.157742666 container init 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:55:14 compute-0 podman[421649]: 2025-10-07 14:55:14.534453899 +0000 UTC m=+0.166638131 container start 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:55:14 compute-0 podman[421649]: 2025-10-07 14:55:14.537714296 +0000 UTC m=+0.169898528 container attach 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:55:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]: {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     "0": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "devices": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "/dev/loop3"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             ],
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_name": "ceph_lv0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_size": "21470642176",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "name": "ceph_lv0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "tags": {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_name": "ceph",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.crush_device_class": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.encrypted": "0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_id": "0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.vdo": "0"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             },
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "vg_name": "ceph_vg0"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         }
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     ],
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     "1": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "devices": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "/dev/loop4"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             ],
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_name": "ceph_lv1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_size": "21470642176",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "name": "ceph_lv1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "tags": {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_name": "ceph",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.crush_device_class": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.encrypted": "0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_id": "1",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.vdo": "0"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             },
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "vg_name": "ceph_vg1"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         }
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     ],
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     "2": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "devices": [
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "/dev/loop5"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             ],
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_name": "ceph_lv2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_size": "21470642176",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "name": "ceph_lv2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "tags": {
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.cluster_name": "ceph",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.crush_device_class": "",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.encrypted": "0",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osd_id": "2",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:                 "ceph.vdo": "0"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             },
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "type": "block",
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:             "vg_name": "ceph_vg2"
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:         }
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]:     ]
Oct 07 14:55:15 compute-0 sad_zhukovsky[421665]: }
Oct 07 14:55:15 compute-0 systemd[1]: libpod-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope: Deactivated successfully.
Oct 07 14:55:15 compute-0 podman[421674]: 2025-10-07 14:55:15.406399506 +0000 UTC m=+0.026569644 container died 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 14:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57-merged.mount: Deactivated successfully.
Oct 07 14:55:15 compute-0 podman[421674]: 2025-10-07 14:55:15.505897459 +0000 UTC m=+0.126067577 container remove 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:55:15 compute-0 systemd[1]: libpod-conmon-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope: Deactivated successfully.
Oct 07 14:55:15 compute-0 sudo[421545]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:15 compute-0 NetworkManager[44949]: <info>  [1759848915.6161] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct 07 14:55:15 compute-0 NetworkManager[44949]: <info>  [1759848915.6173] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct 07 14:55:15 compute-0 ovn_controller[151684]: 2025-10-07T14:55:15Z|01599|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct 07 14:55:15 compute-0 nova_compute[259550]: 2025-10-07 14:55:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:15 compute-0 sudo[421689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:15 compute-0 sudo[421689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:15 compute-0 sudo[421689]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:15 compute-0 nova_compute[259550]: 2025-10-07 14:55:15.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:15 compute-0 ovn_controller[151684]: 2025-10-07T14:55:15Z|01600|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct 07 14:55:15 compute-0 nova_compute[259550]: 2025-10-07 14:55:15.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:15 compute-0 sudo[421714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:55:15 compute-0 sudo[421714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:15 compute-0 sudo[421714]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:15 compute-0 sudo[421740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:15 compute-0 sudo[421740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:15 compute-0 sudo[421740]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:15 compute-0 sudo[421765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:55:15 compute-0 sudo[421765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:15 compute-0 ceph-mon[74295]: pgmap v2790: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 07 14:55:16 compute-0 nova_compute[259550]: 2025-10-07 14:55:16.043 2 DEBUG nova.compute.manager [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:16 compute-0 nova_compute[259550]: 2025-10-07 14:55:16.044 2 DEBUG nova.compute.manager [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:55:16 compute-0 nova_compute[259550]: 2025-10-07 14:55:16.044 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:55:16 compute-0 nova_compute[259550]: 2025-10-07 14:55:16.045 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:55:16 compute-0 nova_compute[259550]: 2025-10-07 14:55:16.045 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.186155472 +0000 UTC m=+0.043576544 container create c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:55:16 compute-0 systemd[1]: Started libpod-conmon-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope.
Oct 07 14:55:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.165137696 +0000 UTC m=+0.022558788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.273811822 +0000 UTC m=+0.131232924 container init c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.282303187 +0000 UTC m=+0.139724259 container start c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:55:16 compute-0 practical_aryabhata[421846]: 167 167
Oct 07 14:55:16 compute-0 systemd[1]: libpod-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope: Deactivated successfully.
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.290235726 +0000 UTC m=+0.147656788 container attach c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.290586825 +0000 UTC m=+0.148007897 container died c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 14:55:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-652074d54567be9ad2f1b150a95f4a9e814c4a5f4b92d69b6760724d5b6b16c0-merged.mount: Deactivated successfully.
Oct 07 14:55:16 compute-0 podman[421830]: 2025-10-07 14:55:16.342997793 +0000 UTC m=+0.200418865 container remove c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 14:55:16 compute-0 systemd[1]: libpod-conmon-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope: Deactivated successfully.
Oct 07 14:55:16 compute-0 podman[421870]: 2025-10-07 14:55:16.547214748 +0000 UTC m=+0.058609933 container create 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:55:16 compute-0 systemd[1]: Started libpod-conmon-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope.
Oct 07 14:55:16 compute-0 podman[421870]: 2025-10-07 14:55:16.51933266 +0000 UTC m=+0.030727865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:55:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:55:16 compute-0 podman[421870]: 2025-10-07 14:55:16.638785281 +0000 UTC m=+0.150180486 container init 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:55:16 compute-0 podman[421870]: 2025-10-07 14:55:16.6455694 +0000 UTC m=+0.156964585 container start 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:55:16 compute-0 podman[421870]: 2025-10-07 14:55:16.65046218 +0000 UTC m=+0.161857375 container attach 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:55:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:55:17 compute-0 gallant_hellman[421888]: {
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_id": 2,
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "type": "bluestore"
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     },
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_id": 1,
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "type": "bluestore"
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     },
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_id": 0,
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:         "type": "bluestore"
Oct 07 14:55:17 compute-0 gallant_hellman[421888]:     }
Oct 07 14:55:17 compute-0 gallant_hellman[421888]: }
Oct 07 14:55:17 compute-0 nova_compute[259550]: 2025-10-07 14:55:17.698 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:55:17 compute-0 nova_compute[259550]: 2025-10-07 14:55:17.701 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:55:17 compute-0 nova_compute[259550]: 2025-10-07 14:55:17.723 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:55:17 compute-0 systemd[1]: libpod-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Deactivated successfully.
Oct 07 14:55:17 compute-0 systemd[1]: libpod-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Consumed 1.086s CPU time.
Oct 07 14:55:17 compute-0 podman[421870]: 2025-10-07 14:55:17.732027463 +0000 UTC m=+1.243422658 container died 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:55:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666-merged.mount: Deactivated successfully.
Oct 07 14:55:17 compute-0 podman[421870]: 2025-10-07 14:55:17.881275914 +0000 UTC m=+1.392671099 container remove 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 07 14:55:17 compute-0 systemd[1]: libpod-conmon-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Deactivated successfully.
Oct 07 14:55:17 compute-0 ceph-mon[74295]: pgmap v2791: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 07 14:55:17 compute-0 sudo[421765]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:55:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:55:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 615bafa6-9496-4a09-adfe-a38ae51cafa2 does not exist
Oct 07 14:55:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1dea4b98-2bcc-4bb9-9e1f-f286fb85a19a does not exist
Oct 07 14:55:17 compute-0 nova_compute[259550]: 2025-10-07 14:55:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:17 compute-0 sudo[421935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:55:17 compute-0 sudo[421935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:17 compute-0 sudo[421935]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:18 compute-0 sudo[421960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:55:18 compute-0 sudo[421960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:55:18 compute-0 sudo[421960]: pam_unix(sudo:session): session closed for user root
Oct 07 14:55:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:55:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056866912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.496 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.584 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.585 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.739 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.741 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.742 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.742 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 99 op/s
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.870 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b49c6d60-6c44-4afd-8dbc-521e4c1c31ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.871 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.871 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:55:18 compute-0 nova_compute[259550]: 2025-10-07 14:55:18.918 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:55:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2056866912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:55:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2917906744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:19 compute-0 nova_compute[259550]: 2025-10-07 14:55:19.439 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:19 compute-0 nova_compute[259550]: 2025-10-07 14:55:19.447 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:55:19 compute-0 nova_compute[259550]: 2025-10-07 14:55:19.471 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:55:19 compute-0 nova_compute[259550]: 2025-10-07 14:55:19.525 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:55:19 compute-0 nova_compute[259550]: 2025-10-07 14:55:19.526 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:19 compute-0 ceph-mon[74295]: pgmap v2792: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 99 op/s
Oct 07 14:55:19 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2917906744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:55:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:21 compute-0 ceph-mon[74295]: pgmap v2793: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:55:22
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log']
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:55:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:55:23 compute-0 nova_compute[259550]: 2025-10-07 14:55:23.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:55:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:55:23 compute-0 nova_compute[259550]: 2025-10-07 14:55:23.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:24 compute-0 ceph-mon[74295]: pgmap v2794: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:55:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 14:55:25 compute-0 podman[422030]: 2025-10-07 14:55:25.092821336 +0000 UTC m=+0.070392243 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:55:25 compute-0 podman[422031]: 2025-10-07 14:55:25.110293069 +0000 UTC m=+0.087784684 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 14:55:25 compute-0 nova_compute[259550]: 2025-10-07 14:55:25.527 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:25 compute-0 nova_compute[259550]: 2025-10-07 14:55:25.528 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:55:25 compute-0 nova_compute[259550]: 2025-10-07 14:55:25.553 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:55:25 compute-0 nova_compute[259550]: 2025-10-07 14:55:25.554 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:55:26 compute-0 ceph-mon[74295]: pgmap v2795: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 14:55:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 91 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 527 KiB/s wr, 51 op/s
Oct 07 14:55:26 compute-0 ovn_controller[151684]: 2025-10-07T14:55:26Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:c5:45 10.100.0.6
Oct 07 14:55:26 compute-0 ovn_controller[151684]: 2025-10-07T14:55:26Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:c5:45 10.100.0.6
Oct 07 14:55:28 compute-0 nova_compute[259550]: 2025-10-07 14:55:28.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:28 compute-0 ceph-mon[74295]: pgmap v2796: 305 pgs: 305 active+clean; 91 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 527 KiB/s wr, 51 op/s
Oct 07 14:55:28 compute-0 nova_compute[259550]: 2025-10-07 14:55:28.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 117 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:55:30 compute-0 ceph-mon[74295]: pgmap v2797: 305 pgs: 305 active+clean; 117 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:55:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.824015) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931824102, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1706, "num_deletes": 250, "total_data_size": 2765726, "memory_usage": 2818840, "flush_reason": "Manual Compaction"}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931835380, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1601559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56993, "largest_seqno": 58698, "table_properties": {"data_size": 1595821, "index_size": 2812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15007, "raw_average_key_size": 20, "raw_value_size": 1583216, "raw_average_value_size": 2189, "num_data_blocks": 129, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848752, "oldest_key_time": 1759848752, "file_creation_time": 1759848931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 11410 microseconds, and 4162 cpu microseconds.
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.835434) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1601559 bytes OK
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.835455) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838392) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838407) EVENT_LOG_v1 {"time_micros": 1759848931838402, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2758382, prev total WAL file size 2758382, number of live WAL files 2.
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.839721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1564KB)], [134(10148KB)]
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931839807, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11993210, "oldest_snapshot_seqno": -1}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7865 keys, 9787612 bytes, temperature: kUnknown
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931915091, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9787612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9737630, "index_size": 29197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 204741, "raw_average_key_size": 26, "raw_value_size": 9599807, "raw_average_value_size": 1220, "num_data_blocks": 1143, "num_entries": 7865, "num_filter_entries": 7865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.915626) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9787612 bytes
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.917489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 129.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(13.6) write-amplify(6.1) OK, records in: 8293, records dropped: 428 output_compression: NoCompression
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.917513) EVENT_LOG_v1 {"time_micros": 1759848931917502, "job": 82, "event": "compaction_finished", "compaction_time_micros": 75346, "compaction_time_cpu_micros": 26599, "output_level": 6, "num_output_files": 1, "total_output_size": 9787612, "num_input_records": 8293, "num_output_records": 7865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931918180, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931920781, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.839540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:31 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:32 compute-0 ceph-mon[74295]: pgmap v2798: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:55:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:55:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:55:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:55:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:55:33 compute-0 nova_compute[259550]: 2025-10-07 14:55:33.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:55:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:55:33 compute-0 nova_compute[259550]: 2025-10-07 14:55:33.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:34 compute-0 ceph-mon[74295]: pgmap v2799: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:36 compute-0 ceph-mon[74295]: pgmap v2800: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:55:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:55:38 compute-0 nova_compute[259550]: 2025-10-07 14:55:38.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:38 compute-0 podman[422073]: 2025-10-07 14:55:38.073745746 +0000 UTC m=+0.055271714 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:55:38 compute-0 podman[422074]: 2025-10-07 14:55:38.104768477 +0000 UTC m=+0.084583860 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 07 14:55:38 compute-0 nova_compute[259550]: 2025-10-07 14:55:38.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:38 compute-0 ceph-mon[74295]: pgmap v2801: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 07 14:55:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:55:40 compute-0 ceph-mon[74295]: pgmap v2802: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct 07 14:55:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 62 KiB/s wr, 2 op/s
Oct 07 14:55:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:43 compute-0 nova_compute[259550]: 2025-10-07 14:55:43.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:43 compute-0 ceph-mon[74295]: pgmap v2803: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 62 KiB/s wr, 2 op/s
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.568429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943568534, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 251, "total_data_size": 182906, "memory_usage": 189576, "flush_reason": "Manual Compaction"}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 07 14:55:43 compute-0 nova_compute[259550]: 2025-10-07 14:55:43.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943696133, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 181444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58699, "largest_seqno": 59042, "table_properties": {"data_size": 179264, "index_size": 343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5402, "raw_average_key_size": 18, "raw_value_size": 175033, "raw_average_value_size": 599, "num_data_blocks": 15, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848932, "oldest_key_time": 1759848932, "file_creation_time": 1759848943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 127753 microseconds, and 1983 cpu microseconds.
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.696188) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 181444 bytes OK
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.696213) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760296) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760348) EVENT_LOG_v1 {"time_micros": 1759848943760337, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 180559, prev total WAL file size 180559, number of live WAL files 2.
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760985) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(177KB)], [137(9558KB)]
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943761016, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 9969056, "oldest_snapshot_seqno": -1}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7648 keys, 8238760 bytes, temperature: kUnknown
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943883053, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8238760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8191734, "index_size": 26817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200924, "raw_average_key_size": 26, "raw_value_size": 8059230, "raw_average_value_size": 1053, "num_data_blocks": 1034, "num_entries": 7648, "num_filter_entries": 7648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.883290) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8238760 bytes
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.941311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.6 rd, 67.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(100.3) write-amplify(45.4) OK, records in: 8157, records dropped: 509 output_compression: NoCompression
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.941346) EVENT_LOG_v1 {"time_micros": 1759848943941333, "job": 84, "event": "compaction_finished", "compaction_time_micros": 122116, "compaction_time_cpu_micros": 22938, "output_level": 6, "num_output_files": 1, "total_output_size": 8238760, "num_input_records": 8157, "num_output_records": 7648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943941565, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943943306, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:43 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:55:44 compute-0 ceph-mon[74295]: pgmap v2804: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:45 compute-0 ovn_controller[151684]: 2025-10-07T14:55:45Z|01601|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 07 14:55:45 compute-0 ceph-mon[74295]: pgmap v2805: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:48 compute-0 nova_compute[259550]: 2025-10-07 14:55:48.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:48 compute-0 ceph-mon[74295]: pgmap v2806: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 07 14:55:48 compute-0 nova_compute[259550]: 2025-10-07 14:55:48.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s wr, 0 op/s
Oct 07 14:55:49 compute-0 ceph-mon[74295]: pgmap v2807: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s wr, 0 op/s
Oct 07 14:55:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 07 14:55:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:51 compute-0 ceph-mon[74295]: pgmap v2808: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.734 2 DEBUG nova.compute.manager [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.734 2 DEBUG nova.compute.manager [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.824 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.827 2 INFO nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Terminating instance
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.827 2 DEBUG nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:55:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:55:52 compute-0 kernel: tap0f10619b-f7 (unregistering): left promiscuous mode
Oct 07 14:55:52 compute-0 NetworkManager[44949]: <info>  [1759848952.8881] device (tap0f10619b-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:55:52 compute-0 ovn_controller[151684]: 2025-10-07T14:55:52Z|01602|binding|INFO|Releasing lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 from this chassis (sb_readonly=0)
Oct 07 14:55:52 compute-0 ovn_controller[151684]: 2025-10-07T14:55:52Z|01603|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 down in Southbound
Oct 07 14:55:52 compute-0 ovn_controller[151684]: 2025-10-07T14:55:52Z|01604|binding|INFO|Removing iface tap0f10619b-f7 ovn-installed in OVS
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.902 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:c5:45 10.100.0.6'], port_security=['fa:16:3e:e0:c5:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7162764-2824-4b84-93f3-e59b2b2d0365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1fd27387-7887-47bf-8a95-e87828fb4a63 9aa4f30d-9ce8-4452-a71d-172dedad2762', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c238a0b0-d3a1-4d4e-9703-74187cbb70ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0f10619b-f7b9-4cfb-a093-06e296ce6a63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:55:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.904 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 in datapath d7162764-2824-4b84-93f3-e59b2b2d0365 unbound from our chassis
Oct 07 14:55:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.904 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7162764-2824-4b84-93f3-e59b2b2d0365, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:55:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aefd6fef-7d14-4c5a-9576-32e995795a6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:52 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.906 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 namespace which is not needed anymore
Oct 07 14:55:52 compute-0 nova_compute[259550]: 2025-10-07 14:55:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:52 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 07 14:55:52 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 15.120s CPU time.
Oct 07 14:55:52 compute-0 systemd-machined[214580]: Machine qemu-180-instance-00000092 terminated.
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : haproxy version is 2.8.14-c23fe91
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : path to executable is /usr/sbin/haproxy
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : Exiting Master process...
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : Exiting Master process...
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [ALERT]    (421396) : Current worker (421398) exited with code 143 (Terminated)
Oct 07 14:55:53 compute-0 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : All workers exited. Exiting... (0)
Oct 07 14:55:53 compute-0 systemd[1]: libpod-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope: Deactivated successfully.
Oct 07 14:55:53 compute-0 podman[422143]: 2025-10-07 14:55:53.065320939 +0000 UTC m=+0.063196864 container died 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.068 2 INFO nova.virt.libvirt.driver [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance destroyed successfully.
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.068 2 DEBUG nova.objects.instance [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.090 2 DEBUG nova.virt.libvirt.vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:55:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:55:12Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.091 2 DEBUG nova.network.os_vif_util [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.092 2 DEBUG nova.network.os_vif_util [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.092 2 DEBUG os_vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f10619b-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.099 2 INFO os_vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7')
Oct 07 14:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275-userdata-shm.mount: Deactivated successfully.
Oct 07 14:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-980dd10139229014c16cb6749bcc4d9b0aa60136ed3872e78879cb5d137c8d02-merged.mount: Deactivated successfully.
Oct 07 14:55:53 compute-0 podman[422143]: 2025-10-07 14:55:53.163107547 +0000 UTC m=+0.160983472 container cleanup 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 14:55:53 compute-0 systemd[1]: libpod-conmon-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope: Deactivated successfully.
Oct 07 14:55:53 compute-0 podman[422200]: 2025-10-07 14:55:53.245008115 +0000 UTC m=+0.059342292 container remove 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b00ca7ad-e0e2-4d78-8a81-ddbd1f6784e5]: (4, ('Tue Oct  7 02:55:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 (14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275)\n14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275\nTue Oct  7 02:55:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 (14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275)\n14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe5b746-1eff-47e3-b6b7-398319859c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.254 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7162764-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 kernel: tapd7162764-20: left promiscuous mode
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.283 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70e6c077-9715-463a-a4c3-cf485f6bba82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6d3aca-a344-4ea4-81fa-7d9d9802d781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.311 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a645e4e-e344-4a37-94e0-1d2806181544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.326 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[031766d2-57cc-47d2-9306-8497dfba1830]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949438, 'reachable_time': 23730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422215, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7162764\x2d2824\x2d4b84\x2d93f3\x2de59b2b2d0365.mount: Deactivated successfully.
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.330 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:55:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.330 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1cd18e-0520-4044-a2e1-df346a266d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.723 2 INFO nova.virt.libvirt.driver [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deleting instance files /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_del
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.724 2 INFO nova.virt.libvirt.driver [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deletion of /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_del complete
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.785 2 INFO nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.786 2 DEBUG oslo.service.loopingcall [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.787 2 DEBUG nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:55:53 compute-0 nova_compute[259550]: 2025-10-07 14:55:53.787 2 DEBUG nova.network.neutron [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:55:53 compute-0 ceph-mon[74295]: pgmap v2809: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.243 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.244 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.267 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.594 2 DEBUG nova.network.neutron [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.616 2 INFO nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 0.83 seconds to deallocate network for instance.
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.658 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.659 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.720 2 DEBUG oslo_concurrency.processutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.811 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.812 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.813 2 WARNING nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state deleted and task_state None.
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.815 2 WARNING nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state deleted and task_state None.
Oct 07 14:55:54 compute-0 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-deleted-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:55:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 75 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Oct 07 14:55:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:55:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204438503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.160 2 DEBUG oslo_concurrency.processutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.166 2 DEBUG nova.compute.provider_tree [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.184 2 DEBUG nova.scheduler.client.report [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.204 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.229 2 INFO nova.scheduler.client.report [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance b49c6d60-6c44-4afd-8dbc-521e4c1c31ac
Oct 07 14:55:55 compute-0 nova_compute[259550]: 2025-10-07 14:55:55.309 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:55:56 compute-0 podman[422240]: 2025-10-07 14:55:56.068081927 +0000 UTC m=+0.058389237 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:55:56 compute-0 podman[422239]: 2025-10-07 14:55:56.069166056 +0000 UTC m=+0.061304634 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct 07 14:55:56 compute-0 ceph-mon[74295]: pgmap v2810: 305 pgs: 305 active+clean; 75 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Oct 07 14:55:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4204438503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:55:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:56.308 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:55:56 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:55:56.309 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:55:56 compute-0 nova_compute[259550]: 2025-10-07 14:55:56.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:55:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 07 14:55:58 compute-0 nova_compute[259550]: 2025-10-07 14:55:58.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:58 compute-0 ceph-mon[74295]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 07 14:55:58 compute-0 nova_compute[259550]: 2025-10-07 14:55:58.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:58 compute-0 nova_compute[259550]: 2025-10-07 14:55:58.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:58 compute-0 nova_compute[259550]: 2025-10-07 14:55:58.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:55:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.093 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:00 compute-0 ceph-mon[74295]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:01 compute-0 ceph-mon[74295]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:02 compute-0 nova_compute[259550]: 2025-10-07 14:56:02.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:03 compute-0 nova_compute[259550]: 2025-10-07 14:56:03.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:03.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:03 compute-0 nova_compute[259550]: 2025-10-07 14:56:03.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:04 compute-0 ceph-mon[74295]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:06 compute-0 ceph-mon[74295]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:56:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:56:06 compute-0 nova_compute[259550]: 2025-10-07 14:56:06.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:07 compute-0 nova_compute[259550]: 2025-10-07 14:56:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:07 compute-0 nova_compute[259550]: 2025-10-07 14:56:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:08 compute-0 nova_compute[259550]: 2025-10-07 14:56:08.067 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848953.0660236, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:56:08 compute-0 nova_compute[259550]: 2025-10-07 14:56:08.067 2 INFO nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Stopped (Lifecycle Event)
Oct 07 14:56:08 compute-0 nova_compute[259550]: 2025-10-07 14:56:08.093 2 DEBUG nova.compute.manager [None req-5da3938b-6b65-48fb-9a81-a31d70c9924c - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:56:08 compute-0 ceph-mon[74295]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:56:08 compute-0 nova_compute[259550]: 2025-10-07 14:56:08.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:08 compute-0 nova_compute[259550]: 2025-10-07 14:56:08.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 07 14:56:09 compute-0 podman[422281]: 2025-10-07 14:56:09.054779999 +0000 UTC m=+0.047729914 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:56:09 compute-0 podman[422282]: 2025-10-07 14:56:09.082873443 +0000 UTC m=+0.075321014 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 14:56:10 compute-0 ceph-mon[74295]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 07 14:56:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:11 compute-0 nova_compute[259550]: 2025-10-07 14:56:11.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:11 compute-0 nova_compute[259550]: 2025-10-07 14:56:11.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:56:12 compute-0 ceph-mon[74295]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:13 compute-0 nova_compute[259550]: 2025-10-07 14:56:13.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:13 compute-0 nova_compute[259550]: 2025-10-07 14:56:13.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:13 compute-0 ceph-mon[74295]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:13 compute-0 nova_compute[259550]: 2025-10-07 14:56:13.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:16 compute-0 ceph-mon[74295]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:16 compute-0 nova_compute[259550]: 2025-10-07 14:56:16.803 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:16 compute-0 nova_compute[259550]: 2025-10-07 14:56:16.803 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:16 compute-0 nova_compute[259550]: 2025-10-07 14:56:16.939 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.210 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.211 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.220 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.220 2 INFO nova.compute.claims [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.362 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:56:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418693607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.833 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.838 2 DEBUG nova.compute.provider_tree [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.858 2 DEBUG nova.scheduler.client.report [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.887 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.888 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.944 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.945 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.967 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:56:17 compute-0 nova_compute[259550]: 2025-10-07 14:56:17.987 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.076 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.077 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.078 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating image(s)
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.102 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.128 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:18 compute-0 sudo[422347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:18 compute-0 sudo[422347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:18 compute-0 sudo[422347]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.153 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.158 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:18 compute-0 sudo[422423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:18 compute-0 sudo[422423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:18 compute-0 sudo[422423]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.237 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.239 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.239 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.240 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:18 compute-0 sudo[422452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:18 compute-0 sudo[422452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:18 compute-0 sudo[422452]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.263 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.267 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.302 2 DEBUG nova.policy [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:56:18 compute-0 sudo[422497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 14:56:18 compute-0 sudo[422497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:18 compute-0 ceph-mon[74295]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:56:18 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/418693607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 3 op/s
Oct 07 14:56:18 compute-0 nova_compute[259550]: 2025-10-07 14:56:18.992 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Successfully created port: 6b0ce98f-f482-4715-a60b-a5b393b822f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:56:19 compute-0 podman[422612]: 2025-10-07 14:56:19.057688207 +0000 UTC m=+0.295381998 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:56:19 compute-0 podman[422634]: 2025-10-07 14:56:19.422285666 +0000 UTC m=+0.237897627 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:56:19 compute-0 podman[422612]: 2025-10-07 14:56:19.629697375 +0000 UTC m=+0.867391186 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:56:19 compute-0 ceph-mon[74295]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 3 op/s
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.760 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.824 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Successfully updated port: 6b0ce98f-f482-4715-a60b-a5b393b822f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.867 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.991 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG nova.compute.manager [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG nova.compute.manager [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.994 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.994 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:56:19 compute-0 nova_compute[259550]: 2025-10-07 14:56:19.996 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.020 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.020 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.223 2 DEBUG nova.objects.instance [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.231 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.241 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Ensure instance console log exists: /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.243 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:20 compute-0 sudo[422497]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:56:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:56:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:20 compute-0 sudo[422864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:20 compute-0 sudo[422864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:20 compute-0 sudo[422864]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:56:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049733192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.556 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.575 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.577 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.577 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.579 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:20 compute-0 sudo[422889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:56:20 compute-0 sudo[422889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:20 compute-0 sudo[422889]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:20 compute-0 sudo[422917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:20 compute-0 sudo[422917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:20 compute-0 sudo[422917]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:20 compute-0 sudo[422942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:56:20 compute-0 sudo[422942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.752 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.763 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.831 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.832 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.833 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:56:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct 07 14:56:20 compute-0 nova_compute[259550]: 2025-10-07 14:56:20.871 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:21 compute-0 sudo[422942]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1b7453e9-a58e-4451-85a9-6bfc45015c8b does not exist
Oct 07 14:56:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ee918ae6-2cb2-4019-9761-daaa0b126d47 does not exist
Oct 07 14:56:21 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f6d4539b-8f88-4708-965a-289cde9acb3f does not exist
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:56:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699554650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.348 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.355 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:56:21 compute-0 sudo[423016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:21 compute-0 sudo[423016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:21 compute-0 sudo[423016]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.374 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.399 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.400 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:21 compute-0 sudo[423043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:56:21 compute-0 sudo[423043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:21 compute-0 sudo[423043]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:21 compute-0 sudo[423068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:21 compute-0 sudo[423068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:21 compute-0 sudo[423068]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:21 compute-0 sudo[423093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:56:21 compute-0 sudo[423093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2049733192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:56:21 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/699554650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.594 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.618 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.619 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance network_info: |[{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.623 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start _get_guest_xml network_info=[{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.630 2 WARNING nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.637 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.638 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.645 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:56:21 compute-0 nova_compute[259550]: 2025-10-07 14:56:21.653 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:21 compute-0 podman[423157]: 2025-10-07 14:56:21.832364078 +0000 UTC m=+0.030579480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:21 compute-0 podman[423157]: 2025-10-07 14:56:21.933840964 +0000 UTC m=+0.132056336 container create 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:56:22 compute-0 systemd[1]: Started libpod-conmon-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope.
Oct 07 14:56:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:56:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988443510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:56:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.144 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.169 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.173 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:22 compute-0 podman[423157]: 2025-10-07 14:56:22.223455609 +0000 UTC m=+0.421671001 container init 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:56:22 compute-0 podman[423157]: 2025-10-07 14:56:22.231616654 +0000 UTC m=+0.429832026 container start 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:56:22 compute-0 beautiful_bell[423193]: 167 167
Oct 07 14:56:22 compute-0 systemd[1]: libpod-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope: Deactivated successfully.
Oct 07 14:56:22 compute-0 podman[423157]: 2025-10-07 14:56:22.304500673 +0000 UTC m=+0.502716065 container attach 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 14:56:22 compute-0 podman[423157]: 2025-10-07 14:56:22.304880974 +0000 UTC m=+0.503096346 container died 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 07 14:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a88d8996b1a84837a668bdde9ef353111b808a7270c4a00c8faa2f0d5e7bcede-merged.mount: Deactivated successfully.
Oct 07 14:56:22 compute-0 ceph-mon[74295]: pgmap v2823: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct 07 14:56:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2988443510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:56:22 compute-0 podman[423157]: 2025-10-07 14:56:22.610140303 +0000 UTC m=+0.808355675 container remove 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:56:22 compute-0 systemd[1]: libpod-conmon-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope: Deactivated successfully.
Oct 07 14:56:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:56:22 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639954895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.691 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.695 2 DEBUG nova.virt.libvirt.vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:18Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.695 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.697 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.699 2 DEBUG nova.objects.instance [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.722 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <uuid>1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</uuid>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <name>instance-00000093</name>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046</nova:name>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:56:21</nova:creationTime>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <nova:port uuid="6b0ce98f-f482-4715-a60b-a5b393b822f1">
Oct 07 14:56:22 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <system>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="serial">1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="uuid">1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </system>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <os>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </os>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <features>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </features>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk">
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config">
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </source>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:56:22 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:87:48:40"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <target dev="tap6b0ce98f-f4"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/console.log" append="off"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <video>
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </video>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:56:22 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:56:22 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:56:22 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:56:22 compute-0 nova_compute[259550]: </domain>
Oct 07 14:56:22 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.724 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Preparing to wait for external event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.724 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.725 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.725 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.726 2 DEBUG nova.virt.libvirt.vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:18Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.726 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.727 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.727 2 DEBUG os_vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b0ce98f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b0ce98f-f4, col_values=(('external_ids', {'iface-id': '6b0ce98f-f482-4715-a60b-a5b393b822f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:48:40', 'vm-uuid': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:22 compute-0 NetworkManager[44949]: <info>  [1759848982.7386] manager: (tap6b0ce98f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.751 2 INFO os_vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4')
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:56:22
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'backups', 'default.rgw.meta']
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.802 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.803 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.803 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:87:48:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.804 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Using config drive
Oct 07 14:56:22 compute-0 nova_compute[259550]: 2025-10-07 14:56:22.840 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct 07 14:56:22 compute-0 podman[423261]: 2025-10-07 14:56:22.86022458 +0000 UTC m=+0.079822853 container create da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:56:22 compute-0 podman[423261]: 2025-10-07 14:56:22.814428219 +0000 UTC m=+0.034026502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:22 compute-0 systemd[1]: Started libpod-conmon-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope.
Oct 07 14:56:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:23 compute-0 podman[423261]: 2025-10-07 14:56:23.122087821 +0000 UTC m=+0.341686114 container init da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:56:23 compute-0 podman[423261]: 2025-10-07 14:56:23.133614937 +0000 UTC m=+0.353213210 container start da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:56:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:56:23 compute-0 podman[423261]: 2025-10-07 14:56:23.300151354 +0000 UTC m=+0.519749627 container attach da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.304 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating config drive at /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.312 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy2iphfz0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.389 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.477 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy2iphfz0" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.503 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.508 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:23 compute-0 nova_compute[259550]: 2025-10-07 14:56:23.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1639954895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:56:24 compute-0 pedantic_bouman[423295]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:56:24 compute-0 pedantic_bouman[423295]: --> relative data size: 1.0
Oct 07 14:56:24 compute-0 pedantic_bouman[423295]: --> All data devices are unavailable
Oct 07 14:56:24 compute-0 systemd[1]: libpod-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Deactivated successfully.
Oct 07 14:56:24 compute-0 systemd[1]: libpod-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Consumed 1.095s CPU time.
Oct 07 14:56:24 compute-0 podman[423261]: 2025-10-07 14:56:24.647761968 +0000 UTC m=+1.867360251 container died da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:56:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:56:24 compute-0 ceph-mon[74295]: pgmap v2824: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct 07 14:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f-merged.mount: Deactivated successfully.
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.182 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.184 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deleting local config drive /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config because it was imported into RBD.
Oct 07 14:56:25 compute-0 kernel: tap6b0ce98f-f4: entered promiscuous mode
Oct 07 14:56:25 compute-0 ovn_controller[151684]: 2025-10-07T14:56:25Z|01605|binding|INFO|Claiming lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 for this chassis.
Oct 07 14:56:25 compute-0 ovn_controller[151684]: 2025-10-07T14:56:25Z|01606|binding|INFO|6b0ce98f-f482-4715-a60b-a5b393b822f1: Claiming fa:16:3e:87:48:40 10.100.0.8
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.2564] manager: (tap6b0ce98f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 07 14:56:25 compute-0 podman[423261]: 2025-10-07 14:56:25.272656725 +0000 UTC m=+2.492254998 container remove da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:56:25 compute-0 systemd[1]: libpod-conmon-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Deactivated successfully.
Oct 07 14:56:25 compute-0 systemd-machined[214580]: New machine qemu-181-instance-00000093.
Oct 07 14:56:25 compute-0 systemd-udevd[423390]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:56:25 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Oct 07 14:56:25 compute-0 sudo[423093]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.3130] device (tap6b0ce98f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.3144] device (tap6b0ce98f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 ovn_controller[151684]: 2025-10-07T14:56:25Z|01607|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 ovn-installed in OVS
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 ovn_controller[151684]: 2025-10-07T14:56:25Z|01608|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 up in Southbound
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.344 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.345 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 bound to our chassis
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.346 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea48d4-1c35-4f4e-a3c7-49034052b8c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.364 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde9e106e-f1 in ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.366 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde9e106e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2de0944f-7d62-4e9d-b5c2-2873dfad0534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5054d5e9-b9c9-4586-8d5c-fcb941f73b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.380 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5184d496-6a00-4752-823d-e4d1495167af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 sudo[423393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:25 compute-0 sudo[423393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:25 compute-0 sudo[423393]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee95c29b-e588-4eb9-a0ae-ba14902163b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.447 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f3069a-52a4-4020-a3b6-0f052a1169db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.452 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a9724f-3653-45b8-95ee-4b9098244a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.4538] manager: (tapde9e106e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/649)
Oct 07 14:56:25 compute-0 systemd-udevd[423392]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:56:25 compute-0 sudo[423427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:56:25 compute-0 sudo[423427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:25 compute-0 sudo[423427]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.498 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2395a4ef-813b-4e54-8f3a-02615800612d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.502 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3f63a79e-f3c8-4128-924d-b42b87711093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.5278] device (tapde9e106e-f0): carrier: link connected
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.533 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3aef7bea-3ede-437a-94df-c1632cdda675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 sudo[423471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:25 compute-0 sudo[423471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:25 compute-0 sudo[423471]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.552 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b53d2196-dccc-42cb-b83d-0a099419ff66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423496, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.568 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd116021-a68d-4c88-a2a1-bac1fa9d54fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:a2b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956909, 'tstamp': 956909}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423499, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf84466c-ce08-40f4-8fa9-829f61bde85f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 423507, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 sudo[423500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:56:25 compute-0 sudo[423500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[134d219a-f522-43a5-b2bd-aebfd3be00d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc8c12c-7a16-422a-a1e4-b33328dc75a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 NetworkManager[44949]: <info>  [1759848985.7126] manager: (tapde9e106e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Oct 07 14:56:25 compute-0 kernel: tapde9e106e-f0: entered promiscuous mode
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 ovn_controller[151684]: 2025-10-07T14:56:25Z|01609|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.732 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9afdba2-a0fa-4d5c-bbca-a54e2fc45919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.734 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-de9e106e-fa40-4336-9ad7-f811682b66d2
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID de9e106e-fa40-4336-9ad7-f811682b66d2
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:56:25 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.735 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'env', 'PROCESS_TAG=haproxy-de9e106e-fa40-4336-9ad7-f811682b66d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de9e106e-fa40-4336-9ad7-f811682b66d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:56:25 compute-0 nova_compute[259550]: 2025-10-07 14:56:25.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.015961587 +0000 UTC m=+0.062226937 container create c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:26 compute-0 systemd[1]: Started libpod-conmon-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope.
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:25.978508635 +0000 UTC m=+0.024774015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.177523763 +0000 UTC m=+0.223789143 container init c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.189035148 +0000 UTC m=+0.235300498 container start c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 14:56:26 compute-0 ceph-mon[74295]: pgmap v2825: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:56:26 compute-0 stoic_borg[423649]: 167 167
Oct 07 14:56:26 compute-0 systemd[1]: libpod-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope: Deactivated successfully.
Oct 07 14:56:26 compute-0 podman[423650]: 2025-10-07 14:56:26.129503922 +0000 UTC m=+0.046535953 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.208640437 +0000 UTC m=+0.254905817 container attach c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.211133123 +0000 UTC m=+0.257398483 container died c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 14:56:26 compute-0 podman[423650]: 2025-10-07 14:56:26.236987677 +0000 UTC m=+0.154019718 container create a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 14:56:26 compute-0 systemd[1]: Started libpod-conmon-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope.
Oct 07 14:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec8982c3297fd74b8e18954d96ec6a24520b1c950eb8b1e62bb2b7bd1ad61e0-merged.mount: Deactivated successfully.
Oct 07 14:56:26 compute-0 podman[423664]: 2025-10-07 14:56:26.297807187 +0000 UTC m=+0.158528448 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 14:56:26 compute-0 podman[423614]: 2025-10-07 14:56:26.307841702 +0000 UTC m=+0.354107052 container remove c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:56:26 compute-0 systemd[1]: libpod-conmon-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope: Deactivated successfully.
Oct 07 14:56:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febe76e8dd5c30f47b13b78fd42131538300a58f00e7858de7bede20aa45a62a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:26 compute-0 podman[423650]: 2025-10-07 14:56:26.363625859 +0000 UTC m=+0.280657910 container init a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:26 compute-0 podman[423650]: 2025-10-07 14:56:26.3723638 +0000 UTC m=+0.289395841 container start a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 07 14:56:26 compute-0 podman[423663]: 2025-10-07 14:56:26.388957389 +0000 UTC m=+0.254326032 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG nova.compute.manager [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.393 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.393 2 DEBUG nova.compute.manager [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Processing event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:56:26 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : New worker (423731) forked
Oct 07 14:56:26 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : Loading success.
Oct 07 14:56:26 compute-0 podman[423745]: 2025-10-07 14:56:26.510905486 +0000 UTC m=+0.047037266 container create 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 14:56:26 compute-0 systemd[1]: Started libpod-conmon-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope.
Oct 07 14:56:26 compute-0 podman[423745]: 2025-10-07 14:56:26.4921956 +0000 UTC m=+0.028327400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.631 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6305578, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Started (Lifecycle Event)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.636 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:56:26 compute-0 podman[423745]: 2025-10-07 14:56:26.641389209 +0000 UTC m=+0.177521019 container init 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.641 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.647 2 INFO nova.virt.libvirt.driver [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance spawned successfully.
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.647 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:56:26 compute-0 podman[423745]: 2025-10-07 14:56:26.651409744 +0000 UTC m=+0.187541524 container start 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.656 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:56:26 compute-0 podman[423745]: 2025-10-07 14:56:26.658918923 +0000 UTC m=+0.195050723 container attach 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.665 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.670 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.670 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.672 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.699 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.700 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6307867, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.700 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Paused (Lifecycle Event)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.726 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.731 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6400173, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.731 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Resumed (Lifecycle Event)
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.739 2 INFO nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 8.66 seconds to spawn the instance on the hypervisor.
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.740 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.753 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.786 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.820 2 INFO nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 9.64 seconds to build instance.
Oct 07 14:56:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:26 compute-0 nova_compute[259550]: 2025-10-07 14:56:26.840 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]: {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     "0": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "devices": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "/dev/loop3"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             ],
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_name": "ceph_lv0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_size": "21470642176",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "name": "ceph_lv0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "tags": {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_name": "ceph",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.crush_device_class": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.encrypted": "0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_id": "0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.vdo": "0"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             },
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "vg_name": "ceph_vg0"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         }
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     ],
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     "1": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "devices": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "/dev/loop4"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             ],
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_name": "ceph_lv1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_size": "21470642176",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "name": "ceph_lv1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "tags": {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_name": "ceph",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.crush_device_class": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.encrypted": "0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_id": "1",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.vdo": "0"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             },
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "vg_name": "ceph_vg1"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         }
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     ],
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     "2": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "devices": [
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "/dev/loop5"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             ],
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_name": "ceph_lv2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_size": "21470642176",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "name": "ceph_lv2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "tags": {
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.cluster_name": "ceph",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.crush_device_class": "",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.encrypted": "0",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osd_id": "2",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:                 "ceph.vdo": "0"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             },
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "type": "block",
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:             "vg_name": "ceph_vg2"
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:         }
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]:     ]
Oct 07 14:56:27 compute-0 interesting_nightingale[423760]: }
Oct 07 14:56:27 compute-0 systemd[1]: libpod-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope: Deactivated successfully.
Oct 07 14:56:27 compute-0 podman[423745]: 2025-10-07 14:56:27.613329561 +0000 UTC m=+1.149461341 container died 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f-merged.mount: Deactivated successfully.
Oct 07 14:56:27 compute-0 podman[423745]: 2025-10-07 14:56:27.677747676 +0000 UTC m=+1.213879456 container remove 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:27 compute-0 systemd[1]: libpod-conmon-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope: Deactivated successfully.
Oct 07 14:56:27 compute-0 sudo[423500]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:27 compute-0 nova_compute[259550]: 2025-10-07 14:56:27.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:27 compute-0 sudo[423780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:27 compute-0 sudo[423780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:27 compute-0 sudo[423780]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:27 compute-0 sudo[423805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:56:27 compute-0 sudo[423805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:27 compute-0 sudo[423805]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:27 compute-0 sudo[423830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:27 compute-0 sudo[423830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:27 compute-0 sudo[423830]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:27 compute-0 sudo[423855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:56:27 compute-0 sudo[423855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:28 compute-0 ceph-mon[74295]: pgmap v2826: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.428484104 +0000 UTC m=+0.048731451 container create 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:56:28 compute-0 systemd[1]: Started libpod-conmon-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope.
Oct 07 14:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.407490929 +0000 UTC m=+0.027738296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.514761718 +0000 UTC m=+0.135009095 container init 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.52316224 +0000 UTC m=+0.143409587 container start 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.527250848 +0000 UTC m=+0.147498195 container attach 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:56:28 compute-0 nervous_tharp[423934]: 167 167
Oct 07 14:56:28 compute-0 systemd[1]: libpod-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope: Deactivated successfully.
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.533093123 +0000 UTC m=+0.153340470 container died 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.551 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.551 2 DEBUG nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.551 2 WARNING nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received unexpected event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with vm_state active and task_state None.
Oct 07 14:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4069cfee9c1f7f7607216075dd0c90b2654f4aa67053720b052db6f962833216-merged.mount: Deactivated successfully.
Oct 07 14:56:28 compute-0 podman[423918]: 2025-10-07 14:56:28.582244334 +0000 UTC m=+0.202491691 container remove 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:56:28 compute-0 systemd[1]: libpod-conmon-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope: Deactivated successfully.
Oct 07 14:56:28 compute-0 nova_compute[259550]: 2025-10-07 14:56:28.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:28 compute-0 podman[423960]: 2025-10-07 14:56:28.771899453 +0000 UTC m=+0.047793376 container create 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:56:28 compute-0 systemd[1]: Started libpod-conmon-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope.
Oct 07 14:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:56:28 compute-0 podman[423960]: 2025-10-07 14:56:28.75136783 +0000 UTC m=+0.027261783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:56:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 07 14:56:28 compute-0 podman[423960]: 2025-10-07 14:56:28.864761501 +0000 UTC m=+0.140655444 container init 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 07 14:56:28 compute-0 podman[423960]: 2025-10-07 14:56:28.875125815 +0000 UTC m=+0.151019738 container start 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:56:28 compute-0 podman[423960]: 2025-10-07 14:56:28.883749323 +0000 UTC m=+0.159643266 container attach 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]: {
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_id": 2,
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "type": "bluestore"
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     },
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_id": 1,
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "type": "bluestore"
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     },
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_id": 0,
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:         "type": "bluestore"
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]:     }
Oct 07 14:56:29 compute-0 kind_mccarthy[423977]: }
Oct 07 14:56:29 compute-0 systemd[1]: libpod-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Deactivated successfully.
Oct 07 14:56:29 compute-0 systemd[1]: libpod-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Consumed 1.119s CPU time.
Oct 07 14:56:29 compute-0 podman[423960]: 2025-10-07 14:56:29.996302567 +0000 UTC m=+1.272196500 container died 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a-merged.mount: Deactivated successfully.
Oct 07 14:56:30 compute-0 podman[423960]: 2025-10-07 14:56:30.062010566 +0000 UTC m=+1.337904489 container remove 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 14:56:30 compute-0 systemd[1]: libpod-conmon-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Deactivated successfully.
Oct 07 14:56:30 compute-0 sudo[423855]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:56:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:56:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:30 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a8ff7e7-d0b9-41e1-bb64-8a1732029abe does not exist
Oct 07 14:56:30 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev fd470abd-1253-46ad-8048-72530a644d75 does not exist
Oct 07 14:56:30 compute-0 sudo[424021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:56:30 compute-0 sudo[424021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:30 compute-0 sudo[424021]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:30 compute-0 ceph-mon[74295]: pgmap v2827: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 07 14:56:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:30 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:56:30 compute-0 sudo[424046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:56:30 compute-0 sudo[424046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:56:30 compute-0 sudo[424046]: pam_unix(sudo:session): session closed for user root
Oct 07 14:56:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 07 14:56:31 compute-0 NetworkManager[44949]: <info>  [1759848991.3571] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Oct 07 14:56:31 compute-0 NetworkManager[44949]: <info>  [1759848991.3581] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Oct 07 14:56:31 compute-0 ovn_controller[151684]: 2025-10-07T14:56:31Z|01610|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:31 compute-0 ovn_controller[151684]: 2025-10-07T14:56:31Z|01611|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.810 2 DEBUG nova.compute.manager [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG nova.compute.manager [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:56:31 compute-0 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:56:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:32 compute-0 ceph-mon[74295]: pgmap v2828: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 07 14:56:32 compute-0 nova_compute[259550]: 2025-10-07 14:56:32.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:56:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:56:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:56:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:56:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 07 14:56:33 compute-0 nova_compute[259550]: 2025-10-07 14:56:33.253 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated VIF entry in instance network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:56:33 compute-0 nova_compute[259550]: 2025-10-07 14:56:33.254 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:56:33 compute-0 nova_compute[259550]: 2025-10-07 14:56:33.285 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:56:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:56:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:56:33 compute-0 nova_compute[259550]: 2025-10-07 14:56:33.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:34 compute-0 ceph-mon[74295]: pgmap v2829: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 07 14:56:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 07 14:56:36 compute-0 ceph-mon[74295]: pgmap v2830: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 07 14:56:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:56:37 compute-0 ceph-mon[74295]: pgmap v2831: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:56:37 compute-0 nova_compute[259550]: 2025-10-07 14:56:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:38 compute-0 nova_compute[259550]: 2025-10-07 14:56:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 75 op/s
Oct 07 14:56:40 compute-0 podman[424073]: 2025-10-07 14:56:40.108030604 +0000 UTC m=+0.087579710 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 07 14:56:40 compute-0 podman[424074]: 2025-10-07 14:56:40.112784899 +0000 UTC m=+0.089274823 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 07 14:56:40 compute-0 ceph-mon[74295]: pgmap v2832: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 75 op/s
Oct 07 14:56:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 821 KiB/s rd, 647 KiB/s wr, 42 op/s
Oct 07 14:56:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:42 compute-0 ovn_controller[151684]: 2025-10-07T14:56:42Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:48:40 10.100.0.8
Oct 07 14:56:42 compute-0 ovn_controller[151684]: 2025-10-07T14:56:42Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:48:40 10.100.0.8
Oct 07 14:56:42 compute-0 ceph-mon[74295]: pgmap v2833: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 821 KiB/s rd, 647 KiB/s wr, 42 op/s
Oct 07 14:56:42 compute-0 nova_compute[259550]: 2025-10-07 14:56:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 647 KiB/s wr, 15 op/s
Oct 07 14:56:43 compute-0 nova_compute[259550]: 2025-10-07 14:56:43.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:43 compute-0 ceph-mon[74295]: pgmap v2834: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 647 KiB/s wr, 15 op/s
Oct 07 14:56:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 109 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Oct 07 14:56:45 compute-0 ceph-mon[74295]: pgmap v2835: 305 pgs: 305 active+clean; 109 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Oct 07 14:56:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 115 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:56:47 compute-0 nova_compute[259550]: 2025-10-07 14:56:47.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:48 compute-0 ceph-mon[74295]: pgmap v2836: 305 pgs: 305 active+clean; 115 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 14:56:48 compute-0 nova_compute[259550]: 2025-10-07 14:56:48.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 07 14:56:50 compute-0 ceph-mon[74295]: pgmap v2837: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 07 14:56:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:56:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:52 compute-0 ceph-mon[74295]: pgmap v2838: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:56:52 compute-0 nova_compute[259550]: 2025-10-07 14:56:52.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.675 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.676 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.710 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.835 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.835 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.844 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.845 2 INFO nova.compute.claims [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:56:53 compute-0 nova_compute[259550]: 2025-10-07 14:56:53.985 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:56:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382234774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.444 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.452 2 DEBUG nova.compute.provider_tree [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.482 2 DEBUG nova.scheduler.client.report [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:56:54 compute-0 ceph-mon[74295]: pgmap v2839: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 07 14:56:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1382234774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.527 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.528 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.614 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.615 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.664 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.691 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.819 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.821 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.821 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating image(s)
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.845 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.881 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.914 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:54 compute-0 nova_compute[259550]: 2025-10-07 14:56:54.922 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.014 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.015 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.016 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.016 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.042 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.048 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:56:55 compute-0 nova_compute[259550]: 2025-10-07 14:56:55.280 2 DEBUG nova.policy [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.063 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.129 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.451 2 DEBUG nova.objects.instance [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.487 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.487 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Ensure instance console log exists: /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.488 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.488 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:56:56 compute-0 nova_compute[259550]: 2025-10-07 14:56:56.489 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:56:56 compute-0 ceph-mon[74295]: pgmap v2840: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 07 14:56:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:56:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 155 KiB/s wr, 34 op/s
Oct 07 14:56:57 compute-0 podman[424306]: 2025-10-07 14:56:57.066010486 +0000 UTC m=+0.054577666 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 07 14:56:57 compute-0 podman[424305]: 2025-10-07 14:56:57.065995795 +0000 UTC m=+0.056432605 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:56:57 compute-0 nova_compute[259550]: 2025-10-07 14:56:57.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:57 compute-0 nova_compute[259550]: 2025-10-07 14:56:57.766 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Successfully created port: 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:56:57 compute-0 ceph-mon[74295]: pgmap v2841: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 155 KiB/s wr, 34 op/s
Oct 07 14:56:58 compute-0 nova_compute[259550]: 2025-10-07 14:56:58.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:56:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 817 KiB/s wr, 25 op/s
Oct 07 14:56:58 compute-0 nova_compute[259550]: 2025-10-07 14:56:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:56:58 compute-0 nova_compute[259550]: 2025-10-07 14:56:58.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.037 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.569 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Successfully updated port: 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.608 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.609 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.609 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.744 2 DEBUG nova.compute.manager [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.744 2 DEBUG nova.compute.manager [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.745 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:56:59 compute-0 nova_compute[259550]: 2025-10-07 14:56:59.822 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:57:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.093 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:00 compute-0 ceph-mon[74295]: pgmap v2842: 305 pgs: 305 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 817 KiB/s wr, 25 op/s
Oct 07 14:57:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:57:00 compute-0 nova_compute[259550]: 2025-10-07 14:57:00.933 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.057 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.058 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance network_info: |[{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.059 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.059 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.062 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start _get_guest_xml network_info=[{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.067 2 WARNING nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.072 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.073 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.078 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.078 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.079 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.079 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.084 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:57:01 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389181530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.589 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.611 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:01 compute-0 nova_compute[259550]: 2025-10-07 14:57:01.615 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:57:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344967948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.054 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.056 2 DEBUG nova.virt.libvirt.vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:54Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.057 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.058 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.059 2 DEBUG nova.objects.instance [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.081 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <uuid>b02ddcfd-0f68-4cc3-9c20-24cac0572be8</uuid>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <name>instance-00000094</name>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825</nova:name>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:57:01</nova:creationTime>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <nova:port uuid="791c5fbe-b06e-4554-b2ad-f3f5c1709fc8">
Oct 07 14:57:02 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <system>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="serial">b02ddcfd-0f68-4cc3-9c20-24cac0572be8</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="uuid">b02ddcfd-0f68-4cc3-9c20-24cac0572be8</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </system>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <os>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </os>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <features>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </features>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk">
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config">
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </source>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:57:02 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:9f:5e:cf"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <target dev="tap791c5fbe-b0"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/console.log" append="off"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <video>
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </video>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:57:02 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:57:02 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:57:02 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:57:02 compute-0 nova_compute[259550]: </domain>
Oct 07 14:57:02 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.082 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Preparing to wait for external event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.082 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.083 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.083 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.084 2 DEBUG nova.virt.libvirt.vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:54Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.084 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.085 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.085 2 DEBUG os_vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap791c5fbe-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap791c5fbe-b0, col_values=(('external_ids', {'iface-id': '791c5fbe-b06e-4554-b2ad-f3f5c1709fc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:5e:cf', 'vm-uuid': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:02 compute-0 NetworkManager[44949]: <info>  [1759849022.0930] manager: (tap791c5fbe-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.100 2 INFO os_vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0')
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.227 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.228 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.228 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:9f:5e:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.228 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Using config drive
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.257 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:02 compute-0 ceph-mon[74295]: pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:57:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3389181530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:57:02 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1344967948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.800 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating config drive at /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.806 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqdow3ne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.947 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqdow3ne" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.982 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:02 compute-0 nova_compute[259550]: 2025-10-07 14:57:02.988 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.345 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.346 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.371 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.871 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.883s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.871 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deleting local config drive /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config because it was imported into RBD.
Oct 07 14:57:03 compute-0 kernel: tap791c5fbe-b0: entered promiscuous mode
Oct 07 14:57:03 compute-0 NetworkManager[44949]: <info>  [1759849023.9246] manager: (tap791c5fbe-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:03 compute-0 ovn_controller[151684]: 2025-10-07T14:57:03Z|01612|binding|INFO|Claiming lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for this chassis.
Oct 07 14:57:03 compute-0 ovn_controller[151684]: 2025-10-07T14:57:03Z|01613|binding|INFO|791c5fbe-b06e-4554-b2ad-f3f5c1709fc8: Claiming fa:16:3e:9f:5e:cf 10.100.0.5
Oct 07 14:57:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.936 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:5e:cf 10.100.0.5'], port_security=['fa:16:3e:9f:5e:cf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.937 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 bound to our chassis
Oct 07 14:57:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.938 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2
Oct 07 14:57:03 compute-0 ovn_controller[151684]: 2025-10-07T14:57:03Z|01614|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 ovn-installed in OVS
Oct 07 14:57:03 compute-0 ovn_controller[151684]: 2025-10-07T14:57:03Z|01615|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 up in Southbound
Oct 07 14:57:03 compute-0 nova_compute[259550]: 2025-10-07 14:57:03.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:03 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.956 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd39ac61-5dab-4689-a1f6-9f3db7cfd698]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:03 compute-0 systemd-udevd[424483]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:57:03 compute-0 systemd-machined[214580]: New machine qemu-182-instance-00000094.
Oct 07 14:57:03 compute-0 NetworkManager[44949]: <info>  [1759849023.9768] device (tap791c5fbe-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:57:03 compute-0 NetworkManager[44949]: <info>  [1759849023.9780] device (tap791c5fbe-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:57:03 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.000 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[86abc616-349a-4bde-9f3f-8ebd40168e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.003 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d55f427e-e946-4f2a-ab66-39e457e3e1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.035 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5efb5385-abc4-4040-a423-41d498967870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.054 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd46687-2e20-47ea-9450-1650533248ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424496, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.070 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[badfc6e6-8316-4e90-ac44-2ccdd833e216]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956922, 'tstamp': 956922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424497, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956926, 'tstamp': 956926}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424497, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.076 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.076 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:57:04 compute-0 ceph-mon[74295]: pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.535 2 DEBUG nova.compute.manager [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG nova.compute.manager [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Processing event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:04 compute-0 nova_compute[259550]: 2025-10-07 14:57:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.649 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:57:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.037 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.327 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.328 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.327855, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Started (Lifecycle Event)
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.331 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.335 2 INFO nova.virt.libvirt.driver [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance spawned successfully.
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.336 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.371 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.376 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.378 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.378 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.386 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.442 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.443 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.3286908, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.443 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Paused (Lifecycle Event)
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.467 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.471 2 INFO nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 10.65 seconds to spawn the instance on the hypervisor.
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.472 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.474 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.3306851, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.474 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Resumed (Lifecycle Event)
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.545 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.562 2 INFO nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 11.77 seconds to build instance.
Oct 07 14:57:05 compute-0 nova_compute[259550]: 2025-10-07 14:57:05.587 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.643 2 DEBUG nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.643 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:57:06 compute-0 nova_compute[259550]: 2025-10-07 14:57:06.644 2 WARNING nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received unexpected event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with vm_state active and task_state None.
Oct 07 14:57:06 compute-0 ceph-mon[74295]: pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:57:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:57:07 compute-0 nova_compute[259550]: 2025-10-07 14:57:07.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:07 compute-0 nova_compute[259550]: 2025-10-07 14:57:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:08 compute-0 ceph-mon[74295]: pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.733 2 DEBUG nova.compute.manager [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.734 2 DEBUG nova.compute.manager [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:57:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:08 compute-0 nova_compute[259550]: 2025-10-07 14:57:08.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:10 compute-0 ceph-mon[74295]: pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.262 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.263 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.289 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:10 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:10.652 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.828 2 DEBUG nova.compute.manager [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.829 2 DEBUG nova.compute.manager [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:57:10 compute-0 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:57:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 79 op/s
Oct 07 14:57:11 compute-0 podman[424540]: 2025-10-07 14:57:11.100820396 +0000 UTC m=+0.086524910 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 07 14:57:11 compute-0 podman[424541]: 2025-10-07 14:57:11.136498921 +0000 UTC m=+0.119983637 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 07 14:57:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:11 compute-0 nova_compute[259550]: 2025-10-07 14:57:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:12 compute-0 nova_compute[259550]: 2025-10-07 14:57:12.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:12 compute-0 ceph-mon[74295]: pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 79 op/s
Oct 07 14:57:12 compute-0 nova_compute[259550]: 2025-10-07 14:57:12.271 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:57:12 compute-0 nova_compute[259550]: 2025-10-07 14:57:12.272 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:12 compute-0 nova_compute[259550]: 2025-10-07 14:57:12.295 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:57:13 compute-0 nova_compute[259550]: 2025-10-07 14:57:13.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:14 compute-0 nova_compute[259550]: 2025-10-07 14:57:14.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:14 compute-0 nova_compute[259550]: 2025-10-07 14:57:14.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:57:14 compute-0 ceph-mon[74295]: pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:57:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:57:15 compute-0 nova_compute[259550]: 2025-10-07 14:57:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:16 compute-0 ceph-mon[74295]: pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:57:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.4 KiB/s wr, 71 op/s
Oct 07 14:57:16 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 07 14:57:17 compute-0 nova_compute[259550]: 2025-10-07 14:57:17.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:17 compute-0 ovn_controller[151684]: 2025-10-07T14:57:17Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:5e:cf 10.100.0.5
Oct 07 14:57:17 compute-0 ovn_controller[151684]: 2025-10-07T14:57:17Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:5e:cf 10.100.0.5
Oct 07 14:57:17 compute-0 nova_compute[259550]: 2025-10-07 14:57:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:17 compute-0 nova_compute[259550]: 2025-10-07 14:57:17.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 14:57:18 compute-0 ceph-mon[74295]: pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.4 KiB/s wr, 71 op/s
Oct 07 14:57:18 compute-0 nova_compute[259550]: 2025-10-07 14:57:18.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 189 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Oct 07 14:57:20 compute-0 ceph-mon[74295]: pgmap v2852: 305 pgs: 305 active+clean; 189 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Oct 07 14:57:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.131 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.324 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.326 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:57:21 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621711359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:21 compute-0 nova_compute[259550]: 2025-10-07 14:57:21.882 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.092 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.092 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.096 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.097 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.313 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.314 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3206MB free_disk=59.897308349609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.315 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.315 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:22 compute-0 ceph-mon[74295]: pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 07 14:57:22 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2621711359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.551 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b02ddcfd-0f68-4cc3-9c20-24cac0572be8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:57:22 compute-0 nova_compute[259550]: 2025-10-07 14:57:22.610 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:57:22
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'default.rgw.log', '.rgw.root']
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:57:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:57:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:57:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097252514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.108 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.115 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.131 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.155 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.155 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:57:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:57:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1097252514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:23 compute-0 nova_compute[259550]: 2025-10-07 14:57:23.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.348 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.351 2 INFO nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Terminating instance
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.352 2 DEBUG nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:57:24 compute-0 ceph-mon[74295]: pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:57:24 compute-0 kernel: tap791c5fbe-b0 (unregistering): left promiscuous mode
Oct 07 14:57:24 compute-0 NetworkManager[44949]: <info>  [1759849044.4504] device (tap791c5fbe-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 ovn_controller[151684]: 2025-10-07T14:57:24Z|01616|binding|INFO|Releasing lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 from this chassis (sb_readonly=0)
Oct 07 14:57:24 compute-0 ovn_controller[151684]: 2025-10-07T14:57:24Z|01617|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 down in Southbound
Oct 07 14:57:24 compute-0 ovn_controller[151684]: 2025-10-07T14:57:24Z|01618|binding|INFO|Removing iface tap791c5fbe-b0 ovn-installed in OVS
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.476 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:5e:cf 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.477 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.479 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20629c84-4e92-437f-9238-ad1136108cd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 07 14:57:24 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 13.794s CPU time.
Oct 07 14:57:24 compute-0 systemd-machined[214580]: Machine qemu-182-instance-00000094 terminated.
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.536 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[13573cea-a176-48f5-a8e4-116014577d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f7d61-0d05-4dd2-9739-b8408943d0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[955398a2-64b0-4655-8b1b-76393ccf3e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.586 2 INFO nova.virt.libvirt.driver [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance destroyed successfully.
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.586 2 DEBUG nova.objects.instance [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c128df-9d3f-4869-a447-92bf659ecd01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424648, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.609 2 DEBUG nova.virt.libvirt.vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:57:05Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.611 2 DEBUG nova.network.os_vif_util [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.613 2 DEBUG nova.network.os_vif_util [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.613 2 DEBUG os_vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap791c5fbe-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.616 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[652f1550-7ed3-483e-b923-52b7d845e21c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956922, 'tstamp': 956922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424653, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956926, 'tstamp': 956926}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424653, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.618 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:24 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:57:24 compute-0 nova_compute[259550]: 2025-10-07 14:57:24.624 2 INFO os_vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0')
Oct 07 14:57:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.527 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.528 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:57:25 compute-0 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:57:26 compute-0 ceph-mon[74295]: pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:57:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.677 2 DEBUG nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.678 2 WARNING nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received unexpected event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with vm_state active and task_state deleting.
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.756 2 INFO nova.virt.libvirt.driver [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deleting instance files /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_del
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.757 2 INFO nova.virt.libvirt.driver [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deletion of /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_del complete
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.816 2 INFO nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 3.46 seconds to destroy the instance on the hypervisor.
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.817 2 DEBUG oslo.service.loopingcall [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.817 2 DEBUG nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:57:27 compute-0 nova_compute[259550]: 2025-10-07 14:57:27.818 2 DEBUG nova.network.neutron [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:57:28 compute-0 podman[424673]: 2025-10-07 14:57:28.07397667 +0000 UTC m=+0.058910930 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 14:57:28 compute-0 podman[424674]: 2025-10-07 14:57:28.073969779 +0000 UTC m=+0.058855628 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.718 2 DEBUG nova.network.neutron [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.752 2 INFO nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 0.93 seconds to deallocate network for instance.
Oct 07 14:57:28 compute-0 ceph-mon[74295]: pgmap v2856: 305 pgs: 305 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.820 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.821 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 07 14:57:28 compute-0 nova_compute[259550]: 2025-10-07 14:57:28.908 2 DEBUG oslo_concurrency.processutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.008 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.008 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.253 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:57:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:57:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658601581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.371 2 DEBUG oslo_concurrency.processutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.377 2 DEBUG nova.compute.provider_tree [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.398 2 DEBUG nova.scheduler.client.report [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.423 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.457 2 INFO nova.scheduler.client.report [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance b02ddcfd-0f68-4cc3-9c20-24cac0572be8
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.537 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:29 compute-0 nova_compute[259550]: 2025-10-07 14:57:29.773 2 DEBUG nova.compute.manager [req-f400b7d2-6df9-4c2d-a76a-629a357600d0 req-e399b02c-1dda-4826-b7ee-18142cf9cc47 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-deleted-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:29 compute-0 ceph-mon[74295]: pgmap v2857: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 07 14:57:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1658601581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:30 compute-0 sudo[424733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:30 compute-0 sudo[424733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:30 compute-0 sudo[424733]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:30 compute-0 sudo[424758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:57:30 compute-0 sudo[424758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:30 compute-0 sudo[424758]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:30 compute-0 sudo[424783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:30 compute-0 sudo[424783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:30 compute-0 sudo[424783]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:30 compute-0 sudo[424808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:57:30 compute-0 sudo[424808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.700 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.702 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.704 2 INFO nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Terminating instance
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.705 2 DEBUG nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.861 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.884 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.884 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 14:57:30 compute-0 nova_compute[259550]: 2025-10-07 14:57:30.885 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:57:31 compute-0 kernel: tap6b0ce98f-f4 (unregistering): left promiscuous mode
Oct 07 14:57:31 compute-0 sudo[424808]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:31 compute-0 NetworkManager[44949]: <info>  [1759849051.0679] device (tap6b0ce98f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01619|binding|INFO|Releasing lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 from this chassis (sb_readonly=0)
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01620|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 down in Southbound
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01621|binding|INFO|Removing iface tap6b0ce98f-f4 ovn-installed in OVS
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 21145e5a-9e8c-47ef-a52f-5f4b02122ed5 does not exist
Oct 07 14:57:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev fc8394f9-c185-4100-8d5e-fe80f6a9cc4f does not exist
Oct 07 14:57:31 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8b146528-28b3-4788-82ce-d53c74b1c21d does not exist
Oct 07 14:57:31 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 07 14:57:31 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 17.000s CPU time.
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:57:31 compute-0 systemd-machined[214580]: Machine qemu-181-instance-00000093 terminated.
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:57:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:57:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.195 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.196 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.197 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.198 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2208f2b4-b2e1-4847-a2f1-f819b890a98d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.199 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 namespace which is not needed anymore
Oct 07 14:57:31 compute-0 sudo[424871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:31 compute-0 sudo[424871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:31 compute-0 sudo[424871]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:31 compute-0 sudo[424898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:57:31 compute-0 sudo[424898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:31 compute-0 sudo[424898]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:31 compute-0 kernel: tap6b0ce98f-f4: entered promiscuous mode
Oct 07 14:57:31 compute-0 NetworkManager[44949]: <info>  [1759849051.3269] manager: (tap6b0ce98f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Oct 07 14:57:31 compute-0 systemd-udevd[424868]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01622|binding|INFO|Claiming lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 for this chassis.
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01623|binding|INFO|6b0ce98f-f482-4715-a60b-a5b393b822f1: Claiming fa:16:3e:87:48:40 10.100.0.8
Oct 07 14:57:31 compute-0 kernel: tap6b0ce98f-f4 (unregistering): left promiscuous mode
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 sudo[424936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:31 compute-0 sudo[424936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:31 compute-0 sudo[424936]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : haproxy version is 2.8.14-c23fe91
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : path to executable is /usr/sbin/haproxy
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : Exiting Master process...
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : Exiting Master process...
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [ALERT]    (423728) : Current worker (423731) exited with code 143 (Terminated)
Oct 07 14:57:31 compute-0 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : All workers exited. Exiting... (0)
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01624|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 ovn-installed in OVS
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01625|if_status|INFO|Dropped 2 log messages in last 797 seconds (most recently, 797 seconds ago) due to excessive rate
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01626|if_status|INFO|Not setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 down as sb is readonly
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 systemd[1]: libpod-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope: Deactivated successfully.
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.366 2 INFO nova.virt.libvirt.driver [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance destroyed successfully.
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.367 2 DEBUG nova.objects.instance [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:57:31 compute-0 podman[424941]: 2025-10-07 14:57:31.36891027 +0000 UTC m=+0.067775035 container died a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.369 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:31 compute-0 ovn_controller[151684]: 2025-10-07T14:57:31Z|01627|binding|INFO|Releasing lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 from this chassis (sb_readonly=0)
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.388 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.388 2 DEBUG nova.virt.libvirt.vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:56:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:56:26Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.389 2 DEBUG nova.network.os_vif_util [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.390 2 DEBUG nova.network.os_vif_util [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.390 2 DEBUG os_vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b0ce98f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.402 2 INFO os_vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4')
Oct 07 14:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511-userdata-shm.mount: Deactivated successfully.
Oct 07 14:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-febe76e8dd5c30f47b13b78fd42131538300a58f00e7858de7bede20aa45a62a-merged.mount: Deactivated successfully.
Oct 07 14:57:31 compute-0 sudo[424980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:57:31 compute-0 sudo[424980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:31 compute-0 podman[424941]: 2025-10-07 14:57:31.495221882 +0000 UTC m=+0.194086657 container cleanup a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 14:57:31 compute-0 systemd[1]: libpod-conmon-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope: Deactivated successfully.
Oct 07 14:57:31 compute-0 podman[425037]: 2025-10-07 14:57:31.611111449 +0000 UTC m=+0.091250455 container remove a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0024fb33-38d9-497a-84e9-d964a15ddda4]: (4, ('Tue Oct  7 02:57:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 (a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511)\na5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511\nTue Oct  7 02:57:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 (a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511)\na5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.620 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b26da1-ebab-4e4a-9477-3373bab63876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 kernel: tapde9e106e-f0: left promiscuous mode
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.642 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4573e2a-c144-4de8-8d56-df36a3357538]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32c0a4ac-4ade-47c8-bfbd-60c1c6000db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.681 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e750dcb0-d7ab-4fa1-868a-17118cf085ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f1c559-94b8-49de-9217-0376a7233245]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956900, 'reachable_time': 33371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425075, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.699 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.699 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f053f989-756c-4543-b92a-b2216f5f0a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.700 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis
Oct 07 14:57:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dde9e106e\x2dfa40\x2d4336\x2d9ad7\x2df811682b66d2.mount: Deactivated successfully.
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.701 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0fc214-f517-4213-ab22-11a020d0f678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.703 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.703 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:57:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.704 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5baac45-ba0f-4f8b-b72f-488622644134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:57:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:31 compute-0 podman[425090]: 2025-10-07 14:57:31.87107109 +0000 UTC m=+0.073679972 container create 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.879 2 DEBUG nova.compute.manager [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.879 2 DEBUG nova.compute.manager [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:57:31 compute-0 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:57:31 compute-0 podman[425090]: 2025-10-07 14:57:31.828549074 +0000 UTC m=+0.031157976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:31 compute-0 systemd[1]: Started libpod-conmon-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope.
Oct 07 14:57:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:32 compute-0 podman[425090]: 2025-10-07 14:57:32.046136693 +0000 UTC m=+0.248745605 container init 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:57:32 compute-0 podman[425090]: 2025-10-07 14:57:32.054432672 +0000 UTC m=+0.257041554 container start 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 14:57:32 compute-0 infallible_kowalevski[425107]: 167 167
Oct 07 14:57:32 compute-0 systemd[1]: libpod-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope: Deactivated successfully.
Oct 07 14:57:32 compute-0 podman[425090]: 2025-10-07 14:57:32.067367934 +0000 UTC m=+0.269976816 container attach 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 14:57:32 compute-0 podman[425090]: 2025-10-07 14:57:32.067852077 +0000 UTC m=+0.270460959 container died 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 07 14:57:32 compute-0 ceph-mon[74295]: pgmap v2858: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct 07 14:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e419fc3e7ba72985f34118c48bfcd8ccd9ffca2179d39b3652355d43ac1673-merged.mount: Deactivated successfully.
Oct 07 14:57:32 compute-0 podman[425090]: 2025-10-07 14:57:32.280594317 +0000 UTC m=+0.483203199 container remove 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:57:32 compute-0 systemd[1]: libpod-conmon-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope: Deactivated successfully.
Oct 07 14:57:32 compute-0 podman[425131]: 2025-10-07 14:57:32.492747402 +0000 UTC m=+0.068979587 container create ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 14:57:32 compute-0 podman[425131]: 2025-10-07 14:57:32.450259067 +0000 UTC m=+0.026491282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:32 compute-0 systemd[1]: Started libpod-conmon-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope.
Oct 07 14:57:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:32 compute-0 podman[425131]: 2025-10-07 14:57:32.622661581 +0000 UTC m=+0.198893776 container init ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:57:32 compute-0 podman[425131]: 2025-10-07 14:57:32.62982569 +0000 UTC m=+0.206057875 container start ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 07 14:57:32 compute-0 podman[425131]: 2025-10-07 14:57:32.643777439 +0000 UTC m=+0.220009644 container attach ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.723 2 INFO nova.virt.libvirt.driver [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deleting instance files /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_del
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.725 2 INFO nova.virt.libvirt.driver [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deletion of /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_del complete
Oct 07 14:57:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:57:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:57:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:57:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.812 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.813 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.813 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.853 2 INFO nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 2.15 seconds to destroy the instance on the hypervisor.
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.853 2 DEBUG oslo.service.loopingcall [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.854 2 DEBUG nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:57:32 compute-0 nova_compute[259550]: 2025-10-07 14:57:32.854 2 DEBUG nova.network.neutron [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:57:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 07 14:57:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:57:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:57:33 compute-0 nova_compute[259550]: 2025-10-07 14:57:33.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:33 compute-0 happy_engelbart[425148]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:57:33 compute-0 happy_engelbart[425148]: --> relative data size: 1.0
Oct 07 14:57:33 compute-0 happy_engelbart[425148]: --> All data devices are unavailable
Oct 07 14:57:33 compute-0 systemd[1]: libpod-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Deactivated successfully.
Oct 07 14:57:33 compute-0 systemd[1]: libpod-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Consumed 1.082s CPU time.
Oct 07 14:57:33 compute-0 podman[425131]: 2025-10-07 14:57:33.775267784 +0000 UTC m=+1.351499979 container died ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 14:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431-merged.mount: Deactivated successfully.
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.006 2 DEBUG nova.network.neutron [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.024 2 INFO nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 1.17 seconds to deallocate network for instance.
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.073 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.074 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.120 2 DEBUG oslo_concurrency.processutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.158 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated VIF entry in instance network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.159 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.179 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:57:34 compute-0 ceph-mon[74295]: pgmap v2859: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 07 14:57:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:57:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985743802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.656 2 DEBUG oslo_concurrency.processutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:34 compute-0 podman[425131]: 2025-10-07 14:57:34.658075607 +0000 UTC m=+2.234307792 container remove ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.669 2 DEBUG nova.compute.provider_tree [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.689 2 DEBUG nova.scheduler.client.report [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:57:34 compute-0 sudo[424980]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.723 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.750 2 INFO nova.scheduler.client.report [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b
Oct 07 14:57:34 compute-0 systemd[1]: libpod-conmon-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Deactivated successfully.
Oct 07 14:57:34 compute-0 sudo[425212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:34 compute-0 sudo[425212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:34 compute-0 sudo[425212]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.819 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:34 compute-0 sudo[425237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:57:34 compute-0 sudo[425237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:34 compute-0 sudo[425237]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 16 KiB/s wr, 51 op/s
Oct 07 14:57:34 compute-0 sudo[425262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:34 compute-0 sudo[425262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:34 compute-0 sudo[425262]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.928 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.928 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.929 2 WARNING nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received unexpected event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with vm_state deleted and task_state None.
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-deleted-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.930 2 INFO nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Neutron deleted interface 6b0ce98f-f482-4715-a60b-a5b393b822f1; detaching it from the instance and deleting it from the info cache
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.930 2 DEBUG nova.network.neutron [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 14:57:34 compute-0 nova_compute[259550]: 2025-10-07 14:57:34.931 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Detach interface failed, port_id=6b0ce98f-f482-4715-a60b-a5b393b822f1, reason: Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:57:34 compute-0 sudo[425287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:57:34 compute-0 sudo[425287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.276226237 +0000 UTC m=+0.022201739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.541535118 +0000 UTC m=+0.287510620 container create f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:57:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3985743802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:35 compute-0 systemd[1]: Started libpod-conmon-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope.
Oct 07 14:57:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.870633647 +0000 UTC m=+0.616609149 container init f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.881542876 +0000 UTC m=+0.627518358 container start f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 14:57:35 compute-0 friendly_lamarr[425368]: 167 167
Oct 07 14:57:35 compute-0 systemd[1]: libpod-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope: Deactivated successfully.
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.894024297 +0000 UTC m=+0.639999799 container attach f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 07 14:57:35 compute-0 podman[425352]: 2025-10-07 14:57:35.894377776 +0000 UTC m=+0.640353268 container died f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 14:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68ee51a63dc5020fb555501a4909a41aab83d14eb61a082f2681d8924e5f736-merged.mount: Deactivated successfully.
Oct 07 14:57:36 compute-0 podman[425352]: 2025-10-07 14:57:36.025154307 +0000 UTC m=+0.771129789 container remove f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 14:57:36 compute-0 systemd[1]: libpod-conmon-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope: Deactivated successfully.
Oct 07 14:57:36 compute-0 podman[425393]: 2025-10-07 14:57:36.202380187 +0000 UTC m=+0.055268003 container create 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 14:57:36 compute-0 podman[425393]: 2025-10-07 14:57:36.169036704 +0000 UTC m=+0.021924550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:36 compute-0 systemd[1]: Started libpod-conmon-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope.
Oct 07 14:57:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:36 compute-0 podman[425393]: 2025-10-07 14:57:36.328585317 +0000 UTC m=+0.181473133 container init 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 14:57:36 compute-0 podman[425393]: 2025-10-07 14:57:36.337570035 +0000 UTC m=+0.190457851 container start 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:57:36 compute-0 podman[425393]: 2025-10-07 14:57:36.360097191 +0000 UTC m=+0.212984997 container attach 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:57:36 compute-0 nova_compute[259550]: 2025-10-07 14:57:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:36 compute-0 ceph-mon[74295]: pgmap v2860: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 16 KiB/s wr, 51 op/s
Oct 07 14:57:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:57:37 compute-0 interesting_yalow[425409]: {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     "0": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "devices": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "/dev/loop3"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             ],
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_name": "ceph_lv0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_size": "21470642176",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "name": "ceph_lv0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "tags": {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.crush_device_class": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.encrypted": "0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_id": "0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.vdo": "0"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             },
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "vg_name": "ceph_vg0"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         }
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     ],
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     "1": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "devices": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "/dev/loop4"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             ],
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_name": "ceph_lv1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_size": "21470642176",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "name": "ceph_lv1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "tags": {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.crush_device_class": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.encrypted": "0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_id": "1",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.vdo": "0"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             },
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "vg_name": "ceph_vg1"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         }
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     ],
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     "2": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "devices": [
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "/dev/loop5"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             ],
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_name": "ceph_lv2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_size": "21470642176",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "name": "ceph_lv2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "tags": {
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.cluster_name": "ceph",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.crush_device_class": "",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.encrypted": "0",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osd_id": "2",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:                 "ceph.vdo": "0"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             },
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "type": "block",
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:             "vg_name": "ceph_vg2"
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:         }
Oct 07 14:57:37 compute-0 interesting_yalow[425409]:     ]
Oct 07 14:57:37 compute-0 interesting_yalow[425409]: }
Oct 07 14:57:37 compute-0 systemd[1]: libpod-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope: Deactivated successfully.
Oct 07 14:57:37 compute-0 podman[425393]: 2025-10-07 14:57:37.248670027 +0000 UTC m=+1.101557853 container died 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 14:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4-merged.mount: Deactivated successfully.
Oct 07 14:57:37 compute-0 podman[425393]: 2025-10-07 14:57:37.319022449 +0000 UTC m=+1.171910265 container remove 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:57:37 compute-0 systemd[1]: libpod-conmon-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope: Deactivated successfully.
Oct 07 14:57:37 compute-0 sudo[425287]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:37 compute-0 sudo[425428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:37 compute-0 sudo[425428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:37 compute-0 sudo[425428]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:37 compute-0 sudo[425453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:57:37 compute-0 sudo[425453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:37 compute-0 sudo[425453]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:37 compute-0 sudo[425478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:37 compute-0 sudo[425478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:37 compute-0 sudo[425478]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:37 compute-0 sudo[425503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:57:37 compute-0 sudo[425503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.000078363 +0000 UTC m=+0.049608963 container create fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:57:38 compute-0 systemd[1]: Started libpod-conmon-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope.
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:37.974389194 +0000 UTC m=+0.023919804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.122239896 +0000 UTC m=+0.171770516 container init fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.130267409 +0000 UTC m=+0.179797999 container start fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 14:57:38 compute-0 sharp_elion[425585]: 167 167
Oct 07 14:57:38 compute-0 systemd[1]: libpod-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope: Deactivated successfully.
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.148161502 +0000 UTC m=+0.197692112 container attach fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.148616894 +0000 UTC m=+0.198147494 container died fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 14:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-188799ff27af07fbda60f5e5039199cdef13593cd4055227f4773bdc7d0726d3-merged.mount: Deactivated successfully.
Oct 07 14:57:38 compute-0 podman[425569]: 2025-10-07 14:57:38.201786431 +0000 UTC m=+0.251317021 container remove fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:57:38 compute-0 systemd[1]: libpod-conmon-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope: Deactivated successfully.
Oct 07 14:57:38 compute-0 podman[425611]: 2025-10-07 14:57:38.371116183 +0000 UTC m=+0.047393196 container create ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:57:38 compute-0 systemd[1]: Started libpod-conmon-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope.
Oct 07 14:57:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:57:38 compute-0 podman[425611]: 2025-10-07 14:57:38.348296399 +0000 UTC m=+0.024573452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:57:38 compute-0 podman[425611]: 2025-10-07 14:57:38.447946516 +0000 UTC m=+0.124223529 container init ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:57:38 compute-0 podman[425611]: 2025-10-07 14:57:38.456700857 +0000 UTC m=+0.132977870 container start ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 14:57:38 compute-0 podman[425611]: 2025-10-07 14:57:38.463092327 +0000 UTC m=+0.139369360 container attach ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 14:57:38 compute-0 ceph-mon[74295]: pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:57:38 compute-0 nova_compute[259550]: 2025-10-07 14:57:38.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 14 KiB/s wr, 50 op/s
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]: {
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_id": 2,
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "type": "bluestore"
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     },
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_id": 1,
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "type": "bluestore"
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     },
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_id": 0,
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:         "type": "bluestore"
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]:     }
Oct 07 14:57:39 compute-0 suspicious_pasteur[425628]: }
Oct 07 14:57:39 compute-0 systemd[1]: libpod-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Deactivated successfully.
Oct 07 14:57:39 compute-0 systemd[1]: libpod-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Consumed 1.052s CPU time.
Oct 07 14:57:39 compute-0 podman[425611]: 2025-10-07 14:57:39.505497184 +0000 UTC m=+1.181774207 container died ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:57:39 compute-0 nova_compute[259550]: 2025-10-07 14:57:39.583 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849044.581324, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:57:39 compute-0 nova_compute[259550]: 2025-10-07 14:57:39.583 2 INFO nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Stopped (Lifecycle Event)
Oct 07 14:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00-merged.mount: Deactivated successfully.
Oct 07 14:57:39 compute-0 nova_compute[259550]: 2025-10-07 14:57:39.606 2 DEBUG nova.compute.manager [None req-e4a76e7a-113c-4331-852f-9e7587e475e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:39 compute-0 podman[425611]: 2025-10-07 14:57:39.689150525 +0000 UTC m=+1.365427538 container remove ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:57:39 compute-0 systemd[1]: libpod-conmon-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Deactivated successfully.
Oct 07 14:57:39 compute-0 sudo[425503]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:57:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:57:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:39 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4b33b418-87ad-4034-b70d-df0984820eb9 does not exist
Oct 07 14:57:39 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e02ce49-6cd0-4918-8843-0d3f98af91ee does not exist
Oct 07 14:57:39 compute-0 sudo[425675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:57:39 compute-0 sudo[425675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:39 compute-0 sudo[425675]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:39 compute-0 sudo[425700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:57:39 compute-0 sudo[425700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:57:39 compute-0 sudo[425700]: pam_unix(sudo:session): session closed for user root
Oct 07 14:57:40 compute-0 nova_compute[259550]: 2025-10-07 14:57:40.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:40 compute-0 nova_compute[259550]: 2025-10-07 14:57:40.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:40 compute-0 ceph-mon[74295]: pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 14 KiB/s wr, 50 op/s
Oct 07 14:57:40 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:40 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:57:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Oct 07 14:57:41 compute-0 nova_compute[259550]: 2025-10-07 14:57:41.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:41 compute-0 ceph-mon[74295]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Oct 07 14:57:42 compute-0 podman[425726]: 2025-10-07 14:57:42.080239044 +0000 UTC m=+0.070215789 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:57:42 compute-0 podman[425727]: 2025-10-07 14:57:42.111099531 +0000 UTC m=+0.099977396 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:57:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:57:43 compute-0 nova_compute[259550]: 2025-10-07 14:57:43.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:43 compute-0 ceph-mon[74295]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:57:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:57:45 compute-0 ceph-mon[74295]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:57:46 compute-0 nova_compute[259550]: 2025-10-07 14:57:46.358 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849051.3568137, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:57:46 compute-0 nova_compute[259550]: 2025-10-07 14:57:46.358 2 INFO nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Stopped (Lifecycle Event)
Oct 07 14:57:46 compute-0 nova_compute[259550]: 2025-10-07 14:57:46.381 2 DEBUG nova.compute.manager [None req-2d8155e4-0242-4b3f-a53d-382ef24014fd - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:57:46 compute-0 nova_compute[259550]: 2025-10-07 14:57:46.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:57:47 compute-0 ceph-mon[74295]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct 07 14:57:48 compute-0 nova_compute[259550]: 2025-10-07 14:57:48.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:49 compute-0 ceph-mon[74295]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:51 compute-0 nova_compute[259550]: 2025-10-07 14:57:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:52 compute-0 ceph-mon[74295]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:57:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:53 compute-0 nova_compute[259550]: 2025-10-07 14:57:53.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:53 compute-0 nova_compute[259550]: 2025-10-07 14:57:53.857 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:53 compute-0 nova_compute[259550]: 2025-10-07 14:57:53.858 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.013 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.108 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.109 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.120 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.120 2 INFO nova.compute.claims [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:57:54 compute-0 ceph-mon[74295]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.353 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:57:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263650502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.854 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.862 2 DEBUG nova.compute.provider_tree [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.885 2 DEBUG nova.scheduler.client.report [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:57:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.922 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:54 compute-0 nova_compute[259550]: 2025-10-07 14:57:54.923 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.030 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.030 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.124 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.160 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.254 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.255 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.256 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating image(s)
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.282 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.311 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.336 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.340 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4263650502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.382 2 DEBUG nova.policy [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.428 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.429 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.430 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.430 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.454 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:57:55 compute-0 nova_compute[259550]: 2025-10-07 14:57:55.458 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5dbed140-a0ed-4dbb-b782-da386ad68471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:57:56 compute-0 nova_compute[259550]: 2025-10-07 14:57:56.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:56 compute-0 ceph-mon[74295]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:57:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:57 compute-0 ceph-mon[74295]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct 07 14:57:58 compute-0 nova_compute[259550]: 2025-10-07 14:57:58.188 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5dbed140-a0ed-4dbb-b782-da386ad68471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:57:58 compute-0 nova_compute[259550]: 2025-10-07 14:57:58.261 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:57:58 compute-0 nova_compute[259550]: 2025-10-07 14:57:58.477 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Successfully created port: 6d023b7b-1ffe-467f-b731-8b53fc063b54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:57:58 compute-0 nova_compute[259550]: 2025-10-07 14:57:58.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:57:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 58 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 07 14:57:59 compute-0 podman[425939]: 2025-10-07 14:57:59.079313383 +0000 UTC m=+0.065569596 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:57:59 compute-0 podman[425940]: 2025-10-07 14:57:59.082880368 +0000 UTC m=+0.066090770 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.129 2 DEBUG nova.objects.instance [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.190 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.191 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Ensure instance console log exists: /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:57:59 compute-0 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:00 compute-0 ceph-mon[74295]: pgmap v2872: 305 pgs: 305 active+clean; 58 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 07 14:58:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.154 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Successfully updated port: 6d023b7b-1ffe-467f-b731-8b53fc063b54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.179 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.179 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.180 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.274 2 DEBUG nova.compute.manager [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.275 2 DEBUG nova.compute.manager [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.275 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.383 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:58:01 compute-0 nova_compute[259550]: 2025-10-07 14:58:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.492 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance network_info: |[{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.607 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.610 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start _get_guest_xml network_info=[{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.613 2 WARNING nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.618 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.619 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.624 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.628 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:58:02 compute-0 nova_compute[259550]: 2025-10-07 14:58:02.630 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:02 compute-0 ceph-mon[74295]: pgmap v2873: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 07 14:58:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 07 14:58:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:58:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590169032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.131 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.161 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.166 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:58:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218428295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.673 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.675 2 DEBUG nova.virt.libvirt.vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:57:55Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.676 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.678 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.680 2 DEBUG nova.objects.instance [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.700 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <uuid>5dbed140-a0ed-4dbb-b782-da386ad68471</uuid>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <name>instance-00000095</name>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973</nova:name>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:58:02</nova:creationTime>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <nova:port uuid="6d023b7b-1ffe-467f-b731-8b53fc063b54">
Oct 07 14:58:03 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <system>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="serial">5dbed140-a0ed-4dbb-b782-da386ad68471</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="uuid">5dbed140-a0ed-4dbb-b782-da386ad68471</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </system>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <os>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </os>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <features>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </features>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5dbed140-a0ed-4dbb-b782-da386ad68471_disk">
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config">
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </source>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:58:03 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b5:43:f9"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <target dev="tap6d023b7b-1f"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/console.log" append="off"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <video>
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </video>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:58:03 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:58:03 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:58:03 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:58:03 compute-0 nova_compute[259550]: </domain>
Oct 07 14:58:03 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.702 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Preparing to wait for external event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.704 2 DEBUG nova.virt.libvirt.vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:57:55Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.704 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.705 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.705 2 DEBUG os_vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d023b7b-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d023b7b-1f, col_values=(('external_ids', {'iface-id': '6d023b7b-1ffe-467f-b731-8b53fc063b54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:43:f9', 'vm-uuid': '5dbed140-a0ed-4dbb-b782-da386ad68471'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:03 compute-0 NetworkManager[44949]: <info>  [1759849083.7140] manager: (tap6d023b7b-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.723 2 INFO os_vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f')
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.794 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.795 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:58:03 compute-0 ceph-mon[74295]: pgmap v2874: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.795 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:b5:43:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:58:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/590169032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2218428295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.796 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Using config drive
Oct 07 14:58:03 compute-0 nova_compute[259550]: 2025-10-07 14:58:03.879 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.376 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.376 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.398 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.480 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating config drive at /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.486 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza97nzpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.629 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza97nzpl" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.658 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.663 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:04 compute-0 nova_compute[259550]: 2025-10-07 14:58:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:06 compute-0 ceph-mon[74295]: pgmap v2875: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.156 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.158 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deleting local config drive /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config because it was imported into RBD.
Oct 07 14:58:06 compute-0 kernel: tap6d023b7b-1f: entered promiscuous mode
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.2217] manager: (tap6d023b7b-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Oct 07 14:58:06 compute-0 ovn_controller[151684]: 2025-10-07T14:58:06Z|01628|binding|INFO|Claiming lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 for this chassis.
Oct 07 14:58:06 compute-0 ovn_controller[151684]: 2025-10-07T14:58:06Z|01629|binding|INFO|6d023b7b-1ffe-467f-b731-8b53fc063b54: Claiming fa:16:3e:b5:43:f9 10.100.0.7
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.239 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:43:f9 10.100.0.7'], port_security=['fa:16:3e:b5:43:f9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5dbed140-a0ed-4dbb-b782-da386ad68471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21c14362-d44c-45e9-a632-669517d5fa0c ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6d023b7b-1ffe-467f-b731-8b53fc063b54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.241 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6d023b7b-1ffe-467f-b731-8b53fc063b54 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a bound to our chassis
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.242 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.259 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62dadfbb-5b0f-48a8-bbf4-b2c4508e494a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.260 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap249c8beb-31 in ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:58:06 compute-0 systemd-udevd[426132]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.262 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap249c8beb-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.262 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92cad7fb-54a4-4264-b0fc-dd09a1ecdc21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 systemd-machined[214580]: New machine qemu-183-instance-00000095.
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e7d7d4-8447-4351-a23c-5c0223e4b812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.276 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[95cc23fb-78b5-4c03-af6f-f55f04fa6144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.2802] device (tap6d023b7b-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.2810] device (tap6d023b7b-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[128586e2-bc08-418b-b9c4-f88affacbefa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ovn_controller[151684]: 2025-10-07T14:58:06Z|01630|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 ovn-installed in OVS
Oct 07 14:58:06 compute-0 ovn_controller[151684]: 2025-10-07T14:58:06Z|01631|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 up in Southbound
Oct 07 14:58:06 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.325 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d33da4e7-17ce-4ad6-9306-f55151da28d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.330 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f0640-95a1-43ef-ad3d-5fa770ef493a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.3317] manager: (tap249c8beb-30): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.366 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[544d6ece-3f49-4904-bb58-e57fecebbb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.369 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8836d4-da90-4419-9f20-397cfc91d641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.3985] device (tap249c8beb-30): carrier: link connected
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.406 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[528985a8-2fb3-4106-83e3-885be617cc7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81776116-a964-49a0-ac47-a38ec7b421af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426165, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cd46d4-998d-4d35-9239-ee09c09fcc9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:fe18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 966995, 'tstamp': 966995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426166, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e91e5b60-e6ee-437b-9975-a3c012ab1cae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 426167, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1f8b74-670d-4ca5-b523-2e04b2be8789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.557 2 DEBUG nova.compute.manager [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG nova.compute.manager [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Processing event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.582 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2938115d-9e73-4aeb-bf3c-bdb8c93a92ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.584 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.584 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.585 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 NetworkManager[44949]: <info>  [1759849086.5872] manager: (tap249c8beb-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct 07 14:58:06 compute-0 kernel: tap249c8beb-30: entered promiscuous mode
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.592 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:06 compute-0 ovn_controller[151684]: 2025-10-07T14:58:06Z|01632|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.609 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.610 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11977b1f-949c-401f-9a9f-3dfc0d893563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.611 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.612 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'env', 'PROCESS_TAG=haproxy-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:58:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.680 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:58:06 compute-0 nova_compute[259550]: 2025-10-07 14:58:06.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:07 compute-0 podman[426217]: 2025-10-07 14:58:06.949129667 +0000 UTC m=+0.022071985 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:58:07 compute-0 podman[426217]: 2025-10-07 14:58:07.540102977 +0000 UTC m=+0.613045305 container create e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:58:07 compute-0 systemd[1]: Started libpod-conmon-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope.
Oct 07 14:58:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e09a94c0c4d5c30406c651a2371f59d3e0389c901886c205db67c6aa86d5a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:07 compute-0 podman[426217]: 2025-10-07 14:58:07.716763032 +0000 UTC m=+0.789705330 container init e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 14:58:07 compute-0 podman[426217]: 2025-10-07 14:58:07.725097673 +0000 UTC m=+0.798039981 container start e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 14:58:07 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : New worker (426263) forked
Oct 07 14:58:07 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : Loading success.
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.837 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.838 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8368425, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.838 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Started (Lifecycle Event)
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.844 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.849 2 INFO nova.virt.libvirt.driver [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance spawned successfully.
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.850 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.857 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:58:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:07.866 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.870 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.871 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.872 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.872 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.873 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.873 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.903 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.904 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8370867, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.904 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Paused (Lifecycle Event)
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.931 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.934 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8430917, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.935 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Resumed (Lifecycle Event)
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.941 2 INFO nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 12.69 seconds to spawn the instance on the hypervisor.
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.942 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.953 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.956 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.981 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:58:07 compute-0 nova_compute[259550]: 2025-10-07 14:58:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.012 2 INFO nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 13.93 seconds to build instance.
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.093 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:08 compute-0 ceph-mon[74295]: pgmap v2876: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.663 2 DEBUG nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.665 2 DEBUG nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.665 2 WARNING nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received unexpected event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with vm_state active and task_state None.
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 07 14:58:08 compute-0 nova_compute[259550]: 2025-10-07 14:58:08.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:09 compute-0 ceph-mon[74295]: pgmap v2877: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 07 14:58:09 compute-0 nova_compute[259550]: 2025-10-07 14:58:09.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 702 KiB/s wr, 45 op/s
Oct 07 14:58:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:11 compute-0 ceph-mon[74295]: pgmap v2878: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 702 KiB/s wr, 45 op/s
Oct 07 14:58:12 compute-0 ovn_controller[151684]: 2025-10-07T14:58:12Z|01633|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct 07 14:58:12 compute-0 NetworkManager[44949]: <info>  [1759849092.6828] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct 07 14:58:12 compute-0 NetworkManager[44949]: <info>  [1759849092.6842] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct 07 14:58:12 compute-0 nova_compute[259550]: 2025-10-07 14:58:12.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:12 compute-0 ovn_controller[151684]: 2025-10-07T14:58:12Z|01634|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct 07 14:58:12 compute-0 nova_compute[259550]: 2025-10-07 14:58:12.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:12 compute-0 nova_compute[259550]: 2025-10-07 14:58:12.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 12 KiB/s wr, 42 op/s
Oct 07 14:58:13 compute-0 podman[426273]: 2025-10-07 14:58:13.062175489 +0000 UTC m=+0.052108150 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 07 14:58:13 compute-0 podman[426274]: 2025-10-07 14:58:13.101653453 +0000 UTC m=+0.089512740 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.290 2 DEBUG nova.compute.manager [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.291 2 DEBUG nova.compute.manager [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.292 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.293 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.294 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:13 compute-0 nova_compute[259550]: 2025-10-07 14:58:13.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:58:14 compute-0 ceph-mon[74295]: pgmap v2879: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 12 KiB/s wr, 42 op/s
Oct 07 14:58:14 compute-0 nova_compute[259550]: 2025-10-07 14:58:14.789 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:58:14 compute-0 nova_compute[259550]: 2025-10-07 14:58:14.790 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:14 compute-0 nova_compute[259550]: 2025-10-07 14:58:14.823 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 14:58:16 compute-0 ceph-mon[74295]: pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 14:58:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:16.868 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:58:16 compute-0 nova_compute[259550]: 2025-10-07 14:58:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:18 compute-0 ceph-mon[74295]: pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:58:18 compute-0 nova_compute[259550]: 2025-10-07 14:58:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:18 compute-0 nova_compute[259550]: 2025-10-07 14:58:18.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:58:20 compute-0 ceph-mon[74295]: pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 14:58:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 822 KiB/s wr, 74 op/s
Oct 07 14:58:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:22 compute-0 ceph-mon[74295]: pgmap v2883: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 822 KiB/s wr, 74 op/s
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:58:22
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', '.rgw.root']
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:58:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 810 KiB/s wr, 46 op/s
Oct 07 14:58:22 compute-0 nova_compute[259550]: 2025-10-07 14:58:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.110 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.110 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:58:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:58:23 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:58:23 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179381753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.602 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.708 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:23 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4179381753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.944 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.95820236206055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:23 compute-0 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5dbed140-a0ed-4dbb-b782-da386ad68471 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.034 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.035 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.073 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:24 compute-0 ovn_controller[151684]: 2025-10-07T14:58:24Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:43:f9 10.100.0.7
Oct 07 14:58:24 compute-0 ovn_controller[151684]: 2025-10-07T14:58:24Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:43:f9 10.100.0.7
Oct 07 14:58:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:58:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520959172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.592 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.600 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.618 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.644 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:58:24 compute-0 nova_compute[259550]: 2025-10-07 14:58:24.644 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct 07 14:58:25 compute-0 ceph-mon[74295]: pgmap v2884: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 810 KiB/s wr, 46 op/s
Oct 07 14:58:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2520959172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:26 compute-0 ceph-mon[74295]: pgmap v2885: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct 07 14:58:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 07 14:58:27 compute-0 nova_compute[259550]: 2025-10-07 14:58:27.641 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:27 compute-0 nova_compute[259550]: 2025-10-07 14:58:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:27 compute-0 nova_compute[259550]: 2025-10-07 14:58:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:58:28 compute-0 nova_compute[259550]: 2025-10-07 14:58:28.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:58:28 compute-0 nova_compute[259550]: 2025-10-07 14:58:28.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:58:28 compute-0 ceph-mon[74295]: pgmap v2886: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 07 14:58:28 compute-0 nova_compute[259550]: 2025-10-07 14:58:28.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:28 compute-0 nova_compute[259550]: 2025-10-07 14:58:28.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 07 14:58:30 compute-0 podman[426362]: 2025-10-07 14:58:30.062961834 +0000 UTC m=+0.057106323 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:58:30 compute-0 podman[426363]: 2025-10-07 14:58:30.062989364 +0000 UTC m=+0.051353040 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:58:30 compute-0 ceph-mon[74295]: pgmap v2887: 305 pgs: 305 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 07 14:58:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:58:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:32 compute-0 ceph-mon[74295]: pgmap v2888: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 14:58:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:58:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:58:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:58:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:58:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct 07 14:58:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:58:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:58:33 compute-0 nova_compute[259550]: 2025-10-07 14:58:33.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:33 compute-0 nova_compute[259550]: 2025-10-07 14:58:33.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.337 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.338 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.356 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.430 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.431 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.440 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.441 2 INFO nova.compute.claims [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:58:34 compute-0 ceph-mon[74295]: pgmap v2889: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct 07 14:58:34 compute-0 nova_compute[259550]: 2025-10-07 14:58:34.569 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct 07 14:58:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:58:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828718441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.079 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.087 2 DEBUG nova.compute.provider_tree [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.113 2 DEBUG nova.scheduler.client.report [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.149 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.150 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.211 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.211 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.234 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.263 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.358 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.360 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.360 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating image(s)
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.387 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.409 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.430 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.434 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.509 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.510 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.511 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.511 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1828718441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.555 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:35 compute-0 nova_compute[259550]: 2025-10-07 14:58:35.560 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 27840587-4b28-416e-a84d-176918007fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:36 compute-0 nova_compute[259550]: 2025-10-07 14:58:36.359 2 DEBUG nova.policy [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:58:36 compute-0 ceph-mon[74295]: pgmap v2890: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct 07 14:58:36 compute-0 nova_compute[259550]: 2025-10-07 14:58:36.571 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 27840587-4b28-416e-a84d-176918007fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:36 compute-0 nova_compute[259550]: 2025-10-07 14:58:36.644 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 27840587-4b28-416e-a84d-176918007fb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:58:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 177 KiB/s wr, 32 op/s
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.056 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Successfully created port: 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.339 2 DEBUG nova.objects.instance [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.361 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.361 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Ensure instance console log exists: /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.362 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.362 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:37 compute-0 nova_compute[259550]: 2025-10-07 14:58:37.363 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:38 compute-0 ceph-mon[74295]: pgmap v2891: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 177 KiB/s wr, 32 op/s
Oct 07 14:58:38 compute-0 nova_compute[259550]: 2025-10-07 14:58:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:38 compute-0 nova_compute[259550]: 2025-10-07 14:58:38.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.3 MiB/s wr, 28 op/s
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.300 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Successfully updated port: 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.324 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.325 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.325 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.445 2 DEBUG nova.compute.manager [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.446 2 DEBUG nova.compute.manager [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing instance network info cache due to event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:58:39 compute-0 nova_compute[259550]: 2025-10-07 14:58:39.446 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:39 compute-0 ceph-mon[74295]: pgmap v2892: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.3 MiB/s wr, 28 op/s
Oct 07 14:58:40 compute-0 sudo[426591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:40 compute-0 sudo[426591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426591]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 sudo[426616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:58:40 compute-0 sudo[426616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426616]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 sudo[426641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:40 compute-0 sudo[426641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426641]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 sudo[426666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:58:40 compute-0 sudo[426666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 nova_compute[259550]: 2025-10-07 14:58:40.291 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:58:40 compute-0 sudo[426666]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:40 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 52e41822-2fe5-41fd-beec-ee57505e7aed does not exist
Oct 07 14:58:40 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 99dfcdc1-1df7-4ada-828f-49350d41ec63 does not exist
Oct 07 14:58:40 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b91a160a-1170-4233-931e-3badd4382d30 does not exist
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:58:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:58:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:58:40 compute-0 sudo[426721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:40 compute-0 sudo[426721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426721]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 sudo[426746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:58:40 compute-0 sudo[426746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426746]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:58:40 compute-0 sudo[426771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:40 compute-0 sudo[426771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:40 compute-0 sudo[426771]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:41 compute-0 sudo[426796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:58:41 compute-0 sudo[426796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.360580044 +0000 UTC m=+0.039650750 container create eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:58:41 compute-0 systemd[1]: Started libpod-conmon-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope.
Oct 07 14:58:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.342250589 +0000 UTC m=+0.021321315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.456445171 +0000 UTC m=+0.135515897 container init eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.465164782 +0000 UTC m=+0.144235488 container start eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:58:41 compute-0 tender_lamarr[426878]: 167 167
Oct 07 14:58:41 compute-0 systemd[1]: libpod-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope: Deactivated successfully.
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.477235861 +0000 UTC m=+0.156306577 container attach eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.477878559 +0000 UTC m=+0.156949265 container died eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f49412782a21f12ec51e1efb0361f64790c4a629d2725e61593700b85952ef-merged.mount: Deactivated successfully.
Oct 07 14:58:41 compute-0 podman[426862]: 2025-10-07 14:58:41.548138068 +0000 UTC m=+0.227208784 container remove eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:58:41 compute-0 systemd[1]: libpod-conmon-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope: Deactivated successfully.
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.593 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.626 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.627 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance network_info: |[{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.627 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.628 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.631 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start _get_guest_xml network_info=[{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.637 2 WARNING nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.642 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.643 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.646 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.650 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:58:41 compute-0 nova_compute[259550]: 2025-10-07 14:58:41.652 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:41 compute-0 podman[426902]: 2025-10-07 14:58:41.740780327 +0000 UTC m=+0.059369223 container create d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:58:41 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:41 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:58:41 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:58:41 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:58:41 compute-0 ceph-mon[74295]: pgmap v2893: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 07 14:58:41 compute-0 systemd[1]: Started libpod-conmon-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope.
Oct 07 14:58:41 compute-0 podman[426902]: 2025-10-07 14:58:41.713921665 +0000 UTC m=+0.032510591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:41 compute-0 podman[426902]: 2025-10-07 14:58:41.844539002 +0000 UTC m=+0.163127898 container init d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 14:58:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:41 compute-0 podman[426902]: 2025-10-07 14:58:41.853175411 +0000 UTC m=+0.171764317 container start d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 07 14:58:41 compute-0 podman[426902]: 2025-10-07 14:58:41.856827987 +0000 UTC m=+0.175416903 container attach d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 14:58:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:58:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075044755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.171 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.199 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.204 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:58:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055149411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.653 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.655 2 DEBUG nova.virt.libvirt.vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:58:35Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.656 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.657 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.658 2 DEBUG nova.objects.instance [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.675 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <uuid>27840587-4b28-416e-a84d-176918007fb6</uuid>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <name>instance-00000096</name>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404</nova:name>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:58:41</nova:creationTime>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <nova:port uuid="345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4">
Oct 07 14:58:42 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <system>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="serial">27840587-4b28-416e-a84d-176918007fb6</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="uuid">27840587-4b28-416e-a84d-176918007fb6</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </system>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <os>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </os>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <features>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </features>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/27840587-4b28-416e-a84d-176918007fb6_disk">
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </source>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/27840587-4b28-416e-a84d-176918007fb6_disk.config">
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </source>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:58:42 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:dc:6f:6c"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <target dev="tap345470b2-e0"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/console.log" append="off"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <video>
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </video>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:58:42 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:58:42 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:58:42 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:58:42 compute-0 nova_compute[259550]: </domain>
Oct 07 14:58:42 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.676 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Preparing to wait for external event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.676 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG nova.virt.libvirt.vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:58:35Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.678 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.679 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.679 2 DEBUG os_vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap345470b2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap345470b2-e0, col_values=(('external_ids', {'iface-id': '345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:6f:6c', 'vm-uuid': '27840587-4b28-416e-a84d-176918007fb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:42 compute-0 NetworkManager[44949]: <info>  [1759849122.7298] manager: (tap345470b2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.737 2 INFO os_vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0')
Oct 07 14:58:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4075044755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4055149411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.785 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.785 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.786 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:dc:6f:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.786 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Using config drive
Oct 07 14:58:42 compute-0 nova_compute[259550]: 2025-10-07 14:58:42.808 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:42 compute-0 vigilant_hofstadter[426919]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:58:42 compute-0 vigilant_hofstadter[426919]: --> relative data size: 1.0
Oct 07 14:58:42 compute-0 vigilant_hofstadter[426919]: --> All data devices are unavailable
Oct 07 14:58:43 compute-0 systemd[1]: libpod-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Deactivated successfully.
Oct 07 14:58:43 compute-0 podman[426902]: 2025-10-07 14:58:43.016782715 +0000 UTC m=+1.335371621 container died d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:58:43 compute-0 systemd[1]: libpod-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Consumed 1.055s CPU time.
Oct 07 14:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb-merged.mount: Deactivated successfully.
Oct 07 14:58:43 compute-0 podman[426902]: 2025-10-07 14:58:43.093763933 +0000 UTC m=+1.412352829 container remove d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:58:43 compute-0 systemd[1]: libpod-conmon-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Deactivated successfully.
Oct 07 14:58:43 compute-0 sudo[426796]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:43 compute-0 sudo[427055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:43 compute-0 sudo[427055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:43 compute-0 sudo[427055]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:43 compute-0 podman[427043]: 2025-10-07 14:58:43.224722938 +0000 UTC m=+0.081373815 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 07 14:58:43 compute-0 podman[427044]: 2025-10-07 14:58:43.235392281 +0000 UTC m=+0.092038277 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.271 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating config drive at /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.276 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4_huy2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:43 compute-0 sudo[427108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:58:43 compute-0 sudo[427108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:43 compute-0 sudo[427108]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:43 compute-0 sudo[427134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:43 compute-0 sudo[427134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:43 compute-0 sudo[427134]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:43 compute-0 sudo[427161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:58:43 compute-0 sudo[427161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.416 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4_huy2s" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.468 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.472 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config 27840587-4b28-416e-a84d-176918007fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.511 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updated VIF entry in instance network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.512 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.535 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:43 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.665 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config 27840587-4b28-416e-a84d-176918007fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.667 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deleting local config drive /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config because it was imported into RBD.
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:43 compute-0 kernel: tap345470b2-e0: entered promiscuous mode
Oct 07 14:58:43 compute-0 NetworkManager[44949]: <info>  [1759849123.7234] manager: (tap345470b2-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Oct 07 14:58:43 compute-0 ovn_controller[151684]: 2025-10-07T14:58:43Z|01635|binding|INFO|Claiming lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for this chassis.
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:43 compute-0 ovn_controller[151684]: 2025-10-07T14:58:43Z|01636|binding|INFO|345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4: Claiming fa:16:3e:dc:6f:6c 10.100.0.13
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.737 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:6f:6c 10.100.0.13'], port_security=['fa:16:3e:dc:6f:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '27840587-4b28-416e-a84d-176918007fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.738 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a bound to our chassis
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.739 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct 07 14:58:43 compute-0 ovn_controller[151684]: 2025-10-07T14:58:43Z|01637|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 ovn-installed in OVS
Oct 07 14:58:43 compute-0 ovn_controller[151684]: 2025-10-07T14:58:43Z|01638|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 up in Southbound
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1238e0bf-6454-4c9e-bbdd-e460b2975c17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 systemd-udevd[427289]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:58:43 compute-0 systemd-machined[214580]: New machine qemu-184-instance-00000096.
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.773186053 +0000 UTC m=+0.067303312 container create 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 14:58:43 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Oct 07 14:58:43 compute-0 NetworkManager[44949]: <info>  [1759849123.7780] device (tap345470b2-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:58:43 compute-0 NetworkManager[44949]: <info>  [1759849123.7801] device (tap345470b2-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:58:43 compute-0 ceph-mon[74295]: pgmap v2894: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56bf1dcb-4b36-4a33-863d-7536b9c3f461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.804 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a256198-fc3d-4e43-adba-11688287d745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 systemd[1]: Started libpod-conmon-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope.
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6efa1990-7ef9-43b3-9e7e-93105fb3a53e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.753444261 +0000 UTC m=+0.047561550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.852 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e950d6-c117-470e-85fe-62c827c70fce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427306, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.859725184 +0000 UTC m=+0.153842443 container init 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.868538787 +0000 UTC m=+0.162656046 container start 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.867 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9abe13fa-5b25-4dbf-90d0-86681466af15]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967010, 'tstamp': 967010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427308, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967014, 'tstamp': 967014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427308, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.870 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:58:43 compute-0 nova_compute[259550]: 2025-10-07 14:58:43.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:58:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.874 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.87505768 +0000 UTC m=+0.169175019 container attach 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 14:58:43 compute-0 quizzical_borg[427298]: 167 167
Oct 07 14:58:43 compute-0 systemd[1]: libpod-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope: Deactivated successfully.
Oct 07 14:58:43 compute-0 conmon[427298]: conmon 774f6d9cef435ed253c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope/container/memory.events
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.879439436 +0000 UTC m=+0.173556695 container died 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 14:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3a7fa9ad6ebb397fcdfcb7d355630e82e8430ce3e21621f1f42cbf34f919fe-merged.mount: Deactivated successfully.
Oct 07 14:58:43 compute-0 podman[427266]: 2025-10-07 14:58:43.946667825 +0000 UTC m=+0.240785084 container remove 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 14:58:43 compute-0 systemd[1]: libpod-conmon-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope: Deactivated successfully.
Oct 07 14:58:44 compute-0 podman[427331]: 2025-10-07 14:58:44.134763943 +0000 UTC m=+0.053169569 container create ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:58:44 compute-0 systemd[1]: Started libpod-conmon-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope.
Oct 07 14:58:44 compute-0 podman[427331]: 2025-10-07 14:58:44.105474977 +0000 UTC m=+0.023880623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:44 compute-0 podman[427331]: 2025-10-07 14:58:44.313828911 +0000 UTC m=+0.232234537 container init ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 14:58:44 compute-0 podman[427331]: 2025-10-07 14:58:44.321187977 +0000 UTC m=+0.239593603 container start ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:58:44 compute-0 podman[427331]: 2025-10-07 14:58:44.32662896 +0000 UTC m=+0.245034606 container attach ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.428 2 DEBUG nova.compute.manager [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.430 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.430 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.431 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.431 2 DEBUG nova.compute.manager [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Processing event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.810 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.8101046, 27840587-4b28-416e-a84d-176918007fb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.812 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Started (Lifecycle Event)
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.814 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.819 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.823 2 INFO nova.virt.libvirt.driver [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance spawned successfully.
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.823 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.846 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.846 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.847 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.847 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.848 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.848 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.863 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.864 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.8104093, 27840587-4b28-416e-a84d-176918007fb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.864 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Paused (Lifecycle Event)
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.894 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.899 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.816494, 27840587-4b28-416e-a84d-176918007fb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.899 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Resumed (Lifecycle Event)
Oct 07 14:58:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.917 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.925 2 INFO nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 9.57 seconds to spawn the instance on the hypervisor.
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.925 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:58:44 compute-0 nova_compute[259550]: 2025-10-07 14:58:44.984 2 INFO nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 10.58 seconds to build instance.
Oct 07 14:58:45 compute-0 nova_compute[259550]: 2025-10-07 14:58:45.002 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]: {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     "0": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "devices": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "/dev/loop3"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             ],
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_name": "ceph_lv0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_size": "21470642176",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "name": "ceph_lv0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "tags": {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_name": "ceph",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.crush_device_class": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.encrypted": "0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_id": "0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.vdo": "0"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             },
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "vg_name": "ceph_vg0"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         }
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     ],
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     "1": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "devices": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "/dev/loop4"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             ],
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_name": "ceph_lv1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_size": "21470642176",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "name": "ceph_lv1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "tags": {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_name": "ceph",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.crush_device_class": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.encrypted": "0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_id": "1",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.vdo": "0"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             },
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "vg_name": "ceph_vg1"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         }
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     ],
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     "2": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "devices": [
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "/dev/loop5"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             ],
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_name": "ceph_lv2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_size": "21470642176",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "name": "ceph_lv2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "tags": {
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.cluster_name": "ceph",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.crush_device_class": "",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.encrypted": "0",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osd_id": "2",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:                 "ceph.vdo": "0"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             },
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "type": "block",
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:             "vg_name": "ceph_vg2"
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:         }
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]:     ]
Oct 07 14:58:45 compute-0 priceless_wilbur[427381]: }
Oct 07 14:58:45 compute-0 systemd[1]: libpod-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope: Deactivated successfully.
Oct 07 14:58:45 compute-0 conmon[427381]: conmon ad4259bf764e87834929 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope/container/memory.events
Oct 07 14:58:45 compute-0 podman[427331]: 2025-10-07 14:58:45.180688903 +0000 UTC m=+1.099094529 container died ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 14:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d-merged.mount: Deactivated successfully.
Oct 07 14:58:45 compute-0 podman[427331]: 2025-10-07 14:58:45.545092217 +0000 UTC m=+1.463497843 container remove ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:58:45 compute-0 sudo[427161]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:45 compute-0 systemd[1]: libpod-conmon-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope: Deactivated successfully.
Oct 07 14:58:45 compute-0 sudo[427413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:45 compute-0 sudo[427413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:45 compute-0 sudo[427413]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:45 compute-0 sudo[427438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:58:45 compute-0 sudo[427438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:45 compute-0 sudo[427438]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:45 compute-0 sudo[427463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:45 compute-0 sudo[427463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:45 compute-0 sudo[427463]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:45 compute-0 sudo[427488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:58:45 compute-0 sudo[427488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:45 compute-0 ceph-mon[74295]: pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.165032023 +0000 UTC m=+0.023146913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.337041286 +0000 UTC m=+0.195156176 container create a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:58:46 compute-0 systemd[1]: Started libpod-conmon-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope.
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.581 2 DEBUG nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:58:46 compute-0 nova_compute[259550]: 2025-10-07 14:58:46.584 2 WARNING nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received unexpected event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with vm_state active and task_state None.
Oct 07 14:58:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.663238968 +0000 UTC m=+0.521353828 container init a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.677085335 +0000 UTC m=+0.535200195 container start a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 14:58:46 compute-0 eager_kalam[427568]: 167 167
Oct 07 14:58:46 compute-0 systemd[1]: libpod-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope: Deactivated successfully.
Oct 07 14:58:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.857262144 +0000 UTC m=+0.715377054 container attach a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 14:58:46 compute-0 podman[427552]: 2025-10-07 14:58:46.85790044 +0000 UTC m=+0.716015310 container died a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:58:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 07 14:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b45993188fc29c8052779cf74b67ce92b348422d1563e704de348436e9b9fa7-merged.mount: Deactivated successfully.
Oct 07 14:58:47 compute-0 podman[427552]: 2025-10-07 14:58:47.099750101 +0000 UTC m=+0.957864961 container remove a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:58:47 compute-0 systemd[1]: libpod-conmon-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope: Deactivated successfully.
Oct 07 14:58:47 compute-0 podman[427592]: 2025-10-07 14:58:47.308159587 +0000 UTC m=+0.070621231 container create 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:58:47 compute-0 podman[427592]: 2025-10-07 14:58:47.26107874 +0000 UTC m=+0.023540414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:58:47 compute-0 systemd[1]: Started libpod-conmon-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope.
Oct 07 14:58:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:58:47 compute-0 podman[427592]: 2025-10-07 14:58:47.44167467 +0000 UTC m=+0.204136314 container init 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:58:47 compute-0 podman[427592]: 2025-10-07 14:58:47.449916908 +0000 UTC m=+0.212378552 container start 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 07 14:58:47 compute-0 podman[427592]: 2025-10-07 14:58:47.515657088 +0000 UTC m=+0.278118742 container attach 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:58:47 compute-0 nova_compute[259550]: 2025-10-07 14:58:47.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:48 compute-0 ceph-mon[74295]: pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 07 14:58:48 compute-0 kind_davinci[427609]: {
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_id": 2,
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "type": "bluestore"
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     },
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_id": 1,
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "type": "bluestore"
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     },
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_id": 0,
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:58:48 compute-0 kind_davinci[427609]:         "type": "bluestore"
Oct 07 14:58:48 compute-0 kind_davinci[427609]:     }
Oct 07 14:58:48 compute-0 kind_davinci[427609]: }
Oct 07 14:58:48 compute-0 systemd[1]: libpod-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Deactivated successfully.
Oct 07 14:58:48 compute-0 systemd[1]: libpod-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Consumed 1.062s CPU time.
Oct 07 14:58:48 compute-0 podman[427592]: 2025-10-07 14:58:48.506921092 +0000 UTC m=+1.269382736 container died 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7-merged.mount: Deactivated successfully.
Oct 07 14:58:48 compute-0 nova_compute[259550]: 2025-10-07 14:58:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:48 compute-0 podman[427592]: 2025-10-07 14:58:48.713569611 +0000 UTC m=+1.476031255 container remove 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 14:58:48 compute-0 systemd[1]: libpod-conmon-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Deactivated successfully.
Oct 07 14:58:48 compute-0 sudo[427488]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:58:48 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:58:48 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:48 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c3df6443-03b5-4591-9a68-e9ea0e21a8f4 does not exist
Oct 07 14:58:48 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 78888be8-3a13-42ae-95db-7ff0727ff714 does not exist
Oct 07 14:58:48 compute-0 sudo[427655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:58:48 compute-0 sudo[427655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:48 compute-0 sudo[427655]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:48 compute-0 sudo[427680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:58:48 compute-0 sudo[427680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:58:48 compute-0 sudo[427680]: pam_unix(sudo:session): session closed for user root
Oct 07 14:58:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 07 14:58:49 compute-0 nova_compute[259550]: 2025-10-07 14:58:49.661 2 DEBUG nova.compute.manager [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:58:49 compute-0 nova_compute[259550]: 2025-10-07 14:58:49.662 2 DEBUG nova.compute.manager [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing instance network info cache due to event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:58:49 compute-0 nova_compute[259550]: 2025-10-07 14:58:49.662 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:58:49 compute-0 nova_compute[259550]: 2025-10-07 14:58:49.663 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:58:49 compute-0 nova_compute[259550]: 2025-10-07 14:58:49.663 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:58:49 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:49 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:58:49 compute-0 ceph-mon[74295]: pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 07 14:58:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 520 KiB/s wr, 86 op/s
Oct 07 14:58:51 compute-0 nova_compute[259550]: 2025-10-07 14:58:51.337 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updated VIF entry in instance network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:58:51 compute-0 nova_compute[259550]: 2025-10-07 14:58:51.338 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:58:51 compute-0 nova_compute[259550]: 2025-10-07 14:58:51.376 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:58:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:51 compute-0 ceph-mon[74295]: pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 520 KiB/s wr, 86 op/s
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:58:52 compute-0 nova_compute[259550]: 2025-10-07 14:58:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:58:53 compute-0 nova_compute[259550]: 2025-10-07 14:58:53.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:54 compute-0 ceph-mon[74295]: pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:58:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:58:56 compute-0 ceph-mon[74295]: pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 14:58:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:58:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 71 op/s
Oct 07 14:58:57 compute-0 nova_compute[259550]: 2025-10-07 14:58:57.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:58 compute-0 ceph-mon[74295]: pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 71 op/s
Oct 07 14:58:58 compute-0 nova_compute[259550]: 2025-10-07 14:58:58.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:58:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct 07 14:58:59 compute-0 ovn_controller[151684]: 2025-10-07T14:58:59Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:6f:6c 10.100.0.13
Oct 07 14:58:59 compute-0 ovn_controller[151684]: 2025-10-07T14:58:59Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:6f:6c 10.100.0.13
Oct 07 14:59:00 compute-0 ceph-mon[74295]: pgmap v2902: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct 07 14:59:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.096 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.096 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct 07 14:59:01 compute-0 podman[427707]: 2025-10-07 14:59:01.083722717 +0000 UTC m=+0.067461697 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 14:59:01 compute-0 podman[427706]: 2025-10-07 14:59:01.085717079 +0000 UTC m=+0.069457319 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 14:59:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:02 compute-0 ceph-mon[74295]: pgmap v2903: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct 07 14:59:02 compute-0 nova_compute[259550]: 2025-10-07 14:59:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:59:03 compute-0 nova_compute[259550]: 2025-10-07 14:59:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ceph-mon[74295]: pgmap v2904: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.445 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.447 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.449 2 INFO nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Terminating instance
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.450 2 DEBUG nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:59:04 compute-0 kernel: tap345470b2-e0 (unregistering): left promiscuous mode
Oct 07 14:59:04 compute-0 NetworkManager[44949]: <info>  [1759849144.5477] device (tap345470b2-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:59:04 compute-0 ovn_controller[151684]: 2025-10-07T14:59:04Z|01639|binding|INFO|Releasing lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 from this chassis (sb_readonly=0)
Oct 07 14:59:04 compute-0 ovn_controller[151684]: 2025-10-07T14:59:04Z|01640|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 down in Southbound
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ovn_controller[151684]: 2025-10-07T14:59:04Z|01641|binding|INFO|Removing iface tap345470b2-e0 ovn-installed in OVS
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.571 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:6f:6c 10.100.0.13'], port_security=['fa:16:3e:dc:6f:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '27840587-4b28-416e-a84d-176918007fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'abaa319c-3a20-4f59-984b-df0700b81045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a unbound from our chassis
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.573 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f42870e5-5e3e-4d2b-a157-3f7e2e488abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[952b07a8-5d60-4f47-851a-53cdabb475a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 07 14:59:04 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 14.327s CPU time.
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.629 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9373ba5f-1df1-4a2c-a4d7-a90044bba633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 systemd-machined[214580]: Machine qemu-184-instance-00000096 terminated.
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca8754-979a-4cf0-8bad-e6d089db066f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43f3d6d6-778f-4cfb-9257-fc95cfe8cd0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 38312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427759, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.692 2 INFO nova.virt.libvirt.driver [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance destroyed successfully.
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.693 2 DEBUG nova.objects.instance [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[756d78df-9ebd-419c-b7ff-4a1d4c59e43b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967010, 'tstamp': 967010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427767, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967014, 'tstamp': 967014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427767, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.696 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:04 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.704 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.708 2 DEBUG nova.virt.libvirt.vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:58:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:58:44Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.709 2 DEBUG nova.network.os_vif_util [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.709 2 DEBUG nova.network.os_vif_util [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.710 2 DEBUG os_vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap345470b2-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.719 2 INFO os_vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0')
Oct 07 14:59:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:59:04 compute-0 nova_compute[259550]: 2025-10-07 14:59:04.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.393 2 INFO nova.virt.libvirt.driver [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deleting instance files /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6_del
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.394 2 INFO nova.virt.libvirt.driver [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deletion of /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6_del complete
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.454 2 INFO nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG oslo.service.loopingcall [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG nova.network.neutron [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.480 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.481 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.481 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:59:05 compute-0 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:59:06 compute-0 ceph-mon[74295]: pgmap v2905: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 07 14:59:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:06.708 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:59:06 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:06.709 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.736 2 DEBUG nova.network.neutron [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.759 2 INFO nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 1.30 seconds to deallocate network for instance.
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.802 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.802 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.850 2 DEBUG nova.compute.manager [req-b0f3c282-8c61-47d4-8b91-78388f442310 req-fee7adee-e552-4141-9601-5282d9102dc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-deleted-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:06 compute-0 nova_compute[259550]: 2025-10-07 14:59:06.879 2 DEBUG oslo_concurrency.processutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 07 14:59:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:59:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442935267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.316 2 DEBUG oslo_concurrency.processutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.324 2 DEBUG nova.compute.provider_tree [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:59:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/442935267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.565 2 DEBUG nova.scheduler.client.report [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.588 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.611 2 INFO nova.scheduler.client.report [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 27840587-4b28-416e-a84d-176918007fb6
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.622 2 DEBUG nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.624 2 DEBUG nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.624 2 WARNING nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received unexpected event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with vm_state deleted and task_state None.
Oct 07 14:59:07 compute-0 nova_compute[259550]: 2025-10-07 14:59:07.689 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:08 compute-0 ceph-mon[74295]: pgmap v2906: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 07 14:59:08 compute-0 nova_compute[259550]: 2025-10-07 14:59:08.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 07 14:59:08 compute-0 nova_compute[259550]: 2025-10-07 14:59:08.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:08 compute-0 nova_compute[259550]: 2025-10-07 14:59:08.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.055 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.056 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.056 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.057 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.057 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.058 2 INFO nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Terminating instance
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.059 2 DEBUG nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 14:59:09 compute-0 kernel: tap6d023b7b-1f (unregistering): left promiscuous mode
Oct 07 14:59:09 compute-0 NetworkManager[44949]: <info>  [1759849149.1803] device (tap6d023b7b-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 14:59:09 compute-0 ovn_controller[151684]: 2025-10-07T14:59:09Z|01642|binding|INFO|Releasing lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 from this chassis (sb_readonly=0)
Oct 07 14:59:09 compute-0 ovn_controller[151684]: 2025-10-07T14:59:09Z|01643|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 down in Southbound
Oct 07 14:59:09 compute-0 ovn_controller[151684]: 2025-10-07T14:59:09Z|01644|binding|INFO|Removing iface tap6d023b7b-1f ovn-installed in OVS
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.192 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:43:f9 10.100.0.7'], port_security=['fa:16:3e:b5:43:f9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5dbed140-a0ed-4dbb-b782-da386ad68471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21c14362-d44c-45e9-a632-669517d5fa0c ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6d023b7b-1ffe-467f-b731-8b53fc063b54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.193 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6d023b7b-1ffe-467f-b731-8b53fc063b54 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a unbound from our chassis
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.194 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb9cd75-f985-4dc1-8dc9-9924c679e7fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.197 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a namespace which is not needed anymore
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 07 14:59:09 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 15.666s CPU time.
Oct 07 14:59:09 compute-0 systemd-machined[214580]: Machine qemu-183-instance-00000095 terminated.
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.297 2 INFO nova.virt.libvirt.driver [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance destroyed successfully.
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.298 2 DEBUG nova.objects.instance [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.311 2 DEBUG nova.virt.libvirt.vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:58:07Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.311 2 DEBUG nova.network.os_vif_util [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.312 2 DEBUG nova.network.os_vif_util [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.312 2 DEBUG os_vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d023b7b-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.320 2 INFO os_vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f')
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : haproxy version is 2.8.14-c23fe91
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : path to executable is /usr/sbin/haproxy
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : Exiting Master process...
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : Exiting Master process...
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [ALERT]    (426261) : Current worker (426263) exited with code 143 (Terminated)
Oct 07 14:59:09 compute-0 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : All workers exited. Exiting... (0)
Oct 07 14:59:09 compute-0 systemd[1]: libpod-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope: Deactivated successfully.
Oct 07 14:59:09 compute-0 podman[427842]: 2025-10-07 14:59:09.366372647 +0000 UTC m=+0.063536533 container died e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df-userdata-shm.mount: Deactivated successfully.
Oct 07 14:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e09a94c0c4d5c30406c651a2371f59d3e0389c901886c205db67c6aa86d5a6-merged.mount: Deactivated successfully.
Oct 07 14:59:09 compute-0 podman[427842]: 2025-10-07 14:59:09.482077139 +0000 UTC m=+0.179241025 container cleanup e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 07 14:59:09 compute-0 systemd[1]: libpod-conmon-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope: Deactivated successfully.
Oct 07 14:59:09 compute-0 podman[427895]: 2025-10-07 14:59:09.570572971 +0000 UTC m=+0.065845834 container remove e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.576 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[166c430d-3d4d-4c6e-b6f3-78146d3c1efb]: (4, ('Tue Oct  7 02:59:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a (e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df)\ne648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df\nTue Oct  7 02:59:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a (e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df)\ne648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a49eb191-faca-4438-b7d9-0318978c7803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.579 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 kernel: tap249c8beb-30: left promiscuous mode
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c95d7af3-0f53-44a2-8a54-8c558ec2687f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0251e70d-e8db-4a00-a381-e7fefb1fd46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0773d553-4685-4055-887d-7b2c3e70ce68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad5e0d2-3d90-47fa-8f72-d51ec25f6da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966988, 'reachable_time': 43718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427910, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d249c8beb\x2d3ee0\x2d47fa\x2dbbca\x2d7ba20e00ad6a.mount: Deactivated successfully.
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.646 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 14:59:09 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.646 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5628c0-85f4-4984-812c-b54faffc3a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG nova.compute.manager [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG nova.compute.manager [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.700 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.700 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.791 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.793 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:59:09 compute-0 nova_compute[259550]: 2025-10-07 14:59:09.793 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.179 2 INFO nova.virt.libvirt.driver [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deleting instance files /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471_del
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.179 2 INFO nova.virt.libvirt.driver [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deletion of /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471_del complete
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.240 2 INFO nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 1.18 seconds to destroy the instance on the hypervisor.
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.240 2 DEBUG oslo.service.loopingcall [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.241 2 DEBUG nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.241 2 DEBUG nova.network.neutron [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 14:59:10 compute-0 ceph-mon[74295]: pgmap v2907: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 07 14:59:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 817 KiB/s wr, 67 op/s
Oct 07 14:59:10 compute-0 nova_compute[259550]: 2025-10-07 14:59:10.993 2 DEBUG nova.network.neutron [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.017 2 INFO nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 0.78 seconds to deallocate network for instance.
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.074 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.074 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.133 2 DEBUG oslo_concurrency.processutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.351 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.352 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.412 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:59:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:59:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034001136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.600 2 DEBUG oslo_concurrency.processutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.607 2 DEBUG nova.compute.provider_tree [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.620 2 DEBUG nova.scheduler.client.report [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.641 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.663 2 INFO nova.scheduler.client.report [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 5dbed140-a0ed-4dbb-b782-da386ad68471
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.760 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.874 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.874 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.875 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.875 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.876 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.876 2 WARNING nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received unexpected event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with vm_state deleted and task_state None.
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.877 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-deleted-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.877 2 INFO nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Neutron deleted interface 6d023b7b-1ffe-467f-b731-8b53fc063b54; detaching it from the instance and deleting it from the info cache
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.878 2 DEBUG nova.network.neutron [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.881 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Detach interface failed, port_id=6d023b7b-1ffe-467f-b731-8b53fc063b54, reason: Instance 5dbed140-a0ed-4dbb-b782-da386ad68471 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 14:59:11 compute-0 nova_compute[259550]: 2025-10-07 14:59:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:12 compute-0 ceph-mon[74295]: pgmap v2908: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 817 KiB/s wr, 67 op/s
Oct 07 14:59:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1034001136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct 07 14:59:13 compute-0 nova_compute[259550]: 2025-10-07 14:59:13.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:14 compute-0 podman[427935]: 2025-10-07 14:59:14.102733604 +0000 UTC m=+0.088118623 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:14 compute-0 podman[427934]: 2025-10-07 14:59:14.103106844 +0000 UTC m=+0.087839556 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 07 14:59:14 compute-0 nova_compute[259550]: 2025-10-07 14:59:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:14 compute-0 ceph-mon[74295]: pgmap v2909: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct 07 14:59:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:59:15 compute-0 nova_compute[259550]: 2025-10-07 14:59:15.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:15 compute-0 nova_compute[259550]: 2025-10-07 14:59:15.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:15 compute-0 nova_compute[259550]: 2025-10-07 14:59:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:15 compute-0 nova_compute[259550]: 2025-10-07 14:59:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 14:59:16 compute-0 ceph-mon[74295]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:59:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:16.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:59:16 compute-0 nova_compute[259550]: 2025-10-07 14:59:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:18 compute-0 ceph-mon[74295]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 07 14:59:18 compute-0 nova_compute[259550]: 2025-10-07 14:59:18.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 47 op/s
Oct 07 14:59:19 compute-0 nova_compute[259550]: 2025-10-07 14:59:19.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:19 compute-0 nova_compute[259550]: 2025-10-07 14:59:19.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849144.690233, 27840587-4b28-416e-a84d-176918007fb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:59:19 compute-0 nova_compute[259550]: 2025-10-07 14:59:19.691 2 INFO nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Stopped (Lifecycle Event)
Oct 07 14:59:19 compute-0 nova_compute[259550]: 2025-10-07 14:59:19.719 2 DEBUG nova.compute.manager [None req-9b1ca84a-24a2-4f0b-bf9f-7adaad1cc013 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:20 compute-0 ceph-mon[74295]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 47 op/s
Oct 07 14:59:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:59:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:22 compute-0 ceph-mon[74295]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:59:22
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 14:59:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:59:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 14:59:23 compute-0 nova_compute[259550]: 2025-10-07 14:59:23.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:24 compute-0 nova_compute[259550]: 2025-10-07 14:59:24.295 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849149.2936287, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:59:24 compute-0 nova_compute[259550]: 2025-10-07 14:59:24.295 2 INFO nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Stopped (Lifecycle Event)
Oct 07 14:59:24 compute-0 nova_compute[259550]: 2025-10-07 14:59:24.316 2 DEBUG nova.compute.manager [None req-c3b6f157-05da-4124-b253-22904e492994 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:24 compute-0 nova_compute[259550]: 2025-10-07 14:59:24.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:24 compute-0 ceph-mon[74295]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 07 14:59:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 07 14:59:24 compute-0 nova_compute[259550]: 2025-10-07 14:59:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.014 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:59:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302169912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.476 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:25 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/302169912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.641 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3653MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.862 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.863 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.923 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.996 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 14:59:25 compute-0 nova_compute[259550]: 2025-10-07 14:59:25.996 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.011 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.031 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.051 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:59:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2045185835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.488 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.494 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.516 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.541 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 14:59:26 compute-0 nova_compute[259550]: 2025-10-07 14:59:26.541 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:26 compute-0 ceph-mon[74295]: pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 07 14:59:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2045185835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.695501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167695547, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3387322, "memory_usage": 3448632, "flush_reason": "Manual Compaction"}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167730629, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3320738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59043, "largest_seqno": 61094, "table_properties": {"data_size": 3311418, "index_size": 5877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18811, "raw_average_key_size": 20, "raw_value_size": 3292958, "raw_average_value_size": 3521, "num_data_blocks": 261, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848944, "oldest_key_time": 1759848944, "file_creation_time": 1759849167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 35189 microseconds, and 7551 cpu microseconds.
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.730685) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3320738 bytes OK
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.730711) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752689) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752737) EVENT_LOG_v1 {"time_micros": 1759849167752726, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3378731, prev total WAL file size 3379368, number of live WAL files 2.
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.757412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3242KB)], [140(8045KB)]
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167757464, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11559498, "oldest_snapshot_seqno": -1}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8069 keys, 9825013 bytes, temperature: kUnknown
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167903903, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9825013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9773608, "index_size": 30108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210308, "raw_average_key_size": 26, "raw_value_size": 9632309, "raw_average_value_size": 1193, "num_data_blocks": 1170, "num_entries": 8069, "num_filter_entries": 8069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.904342) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9825013 bytes
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.930327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.8 rd, 67.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8583, records dropped: 514 output_compression: NoCompression
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.930390) EVENT_LOG_v1 {"time_micros": 1759849167930367, "job": 86, "event": "compaction_finished", "compaction_time_micros": 146621, "compaction_time_cpu_micros": 27754, "output_level": 6, "num_output_files": 1, "total_output_size": 9825013, "num_input_records": 8583, "num_output_records": 8069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167931477, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167933600, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.757260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:27 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:28 compute-0 nova_compute[259550]: 2025-10-07 14:59:28.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:29 compute-0 nova_compute[259550]: 2025-10-07 14:59:29.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:30 compute-0 ceph-mon[74295]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:31 compute-0 nova_compute[259550]: 2025-10-07 14:59:31.542 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:31 compute-0 nova_compute[259550]: 2025-10-07 14:59:31.542 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 14:59:31 compute-0 nova_compute[259550]: 2025-10-07 14:59:31.543 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 14:59:31 compute-0 nova_compute[259550]: 2025-10-07 14:59:31.560 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 14:59:31 compute-0 nova_compute[259550]: 2025-10-07 14:59:31.561 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 14:59:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:32 compute-0 ceph-mon[74295]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:32 compute-0 podman[428026]: 2025-10-07 14:59:32.070646325 +0000 UTC m=+0.062117605 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 14:59:32 compute-0 podman[428027]: 2025-10-07 14:59:32.072635677 +0000 UTC m=+0.058472778 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 07 14:59:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 14:59:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:59:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 14:59:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 14:59:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 14:59:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 14:59:33 compute-0 nova_compute[259550]: 2025-10-07 14:59:33.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:34 compute-0 ceph-mon[74295]: pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:34 compute-0 nova_compute[259550]: 2025-10-07 14:59:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:36 compute-0 ceph-mon[74295]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:38 compute-0 ceph-mon[74295]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:38 compute-0 nova_compute[259550]: 2025-10-07 14:59:38.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:39 compute-0 nova_compute[259550]: 2025-10-07 14:59:39.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:40 compute-0 ceph-mon[74295]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:42 compute-0 ceph-mon[74295]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.635 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.635 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.652 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.721 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.722 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.732 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.733 2 INFO nova.compute.claims [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Claim successful on node compute-0.ctlplane.example.com
Oct 07 14:59:43 compute-0 nova_compute[259550]: 2025-10-07 14:59:43.819 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:44 compute-0 ceph-mon[74295]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 14:59:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626730701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.318 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.324 2 DEBUG nova.compute.provider_tree [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.345 2 DEBUG nova.scheduler.client.report [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.374 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.375 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.417 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.417 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.439 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.455 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.545 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.546 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.546 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating image(s)
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.563 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.584 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.603 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.607 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.644 2 DEBUG nova.policy [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a7552cec1354175be418fba9a7588af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.680 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.681 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.682 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.682 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.704 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:44 compute-0 nova_compute[259550]: 2025-10-07 14:59:44.707 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a6b18035-4aef-4825-90e6-799173979626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:45 compute-0 podman[428183]: 2025-10-07 14:59:45.109844357 +0000 UTC m=+0.090135247 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:45 compute-0 podman[428184]: 2025-10-07 14:59:45.114727076 +0000 UTC m=+0.096346171 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 14:59:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2626730701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.465 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a6b18035-4aef-4825-90e6-799173979626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.517 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] resizing rbd image a6b18035-4aef-4825-90e6-799173979626_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.734 2 DEBUG nova.objects.instance [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.748 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Ensure instance console log exists: /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:45 compute-0 nova_compute[259550]: 2025-10-07 14:59:45.750 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:46 compute-0 nova_compute[259550]: 2025-10-07 14:59:46.338 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Successfully created port: 9e613de1-4d71-4293-836f-5f1e121f0bb5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 14:59:46 compute-0 ceph-mon[74295]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 14:59:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 766 KiB/s wr, 13 op/s
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.493 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Successfully updated port: 9e613de1-4d71-4293-836f-5f1e121f0bb5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.508 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.508 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.509 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG nova.compute.manager [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG nova.compute.manager [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:59:47 compute-0 nova_compute[259550]: 2025-10-07 14:59:47.628 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 14:59:48 compute-0 ceph-mon[74295]: pgmap v2926: 305 pgs: 305 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 766 KiB/s wr, 13 op/s
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.728 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.752 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.752 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance network_info: |[{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.753 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.753 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.755 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start _get_guest_xml network_info=[{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.759 2 WARNING nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.764 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.764 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.772 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.773 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.774 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.774 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.777 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.777 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 14:59:48 compute-0 nova_compute[259550]: 2025-10-07 14:59:48.780 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 07 14:59:48 compute-0 sudo[428299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:48 compute-0 sudo[428299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:48 compute-0 sudo[428299]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 sudo[428343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:59:49 compute-0 sudo[428343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 sudo[428343]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 sudo[428368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:49 compute-0 sudo[428368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 sudo[428368]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 sudo[428393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 14:59:49 compute-0 sudo[428393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398001066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.294 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.315 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.320 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3398001066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:59:49 compute-0 sudo[428393]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:49 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7150cb73-734b-4e70-befa-e81b51a9ae47 does not exist
Oct 07 14:59:49 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0261e093-26f1-4ab7-8475-25462c3b8b2a does not exist
Oct 07 14:59:49 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bfb38346-fab7-4632-9653-228a509f115b does not exist
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:59:49 compute-0 sudo[428488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:49 compute-0 sudo[428488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 sudo[428488]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 14:59:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256694600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:59:49 compute-0 sudo[428513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:59:49 compute-0 sudo[428513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 sudo[428513]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.807 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.808 2 DEBUG nova.virt.libvirt.vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:59:44Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.808 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.809 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.811 2 DEBUG nova.objects.instance [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.830 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] End _get_guest_xml xml=<domain type="kvm">
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <uuid>a6b18035-4aef-4825-90e6-799173979626</uuid>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <name>instance-00000097</name>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <metadata>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:name>tempest-TestSnapshotPattern-server-821190433</nova:name>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 14:59:48</nova:creationTime>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:user uuid="1a7552cec1354175be418fba9a7588af">tempest-TestSnapshotPattern-1480624877-project-member</nova:user>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:project uuid="aa7bd91eb3b040c89929aa23c9775dc9">tempest-TestSnapshotPattern-1480624877</nova:project>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <nova:port uuid="9e613de1-4d71-4293-836f-5f1e121f0bb5">
Oct 07 14:59:49 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </metadata>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <system>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="serial">a6b18035-4aef-4825-90e6-799173979626</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="uuid">a6b18035-4aef-4825-90e6-799173979626</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </system>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <os>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </os>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <features>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <apic/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </features>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </clock>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </cpu>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   <devices>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a6b18035-4aef-4825-90e6-799173979626_disk">
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/a6b18035-4aef-4825-90e6-799173979626_disk.config">
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </source>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 14:59:49 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       </auth>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </disk>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:b8:68:98"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <target dev="tap9e613de1-4d"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </interface>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/console.log" append="off"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </serial>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <video>
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </video>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </rng>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 14:59:49 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 14:59:49 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 14:59:49 compute-0 nova_compute[259550]:   </devices>
Oct 07 14:59:49 compute-0 nova_compute[259550]: </domain>
Oct 07 14:59:49 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Preparing to wait for external event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.834 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.834 2 DEBUG nova.virt.libvirt.vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:59:44Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.835 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.835 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.836 2 DEBUG os_vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e613de1-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e613de1-4d, col_values=(('external_ids', {'iface-id': '9e613de1-4d71-4293-836f-5f1e121f0bb5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:68:98', 'vm-uuid': 'a6b18035-4aef-4825-90e6-799173979626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:49 compute-0 NetworkManager[44949]: <info>  [1759849189.8447] manager: (tap9e613de1-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 14:59:49 compute-0 sudo[428540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:49 compute-0 sudo[428540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.852 2 INFO os_vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d')
Oct 07 14:59:49 compute-0 sudo[428540]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.894 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.895 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:49 compute-0 sudo[428567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 14:59:49 compute-0 sudo[428567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.921 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No VIF found with MAC fa:16:3e:b8:68:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.925 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Using config drive
Oct 07 14:59:49 compute-0 nova_compute[259550]: 2025-10-07 14:59:49.947 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.231358727 +0000 UTC m=+0.048152715 container create df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.203418167 +0000 UTC m=+0.020212175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:50 compute-0 systemd[1]: Started libpod-conmon-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope.
Oct 07 14:59:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.384100979 +0000 UTC m=+0.200895017 container init df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.391762642 +0000 UTC m=+0.208556630 container start df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:59:50 compute-0 quirky_ishizaka[428666]: 167 167
Oct 07 14:59:50 compute-0 systemd[1]: libpod-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope: Deactivated successfully.
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.407783036 +0000 UTC m=+0.224577024 container attach df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.409031059 +0000 UTC m=+0.225825047 container died df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.461 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating config drive at /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.467 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8xqf73c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:50 compute-0 ceph-mon[74295]: pgmap v2927: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 14:59:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1256694600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 14:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcee7878689e2f91fb4e4297519b7d022c959de8cb14dd51107eff5e7954a42-merged.mount: Deactivated successfully.
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.616 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8xqf73c" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.642 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.646 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config a6b18035-4aef-4825-90e6-799173979626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 14:59:50 compute-0 podman[428650]: 2025-10-07 14:59:50.66147266 +0000 UTC m=+0.478266648 container remove df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 14:59:50 compute-0 systemd[1]: libpod-conmon-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope: Deactivated successfully.
Oct 07 14:59:50 compute-0 podman[428728]: 2025-10-07 14:59:50.83867941 +0000 UTC m=+0.045863795 container create f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.874 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config a6b18035-4aef-4825-90e6-799173979626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.875 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deleting local config drive /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config because it was imported into RBD.
Oct 07 14:59:50 compute-0 systemd[1]: Started libpod-conmon-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope.
Oct 07 14:59:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:50 compute-0 podman[428728]: 2025-10-07 14:59:50.817352135 +0000 UTC m=+0.024536550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:59:50 compute-0 kernel: tap9e613de1-4d: entered promiscuous mode
Oct 07 14:59:50 compute-0 podman[428728]: 2025-10-07 14:59:50.939326093 +0000 UTC m=+0.146510488 container init f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 14:59:50 compute-0 ovn_controller[151684]: 2025-10-07T14:59:50Z|01645|binding|INFO|Claiming lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 for this chassis.
Oct 07 14:59:50 compute-0 ovn_controller[151684]: 2025-10-07T14:59:50Z|01646|binding|INFO|9e613de1-4d71-4293-836f-5f1e121f0bb5: Claiming fa:16:3e:b8:68:98 10.100.0.8
Oct 07 14:59:50 compute-0 NetworkManager[44949]: <info>  [1759849190.9449] manager: (tap9e613de1-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct 07 14:59:50 compute-0 nova_compute[259550]: 2025-10-07 14:59:50.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:50 compute-0 podman[428728]: 2025-10-07 14:59:50.949833362 +0000 UTC m=+0.157017747 container start f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:59:50 compute-0 podman[428728]: 2025-10-07 14:59:50.955621205 +0000 UTC m=+0.162805590 container attach f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.960 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:68:98 10.100.0.8'], port_security=['fa:16:3e:b8:68:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6b18035-4aef-4825-90e6-799173979626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9e613de1-4d71-4293-836f-5f1e121f0bb5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.962 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9e613de1-4d71-4293-836f-5f1e121f0bb5 in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 bound to our chassis
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.964 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6
Oct 07 14:59:50 compute-0 systemd-udevd[428766]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3b0929-0e5f-4fad-9da4-835b2f9edd29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.980 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5714012d-b1 in ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 07 14:59:50 compute-0 NetworkManager[44949]: <info>  [1759849190.9927] device (tap9e613de1-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 14:59:50 compute-0 NetworkManager[44949]: <info>  [1759849190.9941] device (tap9e613de1-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.984 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5714012d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.985 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f22d0fa-9a97-4b4b-a10f-87a34446e540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:50 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.987 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[335887ec-c008-43d4-ba4c-790952277c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:50 compute-0 systemd-machined[214580]: New machine qemu-185-instance-00000097.
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.001 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[228ae7ce-e9f5-4ca8-aeb8-098182970c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e575df02-254e-450b-8096-783077102a26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_controller[151684]: 2025-10-07T14:59:51Z|01647|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 ovn-installed in OVS
Oct 07 14:59:51 compute-0 ovn_controller[151684]: 2025-10-07T14:59:51Z|01648|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 up in Southbound
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.060 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[49062261-605e-44c8-9dbe-b734bb857b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 systemd-udevd[428770]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 14:59:51 compute-0 NetworkManager[44949]: <info>  [1759849191.0691] manager: (tap5714012d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b22645-9df5-4ae8-abb6-88fb0951c0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd5a55d-18dc-4de4-9df1-2ac527db3e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.111 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0d384bb9-3a3f-4b13-b15e-c36c7797e720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 NetworkManager[44949]: <info>  [1759849191.1378] device (tap5714012d-b0): carrier: link connected
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.140 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb70f400-e95f-4735-9b2f-17e1708e6183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b22d82c8-e6cd-4f32-8511-1ca35c35da1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428799, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.178 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3241a81-2367-4b16-b6d0-efe9165e2dbe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:84e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977469, 'tstamp': 977469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428800, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.200 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7db7ba1b-cc1b-416b-bcf2-dd1e43229dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428801, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[173e6b7c-d844-40a5-bfaf-0c419f1e393f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.287 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db3ca3c5-f6ea-42b9-a810-b678cdf30b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.288 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 NetworkManager[44949]: <info>  [1759849191.2955] manager: (tap5714012d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Oct 07 14:59:51 compute-0 kernel: tap5714012d-b0: entered promiscuous mode
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.301 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 14:59:51 compute-0 ovn_controller[151684]: 2025-10-07T14:59:51Z|01649|binding|INFO|Releasing lport 8ae40a35-baff-4538-b31d-4c05f61bc2b8 from this chassis (sb_readonly=0)
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.323 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3675b6fb-ca3e-4b9c-80e0-dcdff84b1111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.325 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: global
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     log         /dev/log local0 debug
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     log-tag     haproxy-metadata-proxy-5714012d-b182-4fef-9241-3afcb9c700d6
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     user        root
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     group       root
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     maxconn     1024
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     pidfile     /var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     daemon
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: defaults
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     log global
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     mode http
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     option httplog
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     option dontlognull
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     option http-server-close
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     option forwardfor
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     retries                 3
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     timeout http-request    30s
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     timeout connect         30s
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     timeout client          32s
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     timeout server          32s
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     timeout http-keep-alive 30s
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: listen listener
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     bind 169.254.169.254:80
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     server metadata /var/lib/neutron/metadata_proxy
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:     http-request add-header X-OVN-Network-ID 5714012d-b182-4fef-9241-3afcb9c700d6
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 07 14:59:51 compute-0 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.326 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'env', 'PROCESS_TAG=haproxy-5714012d-b182-4fef-9241-3afcb9c700d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5714012d-b182-4fef-9241-3afcb9c700d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.465 2 DEBUG nova.compute.manager [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:51 compute-0 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG nova.compute.manager [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Processing event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 14:59:51 compute-0 podman[428836]: 2025-10-07 14:59:51.720658701 +0000 UTC m=+0.058372606 container create c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 14:59:51 compute-0 systemd[1]: Started libpod-conmon-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope.
Oct 07 14:59:51 compute-0 podman[428836]: 2025-10-07 14:59:51.685375438 +0000 UTC m=+0.023089363 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 07 14:59:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0737f8a91a4236f462b27e601747872518f52d58de889974962d2146e39c7d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:51 compute-0 podman[428836]: 2025-10-07 14:59:51.813651472 +0000 UTC m=+0.151365407 container init c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 14:59:51 compute-0 podman[428836]: 2025-10-07 14:59:51.821354786 +0000 UTC m=+0.159068701 container start c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:51 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : New worker (428909) forked
Oct 07 14:59:51 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : Loading success.
Oct 07 14:59:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:52 compute-0 exciting_lumiere[428748]: --> passed data devices: 0 physical, 3 LVM
Oct 07 14:59:52 compute-0 exciting_lumiere[428748]: --> relative data size: 1.0
Oct 07 14:59:52 compute-0 exciting_lumiere[428748]: --> All data devices are unavailable
Oct 07 14:59:52 compute-0 systemd[1]: libpod-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Deactivated successfully.
Oct 07 14:59:52 compute-0 systemd[1]: libpod-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Consumed 1.012s CPU time.
Oct 07 14:59:52 compute-0 podman[428728]: 2025-10-07 14:59:52.046227228 +0000 UTC m=+1.253411623 container died f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509-merged.mount: Deactivated successfully.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.295 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.295185, a6b18035-4aef-4825-90e6-799173979626 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.297 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Started (Lifecycle Event)
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.299 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.306 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.310 2 INFO nova.virt.libvirt.driver [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance spawned successfully.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.311 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.376 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.379 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 14:59:52 compute-0 podman[428728]: 2025-10-07 14:59:52.426337237 +0000 UTC m=+1.633521622 container remove f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:59:52 compute-0 systemd[1]: libpod-conmon-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Deactivated successfully.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.453 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.453 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.2952929, a6b18035-4aef-4825-90e6-799173979626 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.453 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Paused (Lifecycle Event)
Oct 07 14:59:52 compute-0 sudo[428567]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.480 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.486 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.306175, a6b18035-4aef-4825-90e6-799173979626 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.486 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Resumed (Lifecycle Event)
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.489 2 INFO nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 7.94 seconds to spawn the instance on the hypervisor.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.489 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:52 compute-0 ceph-mon[74295]: pgmap v2928: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:59:52 compute-0 sudo[428942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:52 compute-0 sudo[428942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:52 compute-0 sudo[428942]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:52 compute-0 sudo[428967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:59:52 compute-0 sudo[428967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:52 compute-0 sudo[428967]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:52 compute-0 sudo[428992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:52 compute-0 sudo[428992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:52 compute-0 sudo[428992]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.655 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.661 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.695 2 INFO nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 9.00 seconds to build instance.
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 14:59:52 compute-0 nova_compute[259550]: 2025-10-07 14:59:52.713 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:52 compute-0 sudo[429017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 14:59:52 compute-0 sudo[429017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.075084206 +0000 UTC m=+0.043666457 container create 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:59:53 compute-0 systemd[1]: Started libpod-conmon-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope.
Oct 07 14:59:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.056420632 +0000 UTC m=+0.025002913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.166720161 +0000 UTC m=+0.135302442 container init 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.175259897 +0000 UTC m=+0.143842158 container start 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.179838578 +0000 UTC m=+0.148420919 container attach 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:53 compute-0 optimistic_neumann[429100]: 167 167
Oct 07 14:59:53 compute-0 systemd[1]: libpod-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope: Deactivated successfully.
Oct 07 14:59:53 compute-0 conmon[429100]: conmon 3b2fd1f2e66c6189a99e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope/container/memory.events
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.184328017 +0000 UTC m=+0.152910288 container died 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:59:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b63eb7c8616cdd72a656982b3b992999736a5950b42ef08ea922a5ec3841dc35-merged.mount: Deactivated successfully.
Oct 07 14:59:53 compute-0 podman[429083]: 2025-10-07 14:59:53.249011209 +0000 UTC m=+0.217593470 container remove 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 14:59:53 compute-0 systemd[1]: libpod-conmon-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope: Deactivated successfully.
Oct 07 14:59:53 compute-0 podman[429124]: 2025-10-07 14:59:53.440890267 +0000 UTC m=+0.041829828 container create 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 14:59:53 compute-0 systemd[1]: Started libpod-conmon-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope.
Oct 07 14:59:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:53 compute-0 podman[429124]: 2025-10-07 14:59:53.423412794 +0000 UTC m=+0.024352375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:53 compute-0 podman[429124]: 2025-10-07 14:59:53.517765212 +0000 UTC m=+0.118704783 container init 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 07 14:59:53 compute-0 podman[429124]: 2025-10-07 14:59:53.526113483 +0000 UTC m=+0.127053034 container start 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:59:53 compute-0 podman[429124]: 2025-10-07 14:59:53.530580511 +0000 UTC m=+0.131520102 container attach 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.558 2 DEBUG nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.561 2 WARNING nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received unexpected event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with vm_state active and task_state None.
Oct 07 14:59:53 compute-0 nova_compute[259550]: 2025-10-07 14:59:53.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]: {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     "0": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "devices": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "/dev/loop3"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             ],
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_name": "ceph_lv0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_size": "21470642176",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "name": "ceph_lv0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "tags": {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_name": "ceph",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.crush_device_class": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.encrypted": "0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_id": "0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.vdo": "0"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             },
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "vg_name": "ceph_vg0"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         }
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     ],
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     "1": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "devices": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "/dev/loop4"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             ],
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_name": "ceph_lv1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_size": "21470642176",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "name": "ceph_lv1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "tags": {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_name": "ceph",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.crush_device_class": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.encrypted": "0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_id": "1",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.vdo": "0"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             },
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "vg_name": "ceph_vg1"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         }
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     ],
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     "2": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "devices": [
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "/dev/loop5"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             ],
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_name": "ceph_lv2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_size": "21470642176",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "name": "ceph_lv2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "tags": {
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.cluster_name": "ceph",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.crush_device_class": "",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.encrypted": "0",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osd_id": "2",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:                 "ceph.vdo": "0"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             },
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "type": "block",
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:             "vg_name": "ceph_vg2"
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:         }
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]:     ]
Oct 07 14:59:54 compute-0 vibrant_mccarthy[429141]: }
Oct 07 14:59:54 compute-0 systemd[1]: libpod-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope: Deactivated successfully.
Oct 07 14:59:54 compute-0 podman[429150]: 2025-10-07 14:59:54.400080992 +0000 UTC m=+0.026947155 container died 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:59:54 compute-0 ceph-mon[74295]: pgmap v2929: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 07 14:59:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682-merged.mount: Deactivated successfully.
Oct 07 14:59:54 compute-0 nova_compute[259550]: 2025-10-07 14:59:54.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:54 compute-0 NetworkManager[44949]: <info>  [1759849194.8525] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Oct 07 14:59:54 compute-0 NetworkManager[44949]: <info>  [1759849194.8534] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Oct 07 14:59:54 compute-0 nova_compute[259550]: 2025-10-07 14:59:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 07 14:59:54 compute-0 nova_compute[259550]: 2025-10-07 14:59:54.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:54 compute-0 ovn_controller[151684]: 2025-10-07T14:59:54Z|01650|binding|INFO|Releasing lport 8ae40a35-baff-4538-b31d-4c05f61bc2b8 from this chassis (sb_readonly=0)
Oct 07 14:59:54 compute-0 nova_compute[259550]: 2025-10-07 14:59:54.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:55 compute-0 nova_compute[259550]: 2025-10-07 14:59:55.272 2 DEBUG nova.compute.manager [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 14:59:55 compute-0 nova_compute[259550]: 2025-10-07 14:59:55.273 2 DEBUG nova.compute.manager [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 14:59:55 compute-0 nova_compute[259550]: 2025-10-07 14:59:55.273 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 14:59:55 compute-0 nova_compute[259550]: 2025-10-07 14:59:55.274 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 14:59:55 compute-0 nova_compute[259550]: 2025-10-07 14:59:55.274 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 14:59:55 compute-0 podman[429150]: 2025-10-07 14:59:55.419713887 +0000 UTC m=+1.046580030 container remove 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 14:59:55 compute-0 systemd[1]: libpod-conmon-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope: Deactivated successfully.
Oct 07 14:59:55 compute-0 sudo[429017]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:55 compute-0 sudo[429166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:55 compute-0 sudo[429166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:55 compute-0 sudo[429166]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:55 compute-0 sudo[429191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 14:59:55 compute-0 sudo[429191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:55 compute-0 sudo[429191]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:55 compute-0 sudo[429216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:55 compute-0 sudo[429216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:55 compute-0 sudo[429216]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:55 compute-0 sudo[429241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 14:59:55 compute-0 sudo[429241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.058953044 +0000 UTC m=+0.037460672 container create 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:56 compute-0 systemd[1]: Started libpod-conmon-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope.
Oct 07 14:59:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.042780316 +0000 UTC m=+0.021287954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.154695448 +0000 UTC m=+0.133203116 container init 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.160741478 +0000 UTC m=+0.139249106 container start 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:59:56 compute-0 pedantic_pasteur[429323]: 167 167
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.164750744 +0000 UTC m=+0.143258372 container attach 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:59:56 compute-0 systemd[1]: libpod-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope: Deactivated successfully.
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.166908641 +0000 UTC m=+0.145416269 container died 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 14:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1532678f27fd4fa9a72350c65d3906fec359699e1c1bc49167ce332d057521d3-merged.mount: Deactivated successfully.
Oct 07 14:59:56 compute-0 nova_compute[259550]: 2025-10-07 14:59:56.203 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 14:59:56 compute-0 nova_compute[259550]: 2025-10-07 14:59:56.205 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 14:59:56 compute-0 podman[429306]: 2025-10-07 14:59:56.222179794 +0000 UTC m=+0.200687422 container remove 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 14:59:56 compute-0 nova_compute[259550]: 2025-10-07 14:59:56.223 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 14:59:56 compute-0 systemd[1]: libpod-conmon-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope: Deactivated successfully.
Oct 07 14:59:56 compute-0 podman[429347]: 2025-10-07 14:59:56.377090654 +0000 UTC m=+0.037739180 container create 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 14:59:56 compute-0 systemd[1]: Started libpod-conmon-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope.
Oct 07 14:59:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 14:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:56 compute-0 podman[429347]: 2025-10-07 14:59:56.361779949 +0000 UTC m=+0.022428495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 14:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 14:59:56 compute-0 podman[429347]: 2025-10-07 14:59:56.477011568 +0000 UTC m=+0.137660124 container init 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 14:59:56 compute-0 podman[429347]: 2025-10-07 14:59:56.484584189 +0000 UTC m=+0.145232715 container start 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 14:59:56 compute-0 podman[429347]: 2025-10-07 14:59:56.491966184 +0000 UTC m=+0.152614710 container attach 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 14:59:56 compute-0 ceph-mon[74295]: pgmap v2930: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 07 14:59:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.875996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196876285, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 503, "num_deletes": 257, "total_data_size": 420171, "memory_usage": 429656, "flush_reason": "Manual Compaction"}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196881960, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 416094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61095, "largest_seqno": 61597, "table_properties": {"data_size": 413308, "index_size": 757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6642, "raw_average_key_size": 18, "raw_value_size": 407658, "raw_average_value_size": 1129, "num_data_blocks": 35, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849167, "oldest_key_time": 1759849167, "file_creation_time": 1759849196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 6009 microseconds, and 2334 cpu microseconds.
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.882013) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 416094 bytes OK
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.882034) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883486) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883500) EVENT_LOG_v1 {"time_micros": 1759849196883496, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 417209, prev total WAL file size 417209, number of live WAL files 2.
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.884076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(406KB)], [143(9594KB)]
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196884148, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10241107, "oldest_snapshot_seqno": -1}
Oct 07 14:59:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7906 keys, 10129329 bytes, temperature: kUnknown
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196963213, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10129329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10078130, "index_size": 30350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 207880, "raw_average_key_size": 26, "raw_value_size": 9938710, "raw_average_value_size": 1257, "num_data_blocks": 1178, "num_entries": 7906, "num_filter_entries": 7906, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.963408) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10129329 bytes
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.969219) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.4 rd, 128.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(49.0) write-amplify(24.3) OK, records in: 8430, records dropped: 524 output_compression: NoCompression
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.969239) EVENT_LOG_v1 {"time_micros": 1759849196969230, "job": 88, "event": "compaction_finished", "compaction_time_micros": 79116, "compaction_time_cpu_micros": 35192, "output_level": 6, "num_output_files": 1, "total_output_size": 10129329, "num_input_records": 8430, "num_output_records": 7906, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196969422, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196971154, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 14:59:57 compute-0 determined_leavitt[429364]: {
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_id": 2,
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "type": "bluestore"
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     },
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_id": 1,
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "type": "bluestore"
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     },
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_id": 0,
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:         "type": "bluestore"
Oct 07 14:59:57 compute-0 determined_leavitt[429364]:     }
Oct 07 14:59:57 compute-0 determined_leavitt[429364]: }
Oct 07 14:59:57 compute-0 systemd[1]: libpod-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Deactivated successfully.
Oct 07 14:59:57 compute-0 systemd[1]: libpod-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Consumed 1.013s CPU time.
Oct 07 14:59:57 compute-0 podman[429347]: 2025-10-07 14:59:57.499012305 +0000 UTC m=+1.159660831 container died 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 14:59:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44-merged.mount: Deactivated successfully.
Oct 07 14:59:57 compute-0 podman[429347]: 2025-10-07 14:59:57.564431537 +0000 UTC m=+1.225080063 container remove 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 14:59:57 compute-0 systemd[1]: libpod-conmon-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Deactivated successfully.
Oct 07 14:59:57 compute-0 sudo[429241]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 14:59:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 14:59:57 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 97819306-58ca-4909-9a4b-248c36972704 does not exist
Oct 07 14:59:57 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c4a142a5-7ec5-452b-9704-c8432c8d2778 does not exist
Oct 07 14:59:57 compute-0 sudo[429410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 14:59:57 compute-0 sudo[429410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:57 compute-0 sudo[429410]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:57 compute-0 sudo[429435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 14:59:57 compute-0 sudo[429435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 14:59:57 compute-0 sudo[429435]: pam_unix(sudo:session): session closed for user root
Oct 07 14:59:58 compute-0 ceph-mon[74295]: pgmap v2931: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 07 14:59:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 14:59:58 compute-0 nova_compute[259550]: 2025-10-07 14:59:58.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 14:59:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Oct 07 14:59:59 compute-0 nova_compute[259550]: 2025-10-07 14:59:59.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.098 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:00 compute-0 ceph-mon[74295]: pgmap v2932: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Oct 07 15:00:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 15:00:01 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:02 compute-0 ceph-mon[74295]: pgmap v2933: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 07 15:00:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 15:00:03 compute-0 podman[429461]: 2025-10-07 15:00:03.080104509 +0000 UTC m=+0.065461994 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:00:03 compute-0 podman[429460]: 2025-10-07 15:00:03.080187521 +0000 UTC m=+0.068751980 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:00:03 compute-0 nova_compute[259550]: 2025-10-07 15:00:03.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:03 compute-0 ceph-mon[74295]: pgmap v2934: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 07 15:00:04 compute-0 nova_compute[259550]: 2025-10-07 15:00:04.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 91 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 465 KiB/s wr, 77 op/s
Oct 07 15:00:05 compute-0 nova_compute[259550]: 2025-10-07 15:00:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:06 compute-0 ovn_controller[151684]: 2025-10-07T15:00:06Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:68:98 10.100.0.8
Oct 07 15:00:06 compute-0 ovn_controller[151684]: 2025-10-07T15:00:06Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:68:98 10.100.0.8
Oct 07 15:00:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct 07 15:00:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:07 compute-0 ceph-mon[74295]: pgmap v2935: 305 pgs: 305 active+clean; 91 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 465 KiB/s wr, 77 op/s
Oct 07 15:00:08 compute-0 ceph-mon[74295]: pgmap v2936: 305 pgs: 305 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct 07 15:00:08 compute-0 nova_compute[259550]: 2025-10-07 15:00:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 07 15:00:09 compute-0 nova_compute[259550]: 2025-10-07 15:00:09.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:09 compute-0 nova_compute[259550]: 2025-10-07 15:00:09.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:10 compute-0 ceph-mon[74295]: pgmap v2937: 305 pgs: 305 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 07 15:00:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 15:00:10 compute-0 nova_compute[259550]: 2025-10-07 15:00:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:11 compute-0 nova_compute[259550]: 2025-10-07 15:00:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 15:00:12 compute-0 ceph-mon[74295]: pgmap v2938: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 15:00:13 compute-0 nova_compute[259550]: 2025-10-07 15:00:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:14 compute-0 nova_compute[259550]: 2025-10-07 15:00:14.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 15:00:15 compute-0 ceph-mon[74295]: pgmap v2939: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 07 15:00:15 compute-0 nova_compute[259550]: 2025-10-07 15:00:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:15 compute-0 nova_compute[259550]: 2025-10-07 15:00:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:00:16 compute-0 podman[429498]: 2025-10-07 15:00:16.064056119 +0000 UTC m=+0.049755968 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 15:00:16 compute-0 podman[429499]: 2025-10-07 15:00:16.128303739 +0000 UTC m=+0.111667355 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 07 15:00:16 compute-0 ceph-mon[74295]: pgmap v2940: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 07 15:00:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct 07 15:00:16 compute-0 nova_compute[259550]: 2025-10-07 15:00:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:18 compute-0 ceph-mon[74295]: pgmap v2941: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct 07 15:00:18 compute-0 nova_compute[259550]: 2025-10-07 15:00:18.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 775 KiB/s wr, 22 op/s
Oct 07 15:00:19 compute-0 nova_compute[259550]: 2025-10-07 15:00:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:20 compute-0 ceph-mon[74295]: pgmap v2942: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 775 KiB/s wr, 22 op/s
Oct 07 15:00:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 56 KiB/s wr, 12 op/s
Oct 07 15:00:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:22 compute-0 ceph-mon[74295]: pgmap v2943: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 56 KiB/s wr, 12 op/s
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:00:22
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'vms']
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:00:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:00:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:00:23 compute-0 nova_compute[259550]: 2025-10-07 15:00:23.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:24 compute-0 ceph-mon[74295]: pgmap v2944: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 07 15:00:24 compute-0 ovn_controller[151684]: 2025-10-07T15:00:24Z|01651|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 07 15:00:24 compute-0 nova_compute[259550]: 2025-10-07 15:00:24.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 07 15:00:25 compute-0 ceph-mon[74295]: pgmap v2945: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 07 15:00:25 compute-0 nova_compute[259550]: 2025-10-07 15:00:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 07 15:00:26 compute-0 nova_compute[259550]: 2025-10-07 15:00:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.025 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.027 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:00:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712701967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.478 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.576 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.577 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.813 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3420MB free_disk=59.942745208740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.888 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a6b18035-4aef-4825-90e6-799173979626 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:00:27 compute-0 nova_compute[259550]: 2025-10-07 15:00:27.933 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:28 compute-0 ceph-mon[74295]: pgmap v2946: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 07 15:00:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3712701967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:00:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550013779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.506 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.530 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.551 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.551 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:28 compute-0 nova_compute[259550]: 2025-10-07 15:00:28.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 19 KiB/s wr, 1 op/s
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.223 2 DEBUG nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.272 2 INFO nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] instance snapshotting
Oct 07 15:00:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3550013779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.547 2 INFO nova.virt.libvirt.driver [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Beginning live snapshot process
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.692 2 DEBUG nova.virt.libvirt.imagebackend [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:29 compute-0 nova_compute[259550]: 2025-10-07 15:00:29.938 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(f3fa92b8d0f948688249e80b56b0cd9a) on rbd image(a6b18035-4aef-4825-90e6-799173979626_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 15:00:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 07 15:00:30 compute-0 ceph-mon[74295]: pgmap v2947: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 19 KiB/s wr, 1 op/s
Oct 07 15:00:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 07 15:00:30 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 07 15:00:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 KiB/s rd, 23 KiB/s wr, 4 op/s
Oct 07 15:00:30 compute-0 nova_compute[259550]: 2025-10-07 15:00:30.967 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning vms/a6b18035-4aef-4825-90e6-799173979626_disk@f3fa92b8d0f948688249e80b56b0cd9a to images/a45275a6-57fe-4099-b442-be8d2cb87827 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.121 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] flattening images/a45275a6-57fe-4099-b442-be8d2cb87827 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 15:00:31 compute-0 ceph-mon[74295]: osdmap e285: 3 total, 3 up, 3 in
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.602 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:00:31 compute-0 nova_compute[259550]: 2025-10-07 15:00:31.621 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] removing snapshot(f3fa92b8d0f948688249e80b56b0cd9a) on rbd image(a6b18035-4aef-4825-90e6-799173979626_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 15:00:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 07 15:00:32 compute-0 ceph-mon[74295]: pgmap v2949: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 KiB/s rd, 23 KiB/s wr, 4 op/s
Oct 07 15:00:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 07 15:00:32 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 07 15:00:32 compute-0 nova_compute[259550]: 2025-10-07 15:00:32.701 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(snap) on rbd image(a45275a6-57fe-4099-b442-be8d2cb87827) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 15:00:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:00:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:00:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:00:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007603540934101713 of space, bias 1.0, pg target 0.2281062280230514 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:00:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Oct 07 15:00:33 compute-0 nova_compute[259550]: 2025-10-07 15:00:33.487 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:00:33 compute-0 nova_compute[259550]: 2025-10-07 15:00:33.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:00:33 compute-0 nova_compute[259550]: 2025-10-07 15:00:33.506 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 15:00:33 compute-0 nova_compute[259550]: 2025-10-07 15:00:33.507 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:00:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 07 15:00:33 compute-0 nova_compute[259550]: 2025-10-07 15:00:33.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 07 15:00:33 compute-0 ceph-mon[74295]: osdmap e286: 3 total, 3 up, 3 in
Oct 07 15:00:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:00:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:00:33 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 07 15:00:34 compute-0 podman[429730]: 2025-10-07 15:00:34.094650808 +0000 UTC m=+0.056629549 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 07 15:00:34 compute-0 podman[429731]: 2025-10-07 15:00:34.094567956 +0000 UTC m=+0.056176518 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible)
Oct 07 15:00:34 compute-0 ceph-mon[74295]: pgmap v2951: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Oct 07 15:00:34 compute-0 ceph-mon[74295]: osdmap e287: 3 total, 3 up, 3 in
Oct 07 15:00:34 compute-0 nova_compute[259550]: 2025-10-07 15:00:34.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 124 op/s
Oct 07 15:00:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:00:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1355 writes, 6386 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
                                           Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     70.2      1.07              0.23        44    0.024       0      0       0.0       0.0
                                             L6      1/0    9.66 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7    138.0    116.6      3.05              1.02        43    0.071    274K    23K       0.0       0.0
                                            Sum      1/0    9.66 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7    102.2    104.6      4.12              1.25        87    0.047    274K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2     82.5     84.8      0.76              0.19        12    0.063     49K   3011       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    138.0    116.6      3.05              1.02        43    0.071    274K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     70.4      1.06              0.23        43    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.073, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 4.1 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 47.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000743 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3105,46.01 MB,15.1339%) FilterBlock(88,762.61 KB,0.244979%) IndexBlock(88,1.23 MB,0.405507%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 15:00:35 compute-0 nova_compute[259550]: 2025-10-07 15:00:35.642 2 INFO nova.virt.libvirt.driver [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Snapshot image upload complete
Oct 07 15:00:35 compute-0 nova_compute[259550]: 2025-10-07 15:00:35.643 2 INFO nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 6.37 seconds to snapshot the instance on the hypervisor.
Oct 07 15:00:35 compute-0 ceph-mon[74295]: pgmap v2953: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 124 op/s
Oct 07 15:00:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 160 op/s
Oct 07 15:00:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:38 compute-0 ceph-mon[74295]: pgmap v2954: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 160 op/s
Oct 07 15:00:38 compute-0 nova_compute[259550]: 2025-10-07 15:00:38.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 131 op/s
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.611 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.611 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.629 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.694 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.694 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.702 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.702 2 INFO nova.compute.claims [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Claim successful on node compute-0.ctlplane.example.com
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.852 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:39 compute-0 nova_compute[259550]: 2025-10-07 15:00:39.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:40 compute-0 ceph-mon[74295]: pgmap v2955: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 131 op/s
Oct 07 15:00:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:00:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994964867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.341 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.349 2 DEBUG nova.compute.provider_tree [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.390 2 DEBUG nova.scheduler.client.report [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.432 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.433 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.486 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.486 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.506 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.531 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.626 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.627 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.628 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating image(s)
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.657 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.687 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.714 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.718 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "bbd3df744fb203e041adee356c0511b079943c80" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:40 compute-0 nova_compute[259550]: 2025-10-07 15:00:40.719 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "bbd3df744fb203e041adee356c0511b079943c80" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 125 op/s
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.000 2 DEBUG nova.virt.libvirt.imagebackend [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.046 2 DEBUG nova.virt.libvirt.imagebackend [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.047 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning images/a45275a6-57fe-4099-b442-be8d2cb87827@snap to None/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 15:00:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1994964867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.128 2 DEBUG nova.policy [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a7552cec1354175be418fba9a7588af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.209 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "bbd3df744fb203e041adee356c0511b079943c80" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.430 2 DEBUG nova.objects.instance [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.542 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Ensure instance console log exists: /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:41 compute-0 nova_compute[259550]: 2025-10-07 15:00:41.544 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 07 15:00:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 07 15:00:42 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 07 15:00:42 compute-0 ceph-mon[74295]: pgmap v2956: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 125 op/s
Oct 07 15:00:42 compute-0 ceph-mon[74295]: osdmap e288: 3 total, 3 up, 3 in
Oct 07 15:00:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 829 KiB/s wr, 36 op/s
Oct 07 15:00:43 compute-0 nova_compute[259550]: 2025-10-07 15:00:43.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:43.172 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:00:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:43.173 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:00:43 compute-0 nova_compute[259550]: 2025-10-07 15:00:43.290 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Successfully created port: fc3eccef-04e2-406f-b8a2-3b5015d4e10a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 07 15:00:43 compute-0 nova_compute[259550]: 2025-10-07 15:00:43.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:44 compute-0 ceph-mon[74295]: pgmap v2958: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 829 KiB/s wr, 36 op/s
Oct 07 15:00:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:44.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.192 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Successfully updated port: fc3eccef-04e2-406f-b8a2-3b5015d4e10a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.312 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.313 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.313 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG nova.compute.manager [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG nova.compute.manager [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.637 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 15:00:44 compute-0 nova_compute[259550]: 2025-10-07 15:00:44.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 755 KiB/s wr, 47 op/s
Oct 07 15:00:46 compute-0 ceph-mon[74295]: pgmap v2959: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 755 KiB/s wr, 47 op/s
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.512 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.534 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance network_info: |[{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.538 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start _get_guest_xml network_info=[{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T15:00:29Z,direct_url=<?>,disk_format='raw',id=a45275a6-57fe-4099-b442-be8d2cb87827,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1465212491',owner='aa7bd91eb3b040c89929aa23c9775dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T15:00:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'a45275a6-57fe-4099-b442-be8d2cb87827'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.542 2 WARNING nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.547 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.547 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.552 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.553 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T15:00:29Z,direct_url=<?>,disk_format='raw',id=a45275a6-57fe-4099-b442-be8d2cb87827,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1465212491',owner='aa7bd91eb3b040c89929aa23c9775dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T15:00:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.557 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 15:00:46 compute-0 nova_compute[259550]: 2025-10-07 15:00:46.559 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct 07 15:00:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 15:00:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204450414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.002 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.026 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.030 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:47 compute-0 podman[429987]: 2025-10-07 15:00:47.064515035 +0000 UTC m=+0.058822368 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:00:47 compute-0 podman[429988]: 2025-10-07 15:00:47.093945664 +0000 UTC m=+0.084821656 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 07 15:00:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4204450414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:00:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 15:00:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122746718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.501 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.503 2 DEBUG nova.virt.libvirt.vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T15:00:40Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da6205-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.503 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.504 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.505 2 DEBUG nova.objects.instance [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.521 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] End _get_guest_xml xml=<domain type="kvm">
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <uuid>98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</uuid>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <name>instance-00000098</name>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <metadata>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:name>tempest-TestSnapshotPattern-server-257341137</nova:name>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 15:00:46</nova:creationTime>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:user uuid="1a7552cec1354175be418fba9a7588af">tempest-TestSnapshotPattern-1480624877-project-member</nova:user>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:project uuid="aa7bd91eb3b040c89929aa23c9775dc9">tempest-TestSnapshotPattern-1480624877</nova:project>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="a45275a6-57fe-4099-b442-be8d2cb87827"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <nova:ports>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <nova:port uuid="fc3eccef-04e2-406f-b8a2-3b5015d4e10a">
Oct 07 15:00:47 compute-0 nova_compute[259550]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:         </nova:port>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </nova:ports>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </metadata>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <system>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="serial">98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="uuid">98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </system>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <os>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </os>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <features>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <apic/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </features>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </clock>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </cpu>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   <devices>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk">
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </source>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </auth>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </disk>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config">
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </source>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 15:00:47 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       </auth>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </disk>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <interface type="ethernet">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <mac address="fa:16:3e:0b:41:f9"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <driver name="vhost" rx_queue_size="512"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <mtu size="1442"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <target dev="tapfc3eccef-04"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </interface>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/console.log" append="off"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </serial>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <video>
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </video>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <input type="keyboard" bus="usb"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </rng>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 15:00:47 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 15:00:47 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 15:00:47 compute-0 nova_compute[259550]:   </devices>
Oct 07 15:00:47 compute-0 nova_compute[259550]: </domain>
Oct 07 15:00:47 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.523 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Preparing to wait for external event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.523 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG nova.virt.libvirt.vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T15:00:40Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da6205-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG os_vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc3eccef-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc3eccef-04, col_values=(('external_ids', {'iface-id': 'fc3eccef-04e2-406f-b8a2-3b5015d4e10a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:41:f9', 'vm-uuid': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:47 compute-0 NetworkManager[44949]: <info>  [1759849247.5332] manager: (tapfc3eccef-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.539 2 INFO os_vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04')
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.600 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.601 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.601 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No VIF found with MAC fa:16:3e:0b:41:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.601 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Using config drive
Oct 07 15:00:47 compute-0 nova_compute[259550]: 2025-10-07 15:00:47.624 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:48 compute-0 ceph-mon[74295]: pgmap v2960: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct 07 15:00:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/122746718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.503 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating config drive at /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.508 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaag5z_gm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.648 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaag5z_gm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.676 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.680 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.747 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.749 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.816 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.886 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.886 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deleting local config drive /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config because it was imported into RBD.
Oct 07 15:00:48 compute-0 kernel: tapfc3eccef-04: entered promiscuous mode
Oct 07 15:00:48 compute-0 ovn_controller[151684]: 2025-10-07T15:00:48Z|01652|binding|INFO|Claiming lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a for this chassis.
Oct 07 15:00:48 compute-0 ovn_controller[151684]: 2025-10-07T15:00:48Z|01653|binding|INFO|fc3eccef-04e2-406f-b8a2-3b5015d4e10a: Claiming fa:16:3e:0b:41:f9 10.100.0.4
Oct 07 15:00:48 compute-0 NetworkManager[44949]: <info>  [1759849248.9534] manager: (tapfc3eccef-04): new Tun device (/org/freedesktop/NetworkManager/Devices/671)
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Oct 07 15:00:48 compute-0 ovn_controller[151684]: 2025-10-07T15:00:48Z|01654|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a ovn-installed in OVS
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:48 compute-0 ovn_controller[151684]: 2025-10-07T15:00:48Z|01655|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a up in Southbound
Oct 07 15:00:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.978 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:41:f9 10.100.0.4'], port_security=['fa:16:3e:0b:41:f9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fc3eccef-04e2-406f-b8a2-3b5015d4e10a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:00:48 compute-0 nova_compute[259550]: 2025-10-07 15:00:48.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.979 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fc3eccef-04e2-406f-b8a2-3b5015d4e10a in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 bound to our chassis
Oct 07 15:00:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.981 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6
Oct 07 15:00:48 compute-0 systemd-machined[214580]: New machine qemu-186-instance-00000098.
Oct 07 15:00:48 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.996 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca2c21-7ea4-4598-8af0-0ce30378d114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 systemd-udevd[430144]: Network interface NamePolicy= disabled on kernel command line.
Oct 07 15:00:49 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Oct 07 15:00:49 compute-0 NetworkManager[44949]: <info>  [1759849249.0246] device (tapfc3eccef-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 07 15:00:49 compute-0 NetworkManager[44949]: <info>  [1759849249.0263] device (tapfc3eccef-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.044 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[182f630a-8ced-4c28-a80e-c8100da39250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.048 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1105270d-ca2f-4999-ac5d-d374f2a6860b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.077 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9d6d8f-550b-4c2a-a8cd-4562d6b6be82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb1ee3e-216f-40d7-a8f6-87082a0fddae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430154, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69ab2987-885c-429d-a0bf-0d8693f09ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977482, 'tstamp': 977482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430157, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977484, 'tstamp': 977484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430157, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.121 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.125 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:00:49 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.181 2 DEBUG nova.compute.manager [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:49 compute-0 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG nova.compute.manager [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Processing event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.112 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.112383, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.113 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Started (Lifecycle Event)
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.115 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.117 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.121 2 INFO nova.virt.libvirt.driver [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance spawned successfully.
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.121 2 INFO nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 9.49 seconds to spawn the instance on the hypervisor.
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.121 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.136 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.139 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.164 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.165 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.1126544, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.165 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Paused (Lifecycle Event)
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.189 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.192 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.1175108, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.192 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Resumed (Lifecycle Event)
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.201 2 INFO nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 10.53 seconds to build instance.
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 15:00:50 compute-0 nova_compute[259550]: 2025-10-07 15:00:50.227 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:50 compute-0 ceph-mon[74295]: pgmap v2961: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Oct 07 15:00:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 16 KiB/s wr, 37 op/s
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.334 2 DEBUG nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 15:00:51 compute-0 nova_compute[259550]: 2025-10-07 15:00:51.336 2 WARNING nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received unexpected event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with vm_state active and task_state None.
Oct 07 15:00:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:52 compute-0 nova_compute[259550]: 2025-10-07 15:00:52.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:52 compute-0 ceph-mon[74295]: pgmap v2962: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 16 KiB/s wr, 37 op/s
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:00:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 07 15:00:53 compute-0 nova_compute[259550]: 2025-10-07 15:00:53.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:54 compute-0 ceph-mon[74295]: pgmap v2963: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 07 15:00:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 07 15:00:55 compute-0 nova_compute[259550]: 2025-10-07 15:00:55.624 2 DEBUG nova.compute.manager [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:00:55 compute-0 nova_compute[259550]: 2025-10-07 15:00:55.625 2 DEBUG nova.compute.manager [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 15:00:55 compute-0 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:00:55 compute-0 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:00:55 compute-0 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 15:00:56 compute-0 ceph-mon[74295]: pgmap v2964: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 07 15:00:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 92 op/s
Oct 07 15:00:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:00:57 compute-0 nova_compute[259550]: 2025-10-07 15:00:57.345 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 15:00:57 compute-0 nova_compute[259550]: 2025-10-07 15:00:57.347 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:00:57 compute-0 nova_compute[259550]: 2025-10-07 15:00:57.372 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:00:57 compute-0 nova_compute[259550]: 2025-10-07 15:00:57.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:57 compute-0 sudo[430201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:00:57 compute-0 sudo[430201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:57 compute-0 sudo[430201]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:57 compute-0 sudo[430226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:00:57 compute-0 sudo[430226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:57 compute-0 sudo[430226]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:57 compute-0 sudo[430251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:00:57 compute-0 sudo[430251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:57 compute-0 sudo[430251]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:58 compute-0 sudo[430276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:00:58 compute-0 sudo[430276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:58 compute-0 sudo[430276]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:00:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3b55c3b4-4536-49ae-9880-8bff6bae950c does not exist
Oct 07 15:00:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bab1311a-4b66-4dd6-9ce7-d140c9b6aa15 does not exist
Oct 07 15:00:58 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3cb9ec95-f4e9-4e44-87dd-d2eb82480044 does not exist
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:00:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: pgmap v2965: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 92 op/s
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:00:58 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:00:58 compute-0 sudo[430331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:00:58 compute-0 sudo[430331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:58 compute-0 sudo[430331]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:58 compute-0 sudo[430356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:00:58 compute-0 sudo[430356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:58 compute-0 sudo[430356]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:58 compute-0 sudo[430381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:00:58 compute-0 sudo[430381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:58 compute-0 sudo[430381]: pam_unix(sudo:session): session closed for user root
Oct 07 15:00:58 compute-0 nova_compute[259550]: 2025-10-07 15:00:58.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:00:58 compute-0 sudo[430406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:00:58 compute-0 sudo[430406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:00:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.146257507 +0000 UTC m=+0.046167783 container create c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:00:59 compute-0 systemd[1]: Started libpod-conmon-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope.
Oct 07 15:00:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.122396136 +0000 UTC m=+0.022306442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.222744622 +0000 UTC m=+0.122654938 container init c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.231268507 +0000 UTC m=+0.131178783 container start c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.237077191 +0000 UTC m=+0.136987517 container attach c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:00:59 compute-0 wonderful_blackburn[430489]: 167 167
Oct 07 15:00:59 compute-0 systemd[1]: libpod-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope: Deactivated successfully.
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.238720624 +0000 UTC m=+0.138630910 container died c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-460852c0455135d6704b58f5311c9ed0ca953c7db7a6d6ca6a7f90118426d610-merged.mount: Deactivated successfully.
Oct 07 15:00:59 compute-0 podman[430472]: 2025-10-07 15:00:59.288002418 +0000 UTC m=+0.187912694 container remove c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:00:59 compute-0 systemd[1]: libpod-conmon-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope: Deactivated successfully.
Oct 07 15:00:59 compute-0 podman[430513]: 2025-10-07 15:00:59.480666558 +0000 UTC m=+0.045768443 container create d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 15:00:59 compute-0 systemd[1]: Started libpod-conmon-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope.
Oct 07 15:00:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:00:59 compute-0 podman[430513]: 2025-10-07 15:00:59.459792785 +0000 UTC m=+0.024894640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:00:59 compute-0 podman[430513]: 2025-10-07 15:00:59.58166075 +0000 UTC m=+0.146762615 container init d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:00:59 compute-0 podman[430513]: 2025-10-07 15:00:59.589314423 +0000 UTC m=+0.154416268 container start d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:00:59 compute-0 podman[430513]: 2025-10-07 15:00:59.595146928 +0000 UTC m=+0.160248803 container attach d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:01:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.101 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:00 compute-0 compassionate_mayer[430529]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:01:00 compute-0 compassionate_mayer[430529]: --> relative data size: 1.0
Oct 07 15:01:00 compute-0 compassionate_mayer[430529]: --> All data devices are unavailable
Oct 07 15:01:00 compute-0 systemd[1]: libpod-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Deactivated successfully.
Oct 07 15:01:00 compute-0 systemd[1]: libpod-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Consumed 1.022s CPU time.
Oct 07 15:01:00 compute-0 podman[430558]: 2025-10-07 15:01:00.725957274 +0000 UTC m=+0.030784995 container died d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:01:00 compute-0 ceph-mon[74295]: pgmap v2966: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 07 15:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce-merged.mount: Deactivated successfully.
Oct 07 15:01:00 compute-0 podman[430558]: 2025-10-07 15:01:00.798375801 +0000 UTC m=+0.103203512 container remove d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 07 15:01:00 compute-0 systemd[1]: libpod-conmon-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Deactivated successfully.
Oct 07 15:01:00 compute-0 sudo[430406]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:00 compute-0 sudo[430572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:01:00 compute-0 sudo[430572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:00 compute-0 sudo[430572]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:00 compute-0 sudo[430597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:01:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 07 15:01:00 compute-0 sudo[430597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:00 compute-0 sudo[430597]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:01 compute-0 sudo[430622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:01:01 compute-0 sudo[430622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:01 compute-0 sudo[430622]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:01 compute-0 sudo[430647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:01:01 compute-0 CROND[430672]: (root) CMD (run-parts /etc/cron.hourly)
Oct 07 15:01:01 compute-0 sudo[430647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:01 compute-0 run-parts[430676]: (/etc/cron.hourly) starting 0anacron
Oct 07 15:01:01 compute-0 run-parts[430682]: (/etc/cron.hourly) finished 0anacron
Oct 07 15:01:01 compute-0 CROND[430671]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.440185467 +0000 UTC m=+0.051437173 container create 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 07 15:01:01 compute-0 systemd[1]: Started libpod-conmon-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope.
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.411322622 +0000 UTC m=+0.022574328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:01:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.534702158 +0000 UTC m=+0.145953884 container init 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.545912614 +0000 UTC m=+0.157164320 container start 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.54988666 +0000 UTC m=+0.161138396 container attach 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:01:01 compute-0 systemd[1]: libpod-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope: Deactivated successfully.
Oct 07 15:01:01 compute-0 eloquent_panini[430742]: 167 167
Oct 07 15:01:01 compute-0 conmon[430742]: conmon 0be1bab65f6cdfe77b61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope/container/memory.events
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.554996505 +0000 UTC m=+0.166248211 container died 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dbe514f764c418a612ab73491261f5a0111ead3919308013295c60ac0f83ec1-merged.mount: Deactivated successfully.
Oct 07 15:01:01 compute-0 podman[430726]: 2025-10-07 15:01:01.600902299 +0000 UTC m=+0.212154005 container remove 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 07 15:01:01 compute-0 systemd[1]: libpod-conmon-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope: Deactivated successfully.
Oct 07 15:01:01 compute-0 ceph-mon[74295]: pgmap v2967: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 07 15:01:01 compute-0 podman[430765]: 2025-10-07 15:01:01.796365383 +0000 UTC m=+0.042479585 container create 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:01:01 compute-0 systemd[1]: Started libpod-conmon-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope.
Oct 07 15:01:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:01 compute-0 podman[430765]: 2025-10-07 15:01:01.781388336 +0000 UTC m=+0.027502558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:01:01 compute-0 podman[430765]: 2025-10-07 15:01:01.879198915 +0000 UTC m=+0.125313137 container init 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 15:01:01 compute-0 podman[430765]: 2025-10-07 15:01:01.884868305 +0000 UTC m=+0.130982507 container start 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:01:01 compute-0 podman[430765]: 2025-10-07 15:01:01.893987956 +0000 UTC m=+0.140102158 container attach 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 15:01:01 compute-0 ovn_controller[151684]: 2025-10-07T15:01:01Z|00210|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.4
Oct 07 15:01:01 compute-0 ovn_controller[151684]: 2025-10-07T15:01:01Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0b:41:f9 10.100.0.4
Oct 07 15:01:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:02 compute-0 nova_compute[259550]: 2025-10-07 15:01:02.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]: {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     "0": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "devices": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "/dev/loop3"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             ],
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_name": "ceph_lv0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_size": "21470642176",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "name": "ceph_lv0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "tags": {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_name": "ceph",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.crush_device_class": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.encrypted": "0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_id": "0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.vdo": "0"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             },
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "vg_name": "ceph_vg0"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         }
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     ],
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     "1": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "devices": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "/dev/loop4"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             ],
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_name": "ceph_lv1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_size": "21470642176",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "name": "ceph_lv1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "tags": {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_name": "ceph",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.crush_device_class": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.encrypted": "0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_id": "1",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.vdo": "0"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             },
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "vg_name": "ceph_vg1"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         }
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     ],
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     "2": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "devices": [
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "/dev/loop5"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             ],
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_name": "ceph_lv2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_size": "21470642176",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "name": "ceph_lv2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "tags": {
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.cluster_name": "ceph",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.crush_device_class": "",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.encrypted": "0",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osd_id": "2",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:                 "ceph.vdo": "0"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             },
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "type": "block",
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:             "vg_name": "ceph_vg2"
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:         }
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]:     ]
Oct 07 15:01:02 compute-0 gifted_rosalind[430782]: }
Oct 07 15:01:02 compute-0 systemd[1]: libpod-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope: Deactivated successfully.
Oct 07 15:01:02 compute-0 podman[430765]: 2025-10-07 15:01:02.665573896 +0000 UTC m=+0.911688118 container died 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:01:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959-merged.mount: Deactivated successfully.
Oct 07 15:01:02 compute-0 podman[430765]: 2025-10-07 15:01:02.748295205 +0000 UTC m=+0.994409407 container remove 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:01:02 compute-0 systemd[1]: libpod-conmon-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope: Deactivated successfully.
Oct 07 15:01:02 compute-0 sudo[430647]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:02 compute-0 sudo[430803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:01:02 compute-0 sudo[430803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:02 compute-0 sudo[430803]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:02 compute-0 sudo[430828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:01:02 compute-0 sudo[430828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:02 compute-0 sudo[430828]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Oct 07 15:01:02 compute-0 sudo[430853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:01:02 compute-0 sudo[430853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:02 compute-0 sudo[430853]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:03 compute-0 sudo[430878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:01:03 compute-0 sudo[430878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.358493935 +0000 UTC m=+0.043835682 container create c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 15:01:03 compute-0 systemd[1]: Started libpod-conmon-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope.
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.338769583 +0000 UTC m=+0.024111350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:01:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.460105573 +0000 UTC m=+0.145447330 container init c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.469209985 +0000 UTC m=+0.154551732 container start c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:01:03 compute-0 great_mirzakhani[430962]: 167 167
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.474290928 +0000 UTC m=+0.159632715 container attach c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:01:03 compute-0 systemd[1]: libpod-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope: Deactivated successfully.
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.475123681 +0000 UTC m=+0.160465448 container died c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:01:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-144038a8cc0fcd0a34379eff5d8fb04a35af8e86d27b3a798bcf7bec5914614a-merged.mount: Deactivated successfully.
Oct 07 15:01:03 compute-0 podman[430945]: 2025-10-07 15:01:03.524502687 +0000 UTC m=+0.209844444 container remove c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:01:03 compute-0 systemd[1]: libpod-conmon-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope: Deactivated successfully.
Oct 07 15:01:03 compute-0 nova_compute[259550]: 2025-10-07 15:01:03.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:03 compute-0 podman[430987]: 2025-10-07 15:01:03.766993695 +0000 UTC m=+0.063206594 container create 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:01:03 compute-0 systemd[1]: Started libpod-conmon-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope.
Oct 07 15:01:03 compute-0 podman[430987]: 2025-10-07 15:01:03.739828006 +0000 UTC m=+0.036040995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:01:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:01:03 compute-0 podman[430987]: 2025-10-07 15:01:03.865276017 +0000 UTC m=+0.161489006 container init 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 15:01:03 compute-0 podman[430987]: 2025-10-07 15:01:03.878971888 +0000 UTC m=+0.175184837 container start 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:01:03 compute-0 podman[430987]: 2025-10-07 15:01:03.883372796 +0000 UTC m=+0.179585735 container attach 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:01:04 compute-0 ceph-mon[74295]: pgmap v2968: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]: {
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_id": 2,
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "type": "bluestore"
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     },
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_id": 1,
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "type": "bluestore"
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     },
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_id": 0,
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:         "type": "bluestore"
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]:     }
Oct 07 15:01:04 compute-0 hardcore_shirley[431003]: }
Oct 07 15:01:04 compute-0 systemd[1]: libpod-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope: Deactivated successfully.
Oct 07 15:01:04 compute-0 podman[430987]: 2025-10-07 15:01:04.844355738 +0000 UTC m=+1.140568637 container died 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:01:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03-merged.mount: Deactivated successfully.
Oct 07 15:01:04 compute-0 podman[430987]: 2025-10-07 15:01:04.911857144 +0000 UTC m=+1.208070033 container remove 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:01:04 compute-0 systemd[1]: libpod-conmon-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope: Deactivated successfully.
Oct 07 15:01:04 compute-0 sudo[430878]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:01:04 compute-0 podman[431047]: 2025-10-07 15:01:04.949303945 +0000 UTC m=+0.065946336 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:01:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:01:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:01:04 compute-0 podman[431036]: 2025-10-07 15:01:04.954504053 +0000 UTC m=+0.073272321 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:01:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:01:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b83f5c80-4e69-48b5-b0a6-f3e20e5b2371 does not exist
Oct 07 15:01:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f4a3e2ce-6515-4a58-80d9-e9d5f7dad699 does not exist
Oct 07 15:01:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 65 KiB/s wr, 100 op/s
Oct 07 15:01:04 compute-0 ovn_controller[151684]: 2025-10-07T15:01:04Z|00212|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.4
Oct 07 15:01:04 compute-0 ovn_controller[151684]: 2025-10-07T15:01:04Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0b:41:f9 10.100.0.4
Oct 07 15:01:05 compute-0 sudo[431090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:01:05 compute-0 sudo[431090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:05 compute-0 sudo[431090]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:05 compute-0 sudo[431117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:01:05 compute-0 sudo[431117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:01:05 compute-0 sudo[431117]: pam_unix(sudo:session): session closed for user root
Oct 07 15:01:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:01:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:01:05 compute-0 ceph-mon[74295]: pgmap v2969: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 65 KiB/s wr, 100 op/s
Oct 07 15:01:06 compute-0 ovn_controller[151684]: 2025-10-07T15:01:06Z|00214|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:41:f9 10.100.0.4
Oct 07 15:01:06 compute-0 ovn_controller[151684]: 2025-10-07T15:01:06Z|00215|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:41:f9 10.100.0.4
Oct 07 15:01:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 500 KiB/s wr, 65 op/s
Oct 07 15:01:06 compute-0 nova_compute[259550]: 2025-10-07 15:01:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:07 compute-0 nova_compute[259550]: 2025-10-07 15:01:07.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:08 compute-0 ceph-mon[74295]: pgmap v2970: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 500 KiB/s wr, 65 op/s
Oct 07 15:01:08 compute-0 nova_compute[259550]: 2025-10-07 15:01:08.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct 07 15:01:10 compute-0 ceph-mon[74295]: pgmap v2971: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct 07 15:01:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct 07 15:01:10 compute-0 nova_compute[259550]: 2025-10-07 15:01:10.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:11 compute-0 nova_compute[259550]: 2025-10-07 15:01:11.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:12 compute-0 ceph-mon[74295]: pgmap v2972: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct 07 15:01:12 compute-0 nova_compute[259550]: 2025-10-07 15:01:12.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 07 15:01:12 compute-0 nova_compute[259550]: 2025-10-07 15:01:12.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:13 compute-0 nova_compute[259550]: 2025-10-07 15:01:13.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:14 compute-0 ceph-mon[74295]: pgmap v2973: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 07 15:01:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 07 15:01:15 compute-0 nova_compute[259550]: 2025-10-07 15:01:15.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:15 compute-0 nova_compute[259550]: 2025-10-07 15:01:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:01:16 compute-0 ceph-mon[74295]: pgmap v2974: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 07 15:01:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 483 KiB/s wr, 26 op/s
Oct 07 15:01:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:17 compute-0 nova_compute[259550]: 2025-10-07 15:01:17.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:18 compute-0 ceph-mon[74295]: pgmap v2975: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 483 KiB/s wr, 26 op/s
Oct 07 15:01:18 compute-0 podman[431142]: 2025-10-07 15:01:18.087720732 +0000 UTC m=+0.064460927 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:01:18 compute-0 podman[431143]: 2025-10-07 15:01:18.121041064 +0000 UTC m=+0.094532432 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 15:01:18 compute-0 nova_compute[259550]: 2025-10-07 15:01:18.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 07 15:01:18 compute-0 nova_compute[259550]: 2025-10-07 15:01:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:20 compute-0 ceph-mon[74295]: pgmap v2976: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 07 15:01:20 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:22 compute-0 ceph-mon[74295]: pgmap v2977: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:22 compute-0 nova_compute[259550]: 2025-10-07 15:01:22.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:01:22
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:01:22 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:01:23 compute-0 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct 07 15:01:23 compute-0 nova_compute[259550]: 2025-10-07 15:01:23.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:24 compute-0 ceph-mon[74295]: pgmap v2978: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:24 compute-0 ovn_controller[151684]: 2025-10-07T15:01:24Z|01656|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 07 15:01:24 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:26 compute-0 ceph-mon[74295]: pgmap v2979: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:26 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:27 compute-0 nova_compute[259550]: 2025-10-07 15:01:27.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:27 compute-0 nova_compute[259550]: 2025-10-07 15:01:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:01:28 compute-0 ceph-mon[74295]: pgmap v2980: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 07 15:01:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:01:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925714958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.482 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.619 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.619 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.785 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.786 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3184MB free_disk=59.9363899230957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.786 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.959 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a6b18035-4aef-4825-90e6-799173979626 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.959 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.960 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:01:28 compute-0 nova_compute[259550]: 2025-10-07 15:01:28.960 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:01:28 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 3.3 KiB/s wr, 5 op/s
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:01:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3925714958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:01:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268660978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.461 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.467 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.505 2 DEBUG nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.543 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.564 2 INFO nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] instance snapshotting
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.664 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.664 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:29 compute-0 nova_compute[259550]: 2025-10-07 15:01:29.784 2 INFO nova.virt.libvirt.driver [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Beginning live snapshot process
Oct 07 15:01:30 compute-0 nova_compute[259550]: 2025-10-07 15:01:30.164 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(08c8dc04d88e46189a248fcc9143d1c9) on rbd image(98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 15:01:30 compute-0 ceph-mon[74295]: pgmap v2981: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 3.3 KiB/s wr, 5 op/s
Oct 07 15:01:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4268660978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:30 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 11 op/s
Oct 07 15:01:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 07 15:01:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 07 15:01:31 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 07 15:01:31 compute-0 nova_compute[259550]: 2025-10-07 15:01:31.254 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk@08c8dc04d88e46189a248fcc9143d1c9 to images/30696525-f806-4144-b1e6-ccc7d3798a90 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 07 15:01:31 compute-0 nova_compute[259550]: 2025-10-07 15:01:31.369 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] flattening images/30696525-f806-4144-b1e6-ccc7d3798a90 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 07 15:01:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.166 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] removing snapshot(08c8dc04d88e46189a248fcc9143d1c9) on rbd image(98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 07 15:01:32 compute-0 ceph-mon[74295]: pgmap v2982: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 11 op/s
Oct 07 15:01:32 compute-0 ceph-mon[74295]: osdmap e289: 3 total, 3 up, 3 in
Oct 07 15:01:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 07 15:01:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 07 15:01:32 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.259 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(snap) on rbd image(30696525-f806-4144-b1e6-ccc7d3798a90) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.666 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.667 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:01:32 compute-0 nova_compute[259550]: 2025-10-07 15:01:32.667 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:01:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:01:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:01:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:01:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008650243984827766 of space, bias 1.0, pg target 0.25950731954483297 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241774923487661 of space, bias 1.0, pg target 0.42725324770462986 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:01:32 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 16 op/s
Oct 07 15:01:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 07 15:01:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 07 15:01:33 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 07 15:01:33 compute-0 ceph-mon[74295]: osdmap e290: 3 total, 3 up, 3 in
Oct 07 15:01:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:01:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:01:33 compute-0 nova_compute[259550]: 2025-10-07 15:01:33.331 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:01:33 compute-0 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:01:33 compute-0 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 07 15:01:33 compute-0 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:01:33 compute-0 nova_compute[259550]: 2025-10-07 15:01:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:34 compute-0 ceph-mon[74295]: pgmap v2985: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 16 op/s
Oct 07 15:01:34 compute-0 ceph-mon[74295]: osdmap e291: 3 total, 3 up, 3 in
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.737 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.762 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.762 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.763 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.897 2 INFO nova.virt.libvirt.driver [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Snapshot image upload complete
Oct 07 15:01:34 compute-0 nova_compute[259550]: 2025-10-07 15:01:34.897 2 INFO nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 5.33 seconds to snapshot the instance on the hypervisor.
Oct 07 15:01:34 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 303 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 160 op/s
Oct 07 15:01:35 compute-0 podman[431374]: 2025-10-07 15:01:35.064541283 +0000 UTC m=+0.053917498 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:01:35 compute-0 podman[431373]: 2025-10-07 15:01:35.064537263 +0000 UTC m=+0.055699385 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:01:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 07 15:01:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 07 15:01:35 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 07 15:01:36 compute-0 ceph-mon[74295]: pgmap v2987: 305 pgs: 305 active+clean; 303 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 160 op/s
Oct 07 15:01:36 compute-0 ceph-mon[74295]: osdmap e292: 3 total, 3 up, 3 in
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.899 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.900 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.902 2 INFO nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Terminating instance
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.903 2 DEBUG nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 15:01:36 compute-0 kernel: tapfc3eccef-04 (unregistering): left promiscuous mode
Oct 07 15:01:36 compute-0 NetworkManager[44949]: <info>  [1759849296.9583] device (tapfc3eccef-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 15:01:36 compute-0 ovn_controller[151684]: 2025-10-07T15:01:36Z|01657|binding|INFO|Releasing lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a from this chassis (sb_readonly=0)
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:36 compute-0 ovn_controller[151684]: 2025-10-07T15:01:36Z|01658|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a down in Southbound
Oct 07 15:01:36 compute-0 ovn_controller[151684]: 2025-10-07T15:01:36Z|01659|binding|INFO|Removing iface tapfc3eccef-04 ovn-installed in OVS
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:36 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 16 MiB/s wr, 182 op/s
Oct 07 15:01:36 compute-0 nova_compute[259550]: 2025-10-07 15:01:36.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:36.999 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:41:f9 10.100.0.4'], port_security=['fa:16:3e:0b:41:f9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fc3eccef-04e2-406f-b8a2-3b5015d4e10a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.000 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fc3eccef-04e2-406f-b8a2-3b5015d4e10a in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 unbound from our chassis
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78810df1-05e6-4f19-bb2e-efcf831f8909]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct 07 15:01:37 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 14.884s CPU time.
Oct 07 15:01:37 compute-0 systemd-machined[214580]: Machine qemu-186-instance-00000098 terminated.
Oct 07 15:01:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.051 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f15ed4e7-a4eb-4115-b6ca-83e1f4186f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.056 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4dfc13-3be5-4d37-9674-1f0e88efb810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.084 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[174ae8ad-de97-49be-b407-96065d2ad264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7ee6c3-7c5f-4624-af6f-34579a662189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431425, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d7f476-8456-4e13-8359-31056b97d9d6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977482, 'tstamp': 977482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977484, 'tstamp': 977484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.119 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.128 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:37 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.145 2 INFO nova.virt.libvirt.driver [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance destroyed successfully.
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.146 2 DEBUG nova.objects.instance [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'resources' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.166 2 DEBUG nova.virt.libvirt.vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=<?>,launch_index=0,launched_at=2025-10-07T15:00:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T15:01:35Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da6205-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.167 2 DEBUG nova.network.os_vif_util [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.168 2 DEBUG nova.network.os_vif_util [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.169 2 DEBUG os_vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc3eccef-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.177 2 DEBUG nova.compute.manager [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.177 2 DEBUG nova.compute.manager [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.181 2 INFO os_vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04')
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.590 2 INFO nova.virt.libvirt.driver [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deleting instance files /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_del
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.591 2 INFO nova.virt.libvirt.driver [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deletion of /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_del complete
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.687 2 INFO nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG oslo.service.loopingcall [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 15:01:37 compute-0 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG nova.network.neutron [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 15:01:38 compute-0 ceph-mon[74295]: pgmap v2989: 305 pgs: 305 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 16 MiB/s wr, 182 op/s
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.315 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.315 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.317 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 15:01:38 compute-0 nova_compute[259550]: 2025-10-07 15:01:38.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:38 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 14 MiB/s wr, 253 op/s
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.239 2 DEBUG nova.network.neutron [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.262 2 INFO nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 1.57 seconds to deallocate network for instance.
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.326 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.327 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.568 2 DEBUG oslo_concurrency.processutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.632 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.633 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:01:39 compute-0 nova_compute[259550]: 2025-10-07 15:01:39.653 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:01:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:01:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176012184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.010 2 DEBUG oslo_concurrency.processutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.016 2 DEBUG nova.compute.provider_tree [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.033 2 DEBUG nova.scheduler.client.report [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.059 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.084 2 INFO nova.scheduler.client.report [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Deleted allocations for instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.145 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:40 compute-0 ceph-mon[74295]: pgmap v2990: 305 pgs: 305 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 14 MiB/s wr, 253 op/s
Oct 07 15:01:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1176012184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.430 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.431 2 WARNING nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received unexpected event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with vm_state deleted and task_state None.
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.432 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-deleted-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.432 2 INFO nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Neutron deleted interface fc3eccef-04e2-406f-b8a2-3b5015d4e10a; detaching it from the instance and deleting it from the info cache
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.432 2 DEBUG nova.network.neutron [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 07 15:01:40 compute-0 nova_compute[259550]: 2025-10-07 15:01:40.435 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Detach interface failed, port_id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a, reason: Instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 07 15:01:40 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 219 op/s
Oct 07 15:01:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 07 15:01:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 07 15:01:41 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 07 15:01:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 07 15:01:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 07 15:01:42 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:42 compute-0 ceph-mon[74295]: pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 219 op/s
Oct 07 15:01:42 compute-0 ceph-mon[74295]: osdmap e293: 3 total, 3 up, 3 in
Oct 07 15:01:42 compute-0 ceph-mon[74295]: osdmap e294: 3 total, 3 up, 3 in
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.781 2 DEBUG nova.compute.manager [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.781 2 DEBUG nova.compute.manager [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.895 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.895 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.896 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.896 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.897 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.898 2 INFO nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Terminating instance
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.899 2 DEBUG nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 15:01:42 compute-0 kernel: tap9e613de1-4d (unregistering): left promiscuous mode
Oct 07 15:01:42 compute-0 NetworkManager[44949]: <info>  [1759849302.9523] device (tap9e613de1-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:42 compute-0 ovn_controller[151684]: 2025-10-07T15:01:42Z|01660|binding|INFO|Releasing lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 from this chassis (sb_readonly=0)
Oct 07 15:01:42 compute-0 ovn_controller[151684]: 2025-10-07T15:01:42Z|01661|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 down in Southbound
Oct 07 15:01:42 compute-0 ovn_controller[151684]: 2025-10-07T15:01:42Z|01662|binding|INFO|Removing iface tap9e613de1-4d ovn-installed in OVS
Oct 07 15:01:42 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 112 op/s
Oct 07 15:01:42 compute-0 nova_compute[259550]: 2025-10-07 15:01:42.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 07 15:01:43 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 17.809s CPU time.
Oct 07 15:01:43 compute-0 systemd-machined[214580]: Machine qemu-185-instance-00000097 terminated.
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.137 2 INFO nova.virt.libvirt.driver [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance destroyed successfully.
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.139 2 DEBUG nova.objects.instance [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'resources' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.176 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:68:98 10.100.0.8'], port_security=['fa:16:3e:b8:68:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6b18035-4aef-4825-90e6-799173979626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9e613de1-4d71-4293-836f-5f1e121f0bb5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.177 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9e613de1-4d71-4293-836f-5f1e121f0bb5 in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 unbound from our chassis
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.178 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5714012d-b182-4fef-9241-3afcb9c700d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad252cd0-c246-4745-8967-65aaf5d2a6ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.179 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 namespace which is not needed anymore
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.195 2 DEBUG nova.virt.libvirt.vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:59:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T15:00:35Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.195 2 DEBUG nova.network.os_vif_util [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.196 2 DEBUG nova.network.os_vif_util [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.197 2 DEBUG os_vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e613de1-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.209 2 INFO os_vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d')
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : haproxy version is 2.8.14-c23fe91
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : path to executable is /usr/sbin/haproxy
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : Exiting Master process...
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : Exiting Master process...
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [ALERT]    (428907) : Current worker (428909) exited with code 143 (Terminated)
Oct 07 15:01:43 compute-0 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : All workers exited. Exiting... (0)
Oct 07 15:01:43 compute-0 systemd[1]: libpod-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope: Deactivated successfully.
Oct 07 15:01:43 compute-0 podman[431532]: 2025-10-07 15:01:43.342061226 +0000 UTC m=+0.065489813 container died c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 07 15:01:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0737f8a91a4236f462b27e601747872518f52d58de889974962d2146e39c7d2-merged.mount: Deactivated successfully.
Oct 07 15:01:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7-userdata-shm.mount: Deactivated successfully.
Oct 07 15:01:43 compute-0 podman[431532]: 2025-10-07 15:01:43.466196782 +0000 UTC m=+0.189625399 container cleanup c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 07 15:01:43 compute-0 systemd[1]: libpod-conmon-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope: Deactivated successfully.
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.487 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.489 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 07 15:01:43 compute-0 podman[431563]: 2025-10-07 15:01:43.565904841 +0000 UTC m=+0.073363803 container remove c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17669dbd-078b-4abd-b234-0375f20ee0e1]: (4, ('Tue Oct  7 03:01:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 (c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7)\nc5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7\nTue Oct  7 03:01:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 (c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7)\nc5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.573 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee782722-eb72-4827-8744-63d863b85046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.574 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 kernel: tap5714012d-b0: left promiscuous mode
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e32ab513-2cd3-47df-ac50-ad75892af359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa1e29b-c934-45e0-b44e-16a9d7dad226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.630 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5277a918-d0c2-459e-b863-bb221d2a3da9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb8cf7d-415d-45d4-8d06-d36c4a2b487f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977461, 'reachable_time': 33316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431579, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d5714012d\x2db182\x2d4fef\x2d9241\x2d3afcb9c700d6.mount: Deactivated successfully.
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.658 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 07 15:01:43 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.658 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3454b640-263e-468e-b6ac-3faf3adb0e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.851 2 INFO nova.virt.libvirt.driver [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deleting instance files /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626_del
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.852 2 INFO nova.virt.libvirt.driver [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deletion of /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626_del complete
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.901 2 INFO nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.902 2 DEBUG oslo.service.loopingcall [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.902 2 DEBUG nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 15:01:43 compute-0 nova_compute[259550]: 2025-10-07 15:01:43.903 2 DEBUG nova.network.neutron [-] [instance: a6b18035-4aef-4825-90e6-799173979626] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.109 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.110 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.134 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:01:44 compute-0 ceph-mon[74295]: pgmap v2994: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 112 op/s
Oct 07 15:01:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:44.758 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:01:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:44.759 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.783 2 DEBUG nova.network.neutron [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.796 2 INFO nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 0.89 seconds to deallocate network for instance.
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.838 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.839 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.854 2 DEBUG nova.compute.manager [req-cb3e5451-ac91-40b5-bb60-79e819226296 req-3541378b-a191-4a57-a130-76717128022c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-deleted-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:44 compute-0 nova_compute[259550]: 2025-10-07 15:01:44.883 2 DEBUG oslo_concurrency.processutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:01:44 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 4.5 KiB/s wr, 121 op/s
Oct 07 15:01:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:01:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603158260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.383 2 DEBUG oslo_concurrency.processutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.391 2 DEBUG nova.compute.provider_tree [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.406 2 DEBUG nova.scheduler.client.report [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.424 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.448 2 INFO nova.scheduler.client.report [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Deleted allocations for instance a6b18035-4aef-4825-90e6-799173979626
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.526 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.566 2 DEBUG nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.566 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 07 15:01:45 compute-0 nova_compute[259550]: 2025-10-07 15:01:45.567 2 WARNING nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received unexpected event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with vm_state deleted and task_state None.
Oct 07 15:01:46 compute-0 ceph-mon[74295]: pgmap v2995: 305 pgs: 305 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 4.5 KiB/s wr, 121 op/s
Oct 07 15:01:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1603158260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:01:46 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 75 op/s
Oct 07 15:01:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:47 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:01:47.761 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:01:48 compute-0 nova_compute[259550]: 2025-10-07 15:01:48.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:48 compute-0 ceph-mon[74295]: pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 75 op/s
Oct 07 15:01:48 compute-0 nova_compute[259550]: 2025-10-07 15:01:48.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:48 compute-0 nova_compute[259550]: 2025-10-07 15:01:48.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:48 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Oct 07 15:01:49 compute-0 nova_compute[259550]: 2025-10-07 15:01:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:49 compute-0 podman[431602]: 2025-10-07 15:01:49.090654343 +0000 UTC m=+0.062359751 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 07 15:01:49 compute-0 podman[431604]: 2025-10-07 15:01:49.113421705 +0000 UTC m=+0.090019513 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 07 15:01:50 compute-0 ceph-mon[74295]: pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Oct 07 15:01:50 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Oct 07 15:01:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 07 15:01:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 07 15:01:52 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 07 15:01:52 compute-0 nova_compute[259550]: 2025-10-07 15:01:52.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849297.1406746, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:01:52 compute-0 nova_compute[259550]: 2025-10-07 15:01:52.143 2 INFO nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Stopped (Lifecycle Event)
Oct 07 15:01:52 compute-0 nova_compute[259550]: 2025-10-07 15:01:52.161 2 DEBUG nova.compute.manager [None req-facf4e0b-9c32-48a4-8db3-3d26e187e541 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:01:52 compute-0 ceph-mon[74295]: pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Oct 07 15:01:52 compute-0 ceph-mon[74295]: osdmap e295: 3 total, 3 up, 3 in
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:01:52 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 07 15:01:53 compute-0 nova_compute[259550]: 2025-10-07 15:01:53.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:53 compute-0 nova_compute[259550]: 2025-10-07 15:01:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:54 compute-0 ceph-mon[74295]: pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct 07 15:01:54 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Oct 07 15:01:56 compute-0 ceph-mon[74295]: pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Oct 07 15:01:56 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct 07 15:01:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:01:58 compute-0 nova_compute[259550]: 2025-10-07 15:01:58.136 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849303.1347249, a6b18035-4aef-4825-90e6-799173979626 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:01:58 compute-0 nova_compute[259550]: 2025-10-07 15:01:58.136 2 INFO nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Stopped (Lifecycle Event)
Oct 07 15:01:58 compute-0 nova_compute[259550]: 2025-10-07 15:01:58.158 2 DEBUG nova.compute.manager [None req-702c23bd-510d-41b1-8c59-1872f9bfb0f8 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:01:58 compute-0 nova_compute[259550]: 2025-10-07 15:01:58.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:58 compute-0 ceph-mon[74295]: pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct 07 15:01:58 compute-0 nova_compute[259550]: 2025-10-07 15:01:58.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:01:58 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:00 compute-0 ceph-mon[74295]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:00 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:02 compute-0 ceph-mon[74295]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:02 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:03 compute-0 nova_compute[259550]: 2025-10-07 15:02:03.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:03 compute-0 nova_compute[259550]: 2025-10-07 15:02:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:04 compute-0 ceph-mon[74295]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:04 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:05 compute-0 sudo[431651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:05 compute-0 sudo[431651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:05 compute-0 sudo[431651]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:05 compute-0 podman[431675]: 2025-10-07 15:02:05.260089447 +0000 UTC m=+0.059770773 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 15:02:05 compute-0 sudo[431693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:02:05 compute-0 sudo[431693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:05 compute-0 sudo[431693]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:05 compute-0 podman[431676]: 2025-10-07 15:02:05.28934344 +0000 UTC m=+0.086252943 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 07 15:02:05 compute-0 sudo[431736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:05 compute-0 sudo[431736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:05 compute-0 sudo[431736]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:05 compute-0 sudo[431763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:02:05 compute-0 sudo[431763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:05 compute-0 sudo[431763]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a33e09a8-bbe2-4bfa-8f64-c613452366db does not exist
Oct 07 15:02:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 63f67e98-eea7-415d-a8ca-cca60a341d70 does not exist
Oct 07 15:02:05 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 22f1b031-bc2c-41d1-ba99-e5abb0c4e027 does not exist
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:02:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:02:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:02:05 compute-0 sudo[431818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:05 compute-0 sudo[431818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:05 compute-0 sudo[431818]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:05 compute-0 sudo[431843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:02:06 compute-0 sudo[431843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:06 compute-0 sudo[431843]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:06 compute-0 sudo[431868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:06 compute-0 sudo[431868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:06 compute-0 sudo[431868]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:06 compute-0 sudo[431893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:02:06 compute-0 sudo[431893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:06 compute-0 ceph-mon[74295]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:02:06 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.47507539 +0000 UTC m=+0.073675140 container create 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 15:02:06 compute-0 systemd[1]: Started libpod-conmon-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope.
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.447127261 +0000 UTC m=+0.045727021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.557897782 +0000 UTC m=+0.156497512 container init 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.567018664 +0000 UTC m=+0.165618374 container start 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.570511526 +0000 UTC m=+0.169111266 container attach 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:02:06 compute-0 gracious_pare[431975]: 167 167
Oct 07 15:02:06 compute-0 systemd[1]: libpod-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope: Deactivated successfully.
Oct 07 15:02:06 compute-0 conmon[431975]: conmon 994c94548b640c9e123d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope/container/memory.events
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.5763079 +0000 UTC m=+0.174907610 container died 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7ba3bde43503d335c6ef321c10f8734e53289cee42e8186c36dcae7c97616c-merged.mount: Deactivated successfully.
Oct 07 15:02:06 compute-0 podman[431958]: 2025-10-07 15:02:06.616568245 +0000 UTC m=+0.215167955 container remove 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 15:02:06 compute-0 systemd[1]: libpod-conmon-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope: Deactivated successfully.
Oct 07 15:02:06 compute-0 podman[431999]: 2025-10-07 15:02:06.782425555 +0000 UTC m=+0.041908400 container create 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 15:02:06 compute-0 systemd[1]: Started libpod-conmon-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope.
Oct 07 15:02:06 compute-0 podman[431999]: 2025-10-07 15:02:06.763484533 +0000 UTC m=+0.022967398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:06 compute-0 podman[431999]: 2025-10-07 15:02:06.893002791 +0000 UTC m=+0.152485686 container init 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:02:06 compute-0 podman[431999]: 2025-10-07 15:02:06.904076285 +0000 UTC m=+0.163559130 container start 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:02:06 compute-0 podman[431999]: 2025-10-07 15:02:06.907801042 +0000 UTC m=+0.167283887 container attach 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 15:02:06 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:07 compute-0 great_vaughan[432015]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:02:07 compute-0 great_vaughan[432015]: --> relative data size: 1.0
Oct 07 15:02:07 compute-0 great_vaughan[432015]: --> All data devices are unavailable
Oct 07 15:02:07 compute-0 nova_compute[259550]: 2025-10-07 15:02:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:07 compute-0 nova_compute[259550]: 2025-10-07 15:02:07.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:07 compute-0 nova_compute[259550]: 2025-10-07 15:02:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 15:02:08 compute-0 systemd[1]: libpod-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Deactivated successfully.
Oct 07 15:02:08 compute-0 systemd[1]: libpod-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Consumed 1.047s CPU time.
Oct 07 15:02:08 compute-0 podman[431999]: 2025-10-07 15:02:08.012654602 +0000 UTC m=+1.272137467 container died 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 15:02:08 compute-0 nova_compute[259550]: 2025-10-07 15:02:08.018 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 15:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715-merged.mount: Deactivated successfully.
Oct 07 15:02:08 compute-0 podman[431999]: 2025-10-07 15:02:08.078519436 +0000 UTC m=+1.338002281 container remove 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:02:08 compute-0 systemd[1]: libpod-conmon-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Deactivated successfully.
Oct 07 15:02:08 compute-0 sudo[431893]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:08 compute-0 sudo[432053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:08 compute-0 sudo[432053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:08 compute-0 sudo[432053]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:08 compute-0 nova_compute[259550]: 2025-10-07 15:02:08.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:08 compute-0 sudo[432078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:02:08 compute-0 sudo[432078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:08 compute-0 sudo[432078]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:08 compute-0 sudo[432103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:08 compute-0 sudo[432103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:08 compute-0 sudo[432103]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:08 compute-0 sudo[432128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:02:08 compute-0 sudo[432128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:08 compute-0 ceph-mon[74295]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.685599892 +0000 UTC m=+0.049538832 container create 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 15:02:08 compute-0 systemd[1]: Started libpod-conmon-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope.
Oct 07 15:02:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.661510095 +0000 UTC m=+0.025449125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.768201798 +0000 UTC m=+0.132140748 container init 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.775373368 +0000 UTC m=+0.139312348 container start 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 15:02:08 compute-0 fervent_lehmann[432210]: 167 167
Oct 07 15:02:08 compute-0 systemd[1]: libpod-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope: Deactivated successfully.
Oct 07 15:02:08 compute-0 nova_compute[259550]: 2025-10-07 15:02:08.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.859159395 +0000 UTC m=+0.223098455 container attach 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:02:08 compute-0 podman[432193]: 2025-10-07 15:02:08.860156242 +0000 UTC m=+0.224095182 container died 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 15:02:08 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f38f4289801c27221813f818b1477a444af9ecc36f564fdccb045fd9e7d0ecb-merged.mount: Deactivated successfully.
Oct 07 15:02:09 compute-0 podman[432193]: 2025-10-07 15:02:09.073533219 +0000 UTC m=+0.437472149 container remove 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:02:09 compute-0 systemd[1]: libpod-conmon-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope: Deactivated successfully.
Oct 07 15:02:09 compute-0 podman[432235]: 2025-10-07 15:02:09.256088351 +0000 UTC m=+0.052301236 container create 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:02:09 compute-0 systemd[1]: Started libpod-conmon-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope.
Oct 07 15:02:09 compute-0 podman[432235]: 2025-10-07 15:02:09.231715365 +0000 UTC m=+0.027928270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:09 compute-0 podman[432235]: 2025-10-07 15:02:09.362753103 +0000 UTC m=+0.158966018 container init 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:02:09 compute-0 podman[432235]: 2025-10-07 15:02:09.370283362 +0000 UTC m=+0.166496247 container start 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 15:02:09 compute-0 podman[432235]: 2025-10-07 15:02:09.387437656 +0000 UTC m=+0.183650571 container attach 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:02:10 compute-0 sweet_galois[432252]: {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     "0": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "devices": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "/dev/loop3"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             ],
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_name": "ceph_lv0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_size": "21470642176",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "name": "ceph_lv0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "tags": {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_name": "ceph",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.crush_device_class": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.encrypted": "0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_id": "0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.vdo": "0"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             },
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "vg_name": "ceph_vg0"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         }
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     ],
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     "1": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "devices": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "/dev/loop4"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             ],
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_name": "ceph_lv1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_size": "21470642176",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "name": "ceph_lv1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "tags": {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_name": "ceph",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.crush_device_class": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.encrypted": "0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_id": "1",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.vdo": "0"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             },
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "vg_name": "ceph_vg1"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         }
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     ],
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     "2": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "devices": [
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "/dev/loop5"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             ],
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_name": "ceph_lv2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_size": "21470642176",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "name": "ceph_lv2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "tags": {
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.cluster_name": "ceph",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.crush_device_class": "",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.encrypted": "0",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osd_id": "2",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:                 "ceph.vdo": "0"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             },
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "type": "block",
Oct 07 15:02:10 compute-0 sweet_galois[432252]:             "vg_name": "ceph_vg2"
Oct 07 15:02:10 compute-0 sweet_galois[432252]:         }
Oct 07 15:02:10 compute-0 sweet_galois[432252]:     ]
Oct 07 15:02:10 compute-0 sweet_galois[432252]: }
Oct 07 15:02:10 compute-0 systemd[1]: libpod-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope: Deactivated successfully.
Oct 07 15:02:10 compute-0 podman[432235]: 2025-10-07 15:02:10.169886844 +0000 UTC m=+0.966099729 container died 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9-merged.mount: Deactivated successfully.
Oct 07 15:02:10 compute-0 podman[432235]: 2025-10-07 15:02:10.259319001 +0000 UTC m=+1.055531916 container remove 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:02:10 compute-0 systemd[1]: libpod-conmon-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope: Deactivated successfully.
Oct 07 15:02:10 compute-0 sudo[432128]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:10 compute-0 sudo[432275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:10 compute-0 sudo[432275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:10 compute-0 sudo[432275]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:10 compute-0 sudo[432300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:02:10 compute-0 sudo[432300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:10 compute-0 ceph-mon[74295]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:10 compute-0 sudo[432300]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:10 compute-0 sudo[432325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:10 compute-0 sudo[432325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:10 compute-0 sudo[432325]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:10 compute-0 sudo[432350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:02:10 compute-0 sudo[432350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.871888293 +0000 UTC m=+0.046993896 container create 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 15:02:10 compute-0 systemd[1]: Started libpod-conmon-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope.
Oct 07 15:02:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.852328935 +0000 UTC m=+0.027434598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.946165308 +0000 UTC m=+0.121270941 container init 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.954406727 +0000 UTC m=+0.129512340 container start 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.959987694 +0000 UTC m=+0.135093367 container attach 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:02:10 compute-0 practical_cohen[432431]: 167 167
Oct 07 15:02:10 compute-0 systemd[1]: libpod-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope: Deactivated successfully.
Oct 07 15:02:10 compute-0 conmon[432431]: conmon 49a0dd8ca4640861b71e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope/container/memory.events
Oct 07 15:02:10 compute-0 podman[432414]: 2025-10-07 15:02:10.962716646 +0000 UTC m=+0.137822279 container died 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 07 15:02:10 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e3435f4ffe8ce2b216d9c1de293c00c749edd4a4eef6ea55b8e459a4497f75-merged.mount: Deactivated successfully.
Oct 07 15:02:11 compute-0 podman[432414]: 2025-10-07 15:02:11.026085123 +0000 UTC m=+0.201190756 container remove 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:02:11 compute-0 systemd[1]: libpod-conmon-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope: Deactivated successfully.
Oct 07 15:02:11 compute-0 podman[432454]: 2025-10-07 15:02:11.20205878 +0000 UTC m=+0.039434065 container create 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:02:11 compute-0 systemd[1]: Started libpod-conmon-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope.
Oct 07 15:02:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:02:11 compute-0 podman[432454]: 2025-10-07 15:02:11.184190208 +0000 UTC m=+0.021565513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:02:11 compute-0 podman[432454]: 2025-10-07 15:02:11.28630101 +0000 UTC m=+0.123676315 container init 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 07 15:02:11 compute-0 podman[432454]: 2025-10-07 15:02:11.292986076 +0000 UTC m=+0.130361361 container start 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 15:02:11 compute-0 podman[432454]: 2025-10-07 15:02:11.298247246 +0000 UTC m=+0.135622551 container attach 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:02:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:12 compute-0 bold_goldberg[432470]: {
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_id": 2,
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "type": "bluestore"
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     },
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_id": 1,
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "type": "bluestore"
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     },
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_id": 0,
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:         "type": "bluestore"
Oct 07 15:02:12 compute-0 bold_goldberg[432470]:     }
Oct 07 15:02:12 compute-0 bold_goldberg[432470]: }
Oct 07 15:02:12 compute-0 systemd[1]: libpod-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope: Deactivated successfully.
Oct 07 15:02:12 compute-0 conmon[432470]: conmon 70b7b676694bc6a76640 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope/container/memory.events
Oct 07 15:02:12 compute-0 podman[432454]: 2025-10-07 15:02:12.283162732 +0000 UTC m=+1.120538017 container died 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 15:02:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656-merged.mount: Deactivated successfully.
Oct 07 15:02:12 compute-0 podman[432454]: 2025-10-07 15:02:12.341607608 +0000 UTC m=+1.178982933 container remove 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 15:02:12 compute-0 systemd[1]: libpod-conmon-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope: Deactivated successfully.
Oct 07 15:02:12 compute-0 sudo[432350]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:02:12 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:02:12 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:12 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev af0312e4-c902-4695-a553-e0f1aa1634be does not exist
Oct 07 15:02:12 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bec471fd-f030-4c24-8bee-9494fea6a7e6 does not exist
Oct 07 15:02:12 compute-0 ceph-mon[74295]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:12 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:12 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:02:12 compute-0 sudo[432517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:02:12 compute-0 sudo[432517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:12 compute-0 sudo[432517]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:12 compute-0 sudo[432542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:02:12 compute-0 sudo[432542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:02:12 compute-0 sudo[432542]: pam_unix(sudo:session): session closed for user root
Oct 07 15:02:12 compute-0 nova_compute[259550]: 2025-10-07 15:02:12.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:12 compute-0 nova_compute[259550]: 2025-10-07 15:02:12.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:12 compute-0 nova_compute[259550]: 2025-10-07 15:02:12.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:12 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:13 compute-0 nova_compute[259550]: 2025-10-07 15:02:13.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:13 compute-0 nova_compute[259550]: 2025-10-07 15:02:13.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:02:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2832 writes, 12K keys, 2832 commit groups, 1.0 writes per commit group, ingest: 13.48 MB, 0.02 MB/s
                                           Interval WAL: 2832 writes, 1026 syncs, 2.76 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:02:14 compute-0 ceph-mon[74295]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:14 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:15 compute-0 nova_compute[259550]: 2025-10-07 15:02:15.002 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:15 compute-0 ceph-mon[74295]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:16 compute-0 nova_compute[259550]: 2025-10-07 15:02:16.902 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:16 compute-0 nova_compute[259550]: 2025-10-07 15:02:16.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:16 compute-0 nova_compute[259550]: 2025-10-07 15:02:16.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:02:16 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:18 compute-0 ceph-mon[74295]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:02:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s
                                           Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:02:18 compute-0 nova_compute[259550]: 2025-10-07 15:02:18.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:18 compute-0 nova_compute[259550]: 2025-10-07 15:02:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:18 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:19 compute-0 nova_compute[259550]: 2025-10-07 15:02:19.065 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:20 compute-0 podman[432567]: 2025-10-07 15:02:20.091869459 +0000 UTC m=+0.068675508 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:02:20 compute-0 podman[432568]: 2025-10-07 15:02:20.125783886 +0000 UTC m=+0.102405621 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 15:02:20 compute-0 ceph-mon[74295]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:20 compute-0 nova_compute[259550]: 2025-10-07 15:02:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:20 compute-0 nova_compute[259550]: 2025-10-07 15:02:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 15:02:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:22 compute-0 ceph-mon[74295]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:02:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s
                                           Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:02:22
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'images']
Oct 07 15:02:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:02:23 compute-0 nova_compute[259550]: 2025-10-07 15:02:23.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:02:23 compute-0 nova_compute[259550]: 2025-10-07 15:02:23.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 15:02:24 compute-0 ceph-mon[74295]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:26 compute-0 ceph-mon[74295]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:26 compute-0 nova_compute[259550]: 2025-10-07 15:02:26.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:27 compute-0 nova_compute[259550]: 2025-10-07 15:02:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.020 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.292 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.293 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.312 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.393 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.394 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.404 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.404 2 INFO nova.compute.claims [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Claim successful on node compute-0.ctlplane.example.com
Oct 07 15:02:28 compute-0 ceph-mon[74295]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.516 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:02:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100737693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.739 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.740 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.741 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:28 compute-0 nova_compute[259550]: 2025-10-07 15:02:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:02:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774696288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.058 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.064 2 DEBUG nova.compute.provider_tree [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.082 2 DEBUG nova.scheduler.client.report [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.110 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.111 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.226 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.226 2 DEBUG nova.network.neutron [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f218f03c-5cbd-49aa-bf01-f486d865d347 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.248 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.253 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.281 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.302 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.484 2 DEBUG nova.network.neutron [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.485 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 07 15:02:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3100737693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2774696288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.570 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.572 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.573 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating image(s)
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.598 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.622 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.646 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.651 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.753 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.754 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.754 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.755 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:02:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923467118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.784 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.789 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f218f03c-5cbd-49aa-bf01-f486d865d347_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.831 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.837 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.854 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.877 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:02:29 compute-0 nova_compute[259550]: 2025-10-07 15:02:29.877 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:30 compute-0 ceph-mon[74295]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1923467118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.622 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f218f03c-5cbd-49aa-bf01-f486d865d347_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.678 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] resizing rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.767 2 DEBUG nova.objects.instance [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'migration_context' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.782 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.782 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Ensure instance console log exists: /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.783 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.783 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.784 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.785 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.791 2 WARNING nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.796 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.797 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.800 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.800 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.804 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.804 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 07 15:02:30 compute-0 nova_compute[259550]: 2025-10-07 15:02:30.806 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct 07 15:02:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 15:02:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605935459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.261 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.285 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.289 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/605935459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:02:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 07 15:02:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698810980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.719 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.723 2 DEBUG nova.objects.instance [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'pci_devices' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.743 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] End _get_guest_xml xml=<domain type="kvm">
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <uuid>f218f03c-5cbd-49aa-bf01-f486d865d347</uuid>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <name>instance-00000099</name>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <memory>131072</memory>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <vcpu>1</vcpu>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <metadata>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1262446659</nova:name>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:creationTime>2025-10-07 15:02:30</nova:creationTime>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:flavor name="m1.nano">
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:memory>128</nova:memory>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:disk>1</nova:disk>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:swap>0</nova:swap>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:ephemeral>0</nova:ephemeral>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:vcpus>1</nova:vcpus>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </nova:flavor>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:owner>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:user uuid="d77c23d5cf1f4accb328083c0f0bf419">tempest-AggregatesAdminTestJSON-1571393624-project-member</nova:user>
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <nova:project uuid="cb6361241083417fa69181a79e143485">tempest-AggregatesAdminTestJSON-1571393624</nova:project>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </nova:owner>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <nova:ports/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </nova:instance>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </metadata>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <sysinfo type="smbios">
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <system>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="manufacturer">RDO</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="product">OpenStack Compute</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="serial">f218f03c-5cbd-49aa-bf01-f486d865d347</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="uuid">f218f03c-5cbd-49aa-bf01-f486d865d347</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <entry name="family">Virtual Machine</entry>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </system>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </sysinfo>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <os>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <boot dev="hd"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <smbios mode="sysinfo"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </os>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <features>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <acpi/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <apic/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <vmcoreinfo/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </features>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <clock offset="utc">
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <timer name="pit" tickpolicy="delay"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <timer name="hpet" present="no"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </clock>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <cpu mode="host-model" match="exact">
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <topology sockets="1" cores="1" threads="1"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </cpu>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   <devices>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <disk type="network" device="disk">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f218f03c-5cbd-49aa-bf01-f486d865d347_disk">
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </source>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <target dev="vda" bus="virtio"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <disk type="network" device="cdrom">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <driver type="raw" cache="none"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <source protocol="rbd" name="vms/f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config">
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <host name="192.168.122.100" port="6789"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </source>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <auth username="openstack">
Oct 07 15:02:31 compute-0 nova_compute[259550]:         <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       </auth>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <target dev="sda" bus="sata"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </disk>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <serial type="pty">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <log file="/var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/console.log" append="off"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </serial>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <video>
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <model type="virtio"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </video>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <input type="tablet" bus="usb"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <rng model="virtio">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <backend model="random">/dev/urandom</backend>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </rng>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="pci" model="pcie-root-port"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <controller type="usb" index="0"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     <memballoon model="virtio">
Oct 07 15:02:31 compute-0 nova_compute[259550]:       <stats period="10"/>
Oct 07 15:02:31 compute-0 nova_compute[259550]:     </memballoon>
Oct 07 15:02:31 compute-0 nova_compute[259550]:   </devices>
Oct 07 15:02:31 compute-0 nova_compute[259550]: </domain>
Oct 07 15:02:31 compute-0 nova_compute[259550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.828 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.829 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.830 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Using config drive
Oct 07 15:02:31 compute-0 nova_compute[259550]: 2025-10-07 15:02:31.854 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.344 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating config drive at /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.350 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar5cfv8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.496 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar5cfv8n" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.522 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.525 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.689 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.690 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deleting local config drive /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config because it was imported into RBD.
Oct 07 15:02:32 compute-0 ceph-mon[74295]: pgmap v3019: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct 07 15:02:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/698810980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 07 15:02:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:02:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:02:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:02:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:02:32 compute-0 systemd-machined[214580]: New machine qemu-187-instance-00000099.
Oct 07 15:02:32 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.878 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.879 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00015611518769942045 of space, bias 1.0, pg target 0.046834556309826136 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:02:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:02:32 compute-0 nova_compute[259550]: 2025-10-07 15:02:32.903 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:02:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct 07 15:02:33 compute-0 ovn_controller[151684]: 2025-10-07T15:02:33Z|01663|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.470 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849353.4694266, f218f03c-5cbd-49aa-bf01-f486d865d347 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Resumed (Lifecycle Event)
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.473 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.473 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.477 2 INFO nova.virt.libvirt.driver [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance spawned successfully.
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.477 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.492 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.498 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.503 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.503 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.505 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849353.4709637, f218f03c-5cbd-49aa-bf01-f486d865d347 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.515 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Started (Lifecycle Event)
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.535 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.541 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.570 2 INFO nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 4.00 seconds to spawn the instance on the hypervisor.
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.570 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.572 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.632 2 INFO nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 5.27 seconds to build instance.
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.645 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:02:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:33 compute-0 nova_compute[259550]: 2025-10-07 15:02:33.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:02:34 compute-0 ceph-mon[74295]: pgmap v3020: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct 07 15:02:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 71 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.729 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.729 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.731 2 INFO nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Terminating instance
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.732 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.733 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquired lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.733 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 07 15:02:35 compute-0 nova_compute[259550]: 2025-10-07 15:02:35.886 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 15:02:36 compute-0 podman[433023]: 2025-10-07 15:02:36.083534128 +0000 UTC m=+0.063965714 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 07 15:02:36 compute-0 podman[433022]: 2025-10-07 15:02:36.083550408 +0000 UTC m=+0.068301919 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:02:36 compute-0 nova_compute[259550]: 2025-10-07 15:02:36.409 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:02:36 compute-0 nova_compute[259550]: 2025-10-07 15:02:36.425 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Releasing lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 07 15:02:36 compute-0 nova_compute[259550]: 2025-10-07 15:02:36.426 2 DEBUG nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 07 15:02:36 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 07 15:02:36 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 3.651s CPU time.
Oct 07 15:02:36 compute-0 systemd-machined[214580]: Machine qemu-187-instance-00000099 terminated.
Oct 07 15:02:36 compute-0 nova_compute[259550]: 2025-10-07 15:02:36.651 2 INFO nova.virt.libvirt.driver [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance destroyed successfully.
Oct 07 15:02:36 compute-0 nova_compute[259550]: 2025-10-07 15:02:36.651 2 DEBUG nova.objects.instance [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'resources' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 07 15:02:36 compute-0 ceph-mon[74295]: pgmap v3021: 305 pgs: 305 active+clean; 71 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct 07 15:02:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 07 15:02:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.062 2 INFO nova.virt.libvirt.driver [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deleting instance files /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347_del
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.063 2 INFO nova.virt.libvirt.driver [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deletion of /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347_del complete
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.116 2 INFO nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.116 2 DEBUG oslo.service.loopingcall [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.117 2 DEBUG nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.118 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.345 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.421 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.440 2 INFO nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 0.32 seconds to deallocate network for instance.
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.495 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.495 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.542 2 DEBUG oslo_concurrency.processutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:02:37 compute-0 ceph-mon[74295]: pgmap v3022: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 07 15:02:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:02:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1823466636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:37 compute-0 nova_compute[259550]: 2025-10-07 15:02:37.994 2 DEBUG oslo_concurrency.processutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.000 2 DEBUG nova.compute.provider_tree [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.021 2 DEBUG nova.scheduler.client.report [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.052 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.080 2 INFO nova.scheduler.client.report [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Deleted allocations for instance f218f03c-5cbd-49aa-bf01-f486d865d347
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.154 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1823466636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:02:38 compute-0 nova_compute[259550]: 2025-10-07 15:02:38.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 07 15:02:39 compute-0 ceph-mon[74295]: pgmap v3023: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 07 15:02:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 07 15:02:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:42 compute-0 ceph-mon[74295]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 07 15:02:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct 07 15:02:43 compute-0 nova_compute[259550]: 2025-10-07 15:02:43.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:43 compute-0 nova_compute[259550]: 2025-10-07 15:02:43.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:44 compute-0 ceph-mon[74295]: pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct 07 15:02:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:44.801 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:02:44 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:44.802 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:02:44 compute-0 nova_compute[259550]: 2025-10-07 15:02:44.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct 07 15:02:46 compute-0 ceph-mon[74295]: pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct 07 15:02:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 594 KiB/s wr, 93 op/s
Oct 07 15:02:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:48 compute-0 ceph-mon[74295]: pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 594 KiB/s wr, 93 op/s
Oct 07 15:02:48 compute-0 nova_compute[259550]: 2025-10-07 15:02:48.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:48 compute-0 nova_compute[259550]: 2025-10-07 15:02:48.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 60 op/s
Oct 07 15:02:50 compute-0 ceph-mon[74295]: pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 60 op/s
Oct 07 15:02:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Oct 07 15:02:51 compute-0 podman[433104]: 2025-10-07 15:02:51.0865058 +0000 UTC m=+0.067016394 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:02:51 compute-0 podman[433105]: 2025-10-07 15:02:51.103077749 +0000 UTC m=+0.087914418 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 15:02:51 compute-0 nova_compute[259550]: 2025-10-07 15:02:51.650 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849356.6480515, f218f03c-5cbd-49aa-bf01-f486d865d347 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 07 15:02:51 compute-0 nova_compute[259550]: 2025-10-07 15:02:51.650 2 INFO nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Stopped (Lifecycle Event)
Oct 07 15:02:51 compute-0 nova_compute[259550]: 2025-10-07 15:02:51.694 2 DEBUG nova.compute.manager [None req-60248d07-2a9d-49ee-9367-0e17060c4e6d - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 07 15:02:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:52 compute-0 ceph-mon[74295]: pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:02:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:02:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:53 compute-0 nova_compute[259550]: 2025-10-07 15:02:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:53 compute-0 nova_compute[259550]: 2025-10-07 15:02:53.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:54 compute-0 ceph-mon[74295]: pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:54 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:02:54.804 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:02:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:56 compute-0 ceph-mon[74295]: pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:02:58 compute-0 nova_compute[259550]: 2025-10-07 15:02:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:58 compute-0 ceph-mon[74295]: pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:02:58 compute-0 nova_compute[259550]: 2025-10-07 15:02:58.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:02:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:03:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:03:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:03:00 compute-0 ceph-mon[74295]: pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:02 compute-0 ceph-mon[74295]: pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:03 compute-0 nova_compute[259550]: 2025-10-07 15:03:03.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:03 compute-0 nova_compute[259550]: 2025-10-07 15:03:03.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:04 compute-0 ceph-mon[74295]: pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.683490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384683542, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1865, "num_deletes": 255, "total_data_size": 2997142, "memory_usage": 3047120, "flush_reason": "Manual Compaction"}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384701118, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2944251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61598, "largest_seqno": 63462, "table_properties": {"data_size": 2935548, "index_size": 5452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17689, "raw_average_key_size": 20, "raw_value_size": 2918188, "raw_average_value_size": 3361, "num_data_blocks": 241, "num_entries": 868, "num_filter_entries": 868, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849197, "oldest_key_time": 1759849197, "file_creation_time": 1759849384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 17681 microseconds, and 6882 cpu microseconds.
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.701166) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2944251 bytes OK
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.701188) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703452) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703470) EVENT_LOG_v1 {"time_micros": 1759849384703465, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2989172, prev total WAL file size 2989172, number of live WAL files 2.
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.704597) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2875KB)], [146(9891KB)]
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384704633, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13073580, "oldest_snapshot_seqno": -1}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8250 keys, 11330515 bytes, temperature: kUnknown
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384760097, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11330515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11275555, "index_size": 33215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215752, "raw_average_key_size": 26, "raw_value_size": 11128658, "raw_average_value_size": 1348, "num_data_blocks": 1295, "num_entries": 8250, "num_filter_entries": 8250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.760318) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11330515 bytes
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.761834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.4 rd, 204.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 8774, records dropped: 524 output_compression: NoCompression
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.762065) EVENT_LOG_v1 {"time_micros": 1759849384762056, "job": 90, "event": "compaction_finished", "compaction_time_micros": 55539, "compaction_time_cpu_micros": 26821, "output_level": 6, "num_output_files": 1, "total_output_size": 11330515, "num_input_records": 8774, "num_output_records": 8250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384762580, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384764695, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.704467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:04 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:06 compute-0 ceph-mon[74295]: pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:07 compute-0 podman[433146]: 2025-10-07 15:03:07.076463885 +0000 UTC m=+0.056094946 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 07 15:03:07 compute-0 podman[433147]: 2025-10-07 15:03:07.091942244 +0000 UTC m=+0.066067170 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:03:08 compute-0 nova_compute[259550]: 2025-10-07 15:03:08.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:08 compute-0 ceph-mon[74295]: pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:08 compute-0 nova_compute[259550]: 2025-10-07 15:03:08.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:09 compute-0 nova_compute[259550]: 2025-10-07 15:03:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:10 compute-0 ceph-mon[74295]: pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:12 compute-0 sudo[433185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:12 compute-0 sudo[433185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:12 compute-0 sudo[433185]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:12 compute-0 sudo[433210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:03:12 compute-0 sudo[433210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:12 compute-0 sudo[433210]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:12 compute-0 ceph-mon[74295]: pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:12 compute-0 sudo[433235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:12 compute-0 sudo[433235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:12 compute-0 sudo[433235]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:12 compute-0 sudo[433260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:03:12 compute-0 sudo[433260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:12 compute-0 nova_compute[259550]: 2025-10-07 15:03:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:13 compute-0 sudo[433260]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a380af48-68b5-4f83-b746-7f721f235858 does not exist
Oct 07 15:03:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 7082204d-5285-4294-b615-590b5b9a44d9 does not exist
Oct 07 15:03:13 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c0903892-923a-4aef-b10c-e6cf431f1393 does not exist
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:03:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:03:13 compute-0 nova_compute[259550]: 2025-10-07 15:03:13.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:13 compute-0 sudo[433316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:13 compute-0 sudo[433316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:13 compute-0 sudo[433316]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:13 compute-0 sudo[433341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:03:13 compute-0 sudo[433341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:13 compute-0 sudo[433341]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:13 compute-0 sudo[433366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:13 compute-0 sudo[433366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:13 compute-0 sudo[433366]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:13 compute-0 sudo[433391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:03:13 compute-0 sudo[433391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:03:13 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:03:13 compute-0 nova_compute[259550]: 2025-10-07 15:03:13.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:13 compute-0 podman[433455]: 2025-10-07 15:03:13.926340966 +0000 UTC m=+0.054857042 container create cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 07 15:03:13 compute-0 systemd[1]: Started libpod-conmon-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope.
Oct 07 15:03:13 compute-0 nova_compute[259550]: 2025-10-07 15:03:13.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:13 compute-0 podman[433455]: 2025-10-07 15:03:13.894975916 +0000 UTC m=+0.023492012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:14 compute-0 podman[433455]: 2025-10-07 15:03:14.008505521 +0000 UTC m=+0.137021597 container init cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:03:14 compute-0 podman[433455]: 2025-10-07 15:03:14.015096825 +0000 UTC m=+0.143612891 container start cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 15:03:14 compute-0 ecstatic_yalow[433471]: 167 167
Oct 07 15:03:14 compute-0 systemd[1]: libpod-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope: Deactivated successfully.
Oct 07 15:03:14 compute-0 conmon[433471]: conmon cc07692f804ed904fc00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope/container/memory.events
Oct 07 15:03:14 compute-0 podman[433455]: 2025-10-07 15:03:14.023794115 +0000 UTC m=+0.152310211 container attach cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:03:14 compute-0 podman[433455]: 2025-10-07 15:03:14.024896424 +0000 UTC m=+0.153412500 container died cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1513757c0dc2b977b1fdd548db596ed46d5e374535daf9510ec63f4e2cafe524-merged.mount: Deactivated successfully.
Oct 07 15:03:14 compute-0 podman[433455]: 2025-10-07 15:03:14.075554125 +0000 UTC m=+0.204070201 container remove cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 15:03:14 compute-0 systemd[1]: libpod-conmon-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope: Deactivated successfully.
Oct 07 15:03:14 compute-0 podman[433497]: 2025-10-07 15:03:14.232342844 +0000 UTC m=+0.039615989 container create 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:03:14 compute-0 systemd[1]: Started libpod-conmon-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope.
Oct 07 15:03:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:14 compute-0 podman[433497]: 2025-10-07 15:03:14.215870119 +0000 UTC m=+0.023143294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:14 compute-0 podman[433497]: 2025-10-07 15:03:14.3156539 +0000 UTC m=+0.122927105 container init 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:03:14 compute-0 podman[433497]: 2025-10-07 15:03:14.32741337 +0000 UTC m=+0.134686525 container start 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:03:14 compute-0 podman[433497]: 2025-10-07 15:03:14.330613676 +0000 UTC m=+0.137886881 container attach 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 15:03:14 compute-0 ceph-mon[74295]: pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:15 compute-0 brave_kowalevski[433514]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:03:15 compute-0 brave_kowalevski[433514]: --> relative data size: 1.0
Oct 07 15:03:15 compute-0 brave_kowalevski[433514]: --> All data devices are unavailable
Oct 07 15:03:15 compute-0 systemd[1]: libpod-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Deactivated successfully.
Oct 07 15:03:15 compute-0 systemd[1]: libpod-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Consumed 1.041s CPU time.
Oct 07 15:03:15 compute-0 podman[433497]: 2025-10-07 15:03:15.410167076 +0000 UTC m=+1.217440251 container died 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869-merged.mount: Deactivated successfully.
Oct 07 15:03:15 compute-0 podman[433497]: 2025-10-07 15:03:15.51800206 +0000 UTC m=+1.325275215 container remove 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 15:03:15 compute-0 systemd[1]: libpod-conmon-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Deactivated successfully.
Oct 07 15:03:15 compute-0 sudo[433391]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:15 compute-0 sudo[433556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:15 compute-0 sudo[433556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:15 compute-0 sudo[433556]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:15 compute-0 sudo[433581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:03:15 compute-0 sudo[433581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:15 compute-0 sudo[433581]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:15 compute-0 ceph-mon[74295]: pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:15 compute-0 sudo[433606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:15 compute-0 sudo[433606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:15 compute-0 sudo[433606]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:15 compute-0 sudo[433631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:03:15 compute-0 sudo[433631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:15 compute-0 nova_compute[259550]: 2025-10-07 15:03:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.14642648 +0000 UTC m=+0.044773056 container create 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 15:03:16 compute-0 systemd[1]: Started libpod-conmon-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope.
Oct 07 15:03:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.128810545 +0000 UTC m=+0.027157151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.238816945 +0000 UTC m=+0.137163541 container init 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.246054588 +0000 UTC m=+0.144401164 container start 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 15:03:16 compute-0 condescending_burnell[433712]: 167 167
Oct 07 15:03:16 compute-0 systemd[1]: libpod-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope: Deactivated successfully.
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.252459877 +0000 UTC m=+0.150806473 container attach 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.253477274 +0000 UTC m=+0.151823850 container died 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c049bdc8b50039b485382191ff3aa3f1bd3e4a26f99c08026959927102f3bb79-merged.mount: Deactivated successfully.
Oct 07 15:03:16 compute-0 podman[433696]: 2025-10-07 15:03:16.29373347 +0000 UTC m=+0.192080046 container remove 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:03:16 compute-0 systemd[1]: libpod-conmon-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope: Deactivated successfully.
Oct 07 15:03:16 compute-0 podman[433736]: 2025-10-07 15:03:16.470897398 +0000 UTC m=+0.046762159 container create aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 15:03:16 compute-0 systemd[1]: Started libpod-conmon-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope.
Oct 07 15:03:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:16 compute-0 podman[433736]: 2025-10-07 15:03:16.453288982 +0000 UTC m=+0.029153743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:16 compute-0 podman[433736]: 2025-10-07 15:03:16.560119679 +0000 UTC m=+0.135984470 container init aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 07 15:03:16 compute-0 podman[433736]: 2025-10-07 15:03:16.567061533 +0000 UTC m=+0.142926294 container start aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:03:16 compute-0 podman[433736]: 2025-10-07 15:03:16.571193992 +0000 UTC m=+0.147058773 container attach aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:03:16 compute-0 nova_compute[259550]: 2025-10-07 15:03:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:16 compute-0 nova_compute[259550]: 2025-10-07 15:03:16.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:03:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]: {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     "0": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "devices": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "/dev/loop3"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             ],
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_name": "ceph_lv0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_size": "21470642176",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "name": "ceph_lv0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "tags": {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_name": "ceph",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.crush_device_class": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.encrypted": "0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_id": "0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.vdo": "0"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             },
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "vg_name": "ceph_vg0"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         }
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     ],
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     "1": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "devices": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "/dev/loop4"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             ],
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_name": "ceph_lv1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_size": "21470642176",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "name": "ceph_lv1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "tags": {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_name": "ceph",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.crush_device_class": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.encrypted": "0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_id": "1",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.vdo": "0"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             },
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "vg_name": "ceph_vg1"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         }
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     ],
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     "2": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "devices": [
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "/dev/loop5"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             ],
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_name": "ceph_lv2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_size": "21470642176",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "name": "ceph_lv2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "tags": {
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.cluster_name": "ceph",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.crush_device_class": "",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.encrypted": "0",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osd_id": "2",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:                 "ceph.vdo": "0"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             },
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "type": "block",
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:             "vg_name": "ceph_vg2"
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:         }
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]:     ]
Oct 07 15:03:17 compute-0 keen_dubinsky[433752]: }
Oct 07 15:03:17 compute-0 systemd[1]: libpod-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope: Deactivated successfully.
Oct 07 15:03:17 compute-0 podman[433736]: 2025-10-07 15:03:17.404017883 +0000 UTC m=+0.979882674 container died aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1-merged.mount: Deactivated successfully.
Oct 07 15:03:17 compute-0 podman[433736]: 2025-10-07 15:03:17.466732932 +0000 UTC m=+1.042597693 container remove aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:03:17 compute-0 systemd[1]: libpod-conmon-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope: Deactivated successfully.
Oct 07 15:03:17 compute-0 sudo[433631]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:17 compute-0 sudo[433775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:17 compute-0 sudo[433775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:17 compute-0 sudo[433775]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:17 compute-0 sudo[433800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:03:17 compute-0 sudo[433800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:17 compute-0 sudo[433800]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:17 compute-0 sudo[433825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:17 compute-0 sudo[433825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:17 compute-0 sudo[433825]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:17 compute-0 sudo[433850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:03:17 compute-0 sudo[433850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:18 compute-0 ceph-mon[74295]: pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.12033476 +0000 UTC m=+0.044490909 container create 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 15:03:18 compute-0 systemd[1]: Started libpod-conmon-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope.
Oct 07 15:03:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.104142092 +0000 UTC m=+0.028298251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.208097073 +0000 UTC m=+0.132253252 container init 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.215143059 +0000 UTC m=+0.139299208 container start 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.219923295 +0000 UTC m=+0.144079464 container attach 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:03:18 compute-0 friendly_carver[433933]: 167 167
Oct 07 15:03:18 compute-0 systemd[1]: libpod-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope: Deactivated successfully.
Oct 07 15:03:18 compute-0 conmon[433933]: conmon 34d0a59c2f6899ed48ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope/container/memory.events
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.222187276 +0000 UTC m=+0.146343415 container died 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5968717b5562fa77fca4e62b79d1af4dc1a341e03e4d5396692707d27c1a5e7b-merged.mount: Deactivated successfully.
Oct 07 15:03:18 compute-0 podman[433916]: 2025-10-07 15:03:18.25368434 +0000 UTC m=+0.177840489 container remove 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 15:03:18 compute-0 systemd[1]: libpod-conmon-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope: Deactivated successfully.
Oct 07 15:03:18 compute-0 nova_compute[259550]: 2025-10-07 15:03:18.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:18 compute-0 podman[433957]: 2025-10-07 15:03:18.421541941 +0000 UTC m=+0.048160535 container create fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:03:18 compute-0 systemd[1]: Started libpod-conmon-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope.
Oct 07 15:03:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:03:18 compute-0 podman[433957]: 2025-10-07 15:03:18.491247506 +0000 UTC m=+0.117866120 container init fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 07 15:03:18 compute-0 podman[433957]: 2025-10-07 15:03:18.400020322 +0000 UTC m=+0.026638946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:03:18 compute-0 podman[433957]: 2025-10-07 15:03:18.501720464 +0000 UTC m=+0.128339068 container start fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:03:18 compute-0 podman[433957]: 2025-10-07 15:03:18.506444438 +0000 UTC m=+0.133063072 container attach fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:03:18 compute-0 nova_compute[259550]: 2025-10-07 15:03:18.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:19 compute-0 nifty_cannon[433973]: {
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_id": 2,
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "type": "bluestore"
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     },
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_id": 1,
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "type": "bluestore"
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     },
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_id": 0,
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:         "type": "bluestore"
Oct 07 15:03:19 compute-0 nifty_cannon[433973]:     }
Oct 07 15:03:19 compute-0 nifty_cannon[433973]: }
Oct 07 15:03:19 compute-0 systemd[1]: libpod-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Deactivated successfully.
Oct 07 15:03:19 compute-0 systemd[1]: libpod-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Consumed 1.027s CPU time.
Oct 07 15:03:19 compute-0 podman[433957]: 2025-10-07 15:03:19.525734444 +0000 UTC m=+1.152353078 container died fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9-merged.mount: Deactivated successfully.
Oct 07 15:03:19 compute-0 podman[433957]: 2025-10-07 15:03:19.591291479 +0000 UTC m=+1.217910073 container remove fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 15:03:19 compute-0 systemd[1]: libpod-conmon-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Deactivated successfully.
Oct 07 15:03:19 compute-0 sudo[433850]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:03:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:03:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 20b46b08-a93b-42a2-81b4-85d304abd7e6 does not exist
Oct 07 15:03:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev aca6e8d7-169f-4641-a8a2-f08e92cbb8ad does not exist
Oct 07 15:03:19 compute-0 sudo[434018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:03:19 compute-0 sudo[434018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:19 compute-0 sudo[434018]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:19 compute-0 sudo[434043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:03:19 compute-0 sudo[434043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:03:19 compute-0 sudo[434043]: pam_unix(sudo:session): session closed for user root
Oct 07 15:03:20 compute-0 ceph-mon[74295]: pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:20 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:20 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:03:20 compute-0 nova_compute[259550]: 2025-10-07 15:03:20.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:22 compute-0 podman[434068]: 2025-10-07 15:03:22.066201357 +0000 UTC m=+0.051836813 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 07 15:03:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:22 compute-0 podman[434069]: 2025-10-07 15:03:22.13581544 +0000 UTC m=+0.115619451 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 07 15:03:22 compute-0 ceph-mon[74295]: pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:03:22
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'backups']
Oct 07 15:03:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:03:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:03:23 compute-0 nova_compute[259550]: 2025-10-07 15:03:23.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:23 compute-0 nova_compute[259550]: 2025-10-07 15:03:23.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:24 compute-0 ceph-mon[74295]: pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:26 compute-0 ceph-mon[74295]: pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:28 compute-0 nova_compute[259550]: 2025-10-07 15:03:28.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:28 compute-0 ceph-mon[74295]: pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:28 compute-0 nova_compute[259550]: 2025-10-07 15:03:28.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:28 compute-0 nova_compute[259550]: 2025-10-07 15:03:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:03:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:03:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2450312774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:03:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2450312774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.733 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.734 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.735 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:03:29 compute-0 nova_compute[259550]: 2025-10-07 15:03:29.735 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.016 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.017 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.035 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:03:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:03:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216449213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.572 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.579 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.601 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.644 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:03:30 compute-0 nova_compute[259550]: 2025-10-07 15:03:30.645 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:03:30 compute-0 ceph-mon[74295]: pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2216449213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:03:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:03:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:32 compute-0 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:32 compute-0 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:03:32 compute-0 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:03:32 compute-0 nova_compute[259550]: 2025-10-07 15:03:32.673 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:03:32 compute-0 ceph-mon[74295]: pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:03:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:03:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:03:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:03:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:03:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:03:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:03:33 compute-0 nova_compute[259550]: 2025-10-07 15:03:33.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 07 15:03:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 07 15:03:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:03:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:03:33 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 07 15:03:33 compute-0 nova_compute[259550]: 2025-10-07 15:03:33.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:34 compute-0 ceph-mon[74295]: pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:03:34 compute-0 ceph-mon[74295]: osdmap e296: 3 total, 3 up, 3 in
Oct 07 15:03:34 compute-0 nova_compute[259550]: 2025-10-07 15:03:34.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:03:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 614 B/s wr, 56 op/s
Oct 07 15:03:35 compute-0 ceph-mon[74295]: pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 614 B/s wr, 56 op/s
Oct 07 15:03:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 921 B/s wr, 81 op/s
Oct 07 15:03:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:38 compute-0 podman[434159]: 2025-10-07 15:03:38.071866507 +0000 UTC m=+0.066068739 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 15:03:38 compute-0 podman[434160]: 2025-10-07 15:03:38.093002496 +0000 UTC m=+0.083934672 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 07 15:03:38 compute-0 ceph-mon[74295]: pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 921 B/s wr, 81 op/s
Oct 07 15:03:38 compute-0 nova_compute[259550]: 2025-10-07 15:03:38.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:38 compute-0 nova_compute[259550]: 2025-10-07 15:03:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.4 KiB/s wr, 96 op/s
Oct 07 15:03:40 compute-0 ceph-mon[74295]: pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.4 KiB/s wr, 96 op/s
Oct 07 15:03:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.4 KiB/s wr, 92 op/s
Oct 07 15:03:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 07 15:03:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 07 15:03:42 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 07 15:03:42 compute-0 ceph-mon[74295]: pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.4 KiB/s wr, 92 op/s
Oct 07 15:03:42 compute-0 ceph-mon[74295]: osdmap e297: 3 total, 3 up, 3 in
Oct 07 15:03:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 879 B/s wr, 42 op/s
Oct 07 15:03:43 compute-0 nova_compute[259550]: 2025-10-07 15:03:43.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:43 compute-0 nova_compute[259550]: 2025-10-07 15:03:43.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:44 compute-0 ceph-mon[74295]: pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 879 B/s wr, 42 op/s
Oct 07 15:03:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 818 B/s wr, 39 op/s
Oct 07 15:03:46 compute-0 ceph-mon[74295]: pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 818 B/s wr, 39 op/s
Oct 07 15:03:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:03:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:48 compute-0 ceph-mon[74295]: pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:03:48 compute-0 nova_compute[259550]: 2025-10-07 15:03:48.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:48 compute-0 nova_compute[259550]: 2025-10-07 15:03:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:50 compute-0 ceph-mon[74295]: pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.137113) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432137358, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 658, "num_deletes": 252, "total_data_size": 778652, "memory_usage": 791376, "flush_reason": "Manual Compaction"}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432143782, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 531315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63463, "largest_seqno": 64120, "table_properties": {"data_size": 528210, "index_size": 1012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8428, "raw_average_key_size": 21, "raw_value_size": 521618, "raw_average_value_size": 1300, "num_data_blocks": 45, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849385, "oldest_key_time": 1759849385, "file_creation_time": 1759849432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7140 microseconds, and 3095 cpu microseconds.
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.144262) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 531315 bytes OK
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.144385) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145856) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145877) EVENT_LOG_v1 {"time_micros": 1759849432145868, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 775138, prev total WAL file size 775138, number of live WAL files 2.
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.147162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353033' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(518KB)], [149(10MB)]
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432147258, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11861830, "oldest_snapshot_seqno": -1}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8150 keys, 8793005 bytes, temperature: kUnknown
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432239018, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8793005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742871, "index_size": 28645, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 213875, "raw_average_key_size": 26, "raw_value_size": 8601838, "raw_average_value_size": 1055, "num_data_blocks": 1105, "num_entries": 8150, "num_filter_entries": 8150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.239583) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8793005 bytes
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.241422) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(38.9) write-amplify(16.5) OK, records in: 8651, records dropped: 501 output_compression: NoCompression
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.241448) EVENT_LOG_v1 {"time_micros": 1759849432241435, "job": 92, "event": "compaction_finished", "compaction_time_micros": 92040, "compaction_time_cpu_micros": 43291, "output_level": 6, "num_output_files": 1, "total_output_size": 8793005, "num_input_records": 8651, "num_output_records": 8150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432242178, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432245355, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.147005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:03:52 compute-0 ceph-mon[74295]: pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:03:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:03:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:53 compute-0 podman[434200]: 2025-10-07 15:03:53.067254629 +0000 UTC m=+0.052755577 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 15:03:53 compute-0 podman[434201]: 2025-10-07 15:03:53.126767654 +0000 UTC m=+0.111683456 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 07 15:03:53 compute-0 nova_compute[259550]: 2025-10-07 15:03:53.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:53 compute-0 nova_compute[259550]: 2025-10-07 15:03:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:54 compute-0 ceph-mon[74295]: pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:56 compute-0 ceph-mon[74295]: pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:03:58 compute-0 nova_compute[259550]: 2025-10-07 15:03:58.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:58 compute-0 ceph-mon[74295]: pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:03:58 compute-0 nova_compute[259550]: 2025-10-07 15:03:58.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:03:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.102 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.102 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:04:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.103 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:04:00 compute-0 ceph-mon[74295]: pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:02 compute-0 ceph-mon[74295]: pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:03 compute-0 nova_compute[259550]: 2025-10-07 15:04:03.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:03 compute-0 nova_compute[259550]: 2025-10-07 15:04:03.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:04 compute-0 ceph-mon[74295]: pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:06 compute-0 ceph-mon[74295]: pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:08 compute-0 nova_compute[259550]: 2025-10-07 15:04:08.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:08 compute-0 ceph-mon[74295]: pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:08 compute-0 nova_compute[259550]: 2025-10-07 15:04:08.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:09 compute-0 podman[434245]: 2025-10-07 15:04:09.070169355 +0000 UTC m=+0.056249609 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct 07 15:04:09 compute-0 podman[434246]: 2025-10-07 15:04:09.0816854 +0000 UTC m=+0.058999352 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:04:10 compute-0 ceph-mon[74295]: pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:11 compute-0 nova_compute[259550]: 2025-10-07 15:04:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:12 compute-0 ceph-mon[74295]: pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:13 compute-0 nova_compute[259550]: 2025-10-07 15:04:13.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:13 compute-0 nova_compute[259550]: 2025-10-07 15:04:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:14 compute-0 ceph-mon[74295]: pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:14 compute-0 nova_compute[259550]: 2025-10-07 15:04:14.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:14 compute-0 nova_compute[259550]: 2025-10-07 15:04:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:16 compute-0 ceph-mon[74295]: pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:16 compute-0 nova_compute[259550]: 2025-10-07 15:04:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:16 compute-0 nova_compute[259550]: 2025-10-07 15:04:16.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:04:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:17 compute-0 nova_compute[259550]: 2025-10-07 15:04:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:18 compute-0 nova_compute[259550]: 2025-10-07 15:04:18.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:18 compute-0 ceph-mon[74295]: pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:18 compute-0 nova_compute[259550]: 2025-10-07 15:04:18.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:19 compute-0 sudo[434286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:19 compute-0 sudo[434286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:19 compute-0 sudo[434286]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:19 compute-0 sudo[434311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:19 compute-0 sudo[434311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:19 compute-0 sudo[434311]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:19 compute-0 sudo[434336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:19 compute-0 sudo[434336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:20 compute-0 sudo[434336]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:20 compute-0 sudo[434361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:04:20 compute-0 sudo[434361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:20 compute-0 ceph-mon[74295]: pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:20 compute-0 sudo[434361]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:20 compute-0 sudo[434417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:20 compute-0 sudo[434417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:20 compute-0 sudo[434417]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:20 compute-0 sudo[434442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:20 compute-0 sudo[434442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:20 compute-0 sudo[434442]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:20 compute-0 sudo[434467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:20 compute-0 sudo[434467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:20 compute-0 sudo[434467]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:20 compute-0 sudo[434492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 07 15:04:20 compute-0 sudo[434492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:21 compute-0 sudo[434492]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:04:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:21 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:21 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:04:21 compute-0 nova_compute[259550]: 2025-10-07 15:04:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:04:22
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes']
Oct 07 15:04:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:04:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:04:23 compute-0 nova_compute[259550]: 2025-10-07 15:04:23.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:23 compute-0 nova_compute[259550]: 2025-10-07 15:04:23.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:24 compute-0 podman[434535]: 2025-10-07 15:04:24.115992322 +0000 UTC m=+0.085673558 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:04:24 compute-0 podman[434536]: 2025-10-07 15:04:24.154423879 +0000 UTC m=+0.121717922 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 07 15:04:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:26 compute-0 ceph-mon[74295]: pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:26 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:26 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:26 compute-0 sudo[434579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:26 compute-0 sudo[434579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:26 compute-0 sudo[434579]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:26 compute-0 sudo[434604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:26 compute-0 sudo[434604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:26 compute-0 sudo[434604]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:26 compute-0 sudo[434629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:26 compute-0 sudo[434629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:26 compute-0 sudo[434629]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:26 compute-0 sudo[434654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- inventory --format=json-pretty --filter-for-batch
Oct 07 15:04:26 compute-0 sudo[434654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.29446248 +0000 UTC m=+0.027005666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.437436193 +0000 UTC m=+0.169979389 container create 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:04:27 compute-0 systemd[1]: Started libpod-conmon-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope.
Oct 07 15:04:27 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.64285452 +0000 UTC m=+0.375397966 container init 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.65571452 +0000 UTC m=+0.388257676 container start 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:04:27 compute-0 systemd[1]: libpod-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope: Deactivated successfully.
Oct 07 15:04:27 compute-0 busy_curran[434734]: 167 167
Oct 07 15:04:27 compute-0 conmon[434734]: conmon 0d84bd59d58d674abb07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope/container/memory.events
Oct 07 15:04:27 compute-0 ceph-mon[74295]: pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:27 compute-0 ceph-mon[74295]: pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:27 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.72521477 +0000 UTC m=+0.457757976 container attach 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:04:27 compute-0 podman[434718]: 2025-10-07 15:04:27.726383871 +0000 UTC m=+0.458927027 container died 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9aab1a605212a214e59f9022b055ee525e778b28baa4251854a17b3b0374f7a-merged.mount: Deactivated successfully.
Oct 07 15:04:27 compute-0 nova_compute[259550]: 2025-10-07 15:04:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:28 compute-0 podman[434718]: 2025-10-07 15:04:28.059272721 +0000 UTC m=+0.791815877 container remove 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:04:28 compute-0 systemd[1]: libpod-conmon-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope: Deactivated successfully.
Oct 07 15:04:28 compute-0 podman[434760]: 2025-10-07 15:04:28.192198008 +0000 UTC m=+0.022052924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:28 compute-0 podman[434760]: 2025-10-07 15:04:28.290560512 +0000 UTC m=+0.120415408 container create b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 15:04:28 compute-0 nova_compute[259550]: 2025-10-07 15:04:28.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:28 compute-0 systemd[1]: Started libpod-conmon-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope.
Oct 07 15:04:28 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:28 compute-0 podman[434760]: 2025-10-07 15:04:28.465491481 +0000 UTC m=+0.295346397 container init b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:04:28 compute-0 podman[434760]: 2025-10-07 15:04:28.477690333 +0000 UTC m=+0.307545219 container start b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 07 15:04:28 compute-0 podman[434760]: 2025-10-07 15:04:28.528507239 +0000 UTC m=+0.358362195 container attach b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:04:28 compute-0 ceph-mon[74295]: pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:28 compute-0 nova_compute[259550]: 2025-10-07 15:04:28.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:29 compute-0 ceph-mon[74295]: pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:29 compute-0 awesome_borg[434776]: [
Oct 07 15:04:29 compute-0 awesome_borg[434776]:     {
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "available": false,
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "ceph_device": false,
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "lsm_data": {},
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "lvs": [],
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "path": "/dev/sr0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "rejected_reasons": [
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "Has a FileSystem",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "Insufficient space (<5GB)"
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         ],
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         "sys_api": {
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "actuators": null,
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "device_nodes": "sr0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "devname": "sr0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "human_readable_size": "482.00 KB",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "id_bus": "ata",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "model": "QEMU DVD-ROM",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "nr_requests": "2",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "parent": "/dev/sr0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "partitions": {},
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "path": "/dev/sr0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "removable": "1",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "rev": "2.5+",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "ro": "0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "rotational": "0",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "sas_address": "",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "sas_device_handle": "",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "scheduler_mode": "mq-deadline",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "sectors": 0,
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "sectorsize": "2048",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "size": 493568.0,
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "support_discard": "2048",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "type": "disk",
Oct 07 15:04:29 compute-0 awesome_borg[434776]:             "vendor": "QEMU"
Oct 07 15:04:29 compute-0 awesome_borg[434776]:         }
Oct 07 15:04:29 compute-0 awesome_borg[434776]:     }
Oct 07 15:04:29 compute-0 awesome_borg[434776]: ]
Oct 07 15:04:29 compute-0 nova_compute[259550]: 2025-10-07 15:04:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:30 compute-0 systemd[1]: libpod-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Deactivated successfully.
Oct 07 15:04:30 compute-0 systemd[1]: libpod-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Consumed 1.554s CPU time.
Oct 07 15:04:30 compute-0 podman[436985]: 2025-10-07 15:04:30.042739443 +0000 UTC m=+0.024893150 container died b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5-merged.mount: Deactivated successfully.
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:04:30 compute-0 podman[436985]: 2025-10-07 15:04:30.251742554 +0000 UTC m=+0.233896241 container remove b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:04:30 compute-0 systemd[1]: libpod-conmon-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Deactivated successfully.
Oct 07 15:04:30 compute-0 sudo[434654]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:30 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a2b91a4c-6db3-4430-b7ce-4122ba644cff does not exist
Oct 07 15:04:30 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 28ca5ebc-c64a-46de-9ba8-9e78f31f1052 does not exist
Oct 07 15:04:30 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ea3b936b-ffe4-4693-ad20-689f52124aca does not exist
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:04:30 compute-0 sudo[437020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:30 compute-0 sudo[437020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:30 compute-0 sudo[437020]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:30 compute-0 sudo[437045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:30 compute-0 sudo[437045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:30 compute-0 sudo[437045]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:04:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992853080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:04:30 compute-0 sudo[437070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:30 compute-0 sudo[437070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:30 compute-0 sudo[437070]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.643 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:04:30 compute-0 sudo[437097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:04:30 compute-0 sudo[437097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.815 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.817 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.817 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:04:30 compute-0 nova_compute[259550]: 2025-10-07 15:04:30.818 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:04:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.083 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.083 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.143002891 +0000 UTC m=+0.118941419 container create 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.052374133 +0000 UTC m=+0.028312711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.148 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.225 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 15:04:31 compute-0 systemd[1]: Started libpod-conmon-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope.
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.226 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.241 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 15:04:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.262 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.280 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.319052521 +0000 UTC m=+0.294991099 container init 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.330717669 +0000 UTC m=+0.306656197 container start 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:31 compute-0 romantic_khorana[437178]: 167 167
Oct 07 15:04:31 compute-0 systemd[1]: libpod-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope: Deactivated successfully.
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:04:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2992853080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.395270647 +0000 UTC m=+0.371209175 container attach 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.395974626 +0000 UTC m=+0.371913164 container died 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fcf70b9d0bf1ad0356037b6f0d1d5a9e9652cfea17e7b66e7d3562f7ef5bad4-merged.mount: Deactivated successfully.
Oct 07 15:04:31 compute-0 podman[437162]: 2025-10-07 15:04:31.639065639 +0000 UTC m=+0.615004167 container remove 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:04:31 compute-0 systemd[1]: libpod-conmon-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope: Deactivated successfully.
Oct 07 15:04:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:04:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203092699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.823 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:04:31 compute-0 podman[437225]: 2025-10-07 15:04:31.838036215 +0000 UTC m=+0.081893148 container create 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.839 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.840 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:04:31 compute-0 nova_compute[259550]: 2025-10-07 15:04:31.840 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:04:31 compute-0 podman[437225]: 2025-10-07 15:04:31.777980675 +0000 UTC m=+0.021837628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:31 compute-0 systemd[1]: Started libpod-conmon-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope.
Oct 07 15:04:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:31 compute-0 podman[437225]: 2025-10-07 15:04:31.962329184 +0000 UTC m=+0.206186147 container init 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:31 compute-0 podman[437225]: 2025-10-07 15:04:31.970613084 +0000 UTC m=+0.214470017 container start 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:31 compute-0 podman[437225]: 2025-10-07 15:04:31.974763744 +0000 UTC m=+0.218620677 container attach 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 15:04:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:32 compute-0 ceph-mon[74295]: pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1203092699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:04:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:04:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:04:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:04:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:04:32 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:04:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:04:33 compute-0 focused_keldysh[437243]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:04:33 compute-0 focused_keldysh[437243]: --> relative data size: 1.0
Oct 07 15:04:33 compute-0 focused_keldysh[437243]: --> All data devices are unavailable
Oct 07 15:04:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:33 compute-0 systemd[1]: libpod-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Deactivated successfully.
Oct 07 15:04:33 compute-0 systemd[1]: libpod-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Consumed 1.063s CPU time.
Oct 07 15:04:33 compute-0 podman[437225]: 2025-10-07 15:04:33.092975087 +0000 UTC m=+1.336832070 container died 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185-merged.mount: Deactivated successfully.
Oct 07 15:04:33 compute-0 podman[437225]: 2025-10-07 15:04:33.151076575 +0000 UTC m=+1.394933518 container remove 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:33 compute-0 systemd[1]: libpod-conmon-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Deactivated successfully.
Oct 07 15:04:33 compute-0 sudo[437097]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:33 compute-0 sudo[437289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:33 compute-0 sudo[437289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:33 compute-0 sudo[437289]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:33 compute-0 sudo[437314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:33 compute-0 sudo[437314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:33 compute-0 sudo[437314]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:33 compute-0 sudo[437339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:33 compute-0 sudo[437339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:33 compute-0 sudo[437339]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:33 compute-0 sudo[437364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:04:33 compute-0 sudo[437364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:04:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.717030432 +0000 UTC m=+0.042679410 container create 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:33 compute-0 systemd[1]: Started libpod-conmon-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope.
Oct 07 15:04:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.699537 +0000 UTC m=+0.025186008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.796963168 +0000 UTC m=+0.122612166 container init 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.805906095 +0000 UTC m=+0.131555073 container start 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.809917321 +0000 UTC m=+0.135566319 container attach 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:04:33 compute-0 musing_kowalevski[437445]: 167 167
Oct 07 15:04:33 compute-0 systemd[1]: libpod-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope: Deactivated successfully.
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.814167474 +0000 UTC m=+0.139816472 container died 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 15:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c073287ce4f5a45282265fdb8e3f044cb1ae588ba17596314e8ad2d1d20b81b6-merged.mount: Deactivated successfully.
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.841 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:33 compute-0 podman[437429]: 2025-10-07 15:04:33.850710161 +0000 UTC m=+0.176359149 container remove 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:04:33 compute-0 nova_compute[259550]: 2025-10-07 15:04:33.860 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:04:33 compute-0 systemd[1]: libpod-conmon-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope: Deactivated successfully.
Oct 07 15:04:34 compute-0 podman[437469]: 2025-10-07 15:04:34.028730502 +0000 UTC m=+0.043450361 container create b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:04:34 compute-0 systemd[1]: Started libpod-conmon-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope.
Oct 07 15:04:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:34 compute-0 podman[437469]: 2025-10-07 15:04:34.011819004 +0000 UTC m=+0.026538883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:34 compute-0 podman[437469]: 2025-10-07 15:04:34.121428315 +0000 UTC m=+0.136148224 container init b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:04:34 compute-0 podman[437469]: 2025-10-07 15:04:34.129870028 +0000 UTC m=+0.144589927 container start b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:04:34 compute-0 podman[437469]: 2025-10-07 15:04:34.133801862 +0000 UTC m=+0.148521721 container attach b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:04:34 compute-0 ceph-mon[74295]: pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]: {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     "0": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "devices": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "/dev/loop3"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             ],
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_name": "ceph_lv0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_size": "21470642176",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "name": "ceph_lv0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "tags": {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_name": "ceph",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.crush_device_class": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.encrypted": "0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_id": "0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.vdo": "0"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             },
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "vg_name": "ceph_vg0"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         }
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     ],
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     "1": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "devices": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "/dev/loop4"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             ],
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_name": "ceph_lv1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_size": "21470642176",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "name": "ceph_lv1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "tags": {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_name": "ceph",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.crush_device_class": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.encrypted": "0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_id": "1",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.vdo": "0"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             },
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "vg_name": "ceph_vg1"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         }
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     ],
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     "2": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "devices": [
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "/dev/loop5"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             ],
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_name": "ceph_lv2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_size": "21470642176",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "name": "ceph_lv2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "tags": {
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.cluster_name": "ceph",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.crush_device_class": "",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.encrypted": "0",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osd_id": "2",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:                 "ceph.vdo": "0"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             },
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "type": "block",
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:             "vg_name": "ceph_vg2"
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:         }
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]:     ]
Oct 07 15:04:34 compute-0 vigilant_lederberg[437485]: }
Oct 07 15:04:34 compute-0 systemd[1]: libpod-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope: Deactivated successfully.
Oct 07 15:04:34 compute-0 podman[437494]: 2025-10-07 15:04:34.93931131 +0000 UTC m=+0.023285427 container died b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034-merged.mount: Deactivated successfully.
Oct 07 15:04:34 compute-0 podman[437494]: 2025-10-07 15:04:34.997029528 +0000 UTC m=+0.081003625 container remove b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 15:04:35 compute-0 systemd[1]: libpod-conmon-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope: Deactivated successfully.
Oct 07 15:04:35 compute-0 sudo[437364]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:35 compute-0 sudo[437509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:35 compute-0 sudo[437509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:35 compute-0 sudo[437509]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:35 compute-0 sudo[437534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:04:35 compute-0 sudo[437534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:35 compute-0 sudo[437534]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:35 compute-0 sudo[437559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:35 compute-0 sudo[437559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:35 compute-0 sudo[437559]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:35 compute-0 sudo[437584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:04:35 compute-0 sudo[437584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.608412748 +0000 UTC m=+0.042582188 container create ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 15:04:35 compute-0 systemd[1]: Started libpod-conmon-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope.
Oct 07 15:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.590875564 +0000 UTC m=+0.025044994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.694405714 +0000 UTC m=+0.128575154 container init ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.700760272 +0000 UTC m=+0.134929682 container start ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.704132882 +0000 UTC m=+0.138302322 container attach ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:04:35 compute-0 eloquent_edison[437664]: 167 167
Oct 07 15:04:35 compute-0 systemd[1]: libpod-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope: Deactivated successfully.
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.709372 +0000 UTC m=+0.143541430 container died ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 15:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e46ddaa8900e981bd2d2368b1d39a598bb5ca18d8deccc8be1d87674be943b-merged.mount: Deactivated successfully.
Oct 07 15:04:35 compute-0 podman[437648]: 2025-10-07 15:04:35.748383372 +0000 UTC m=+0.182552802 container remove ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 15:04:35 compute-0 systemd[1]: libpod-conmon-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope: Deactivated successfully.
Oct 07 15:04:35 compute-0 podman[437689]: 2025-10-07 15:04:35.928334115 +0000 UTC m=+0.038062748 container create 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:04:35 compute-0 systemd[1]: Started libpod-conmon-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope.
Oct 07 15:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:04:36 compute-0 podman[437689]: 2025-10-07 15:04:35.912518656 +0000 UTC m=+0.022247279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:04:36 compute-0 podman[437689]: 2025-10-07 15:04:36.010119349 +0000 UTC m=+0.119848002 container init 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 07 15:04:36 compute-0 podman[437689]: 2025-10-07 15:04:36.016453717 +0000 UTC m=+0.126182330 container start 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:36 compute-0 podman[437689]: 2025-10-07 15:04:36.02301095 +0000 UTC m=+0.132739593 container attach 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 15:04:36 compute-0 ceph-mon[74295]: pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:36 compute-0 nova_compute[259550]: 2025-10-07 15:04:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]: {
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_id": 2,
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "type": "bluestore"
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     },
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_id": 1,
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "type": "bluestore"
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     },
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_id": 0,
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:         "type": "bluestore"
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]:     }
Oct 07 15:04:37 compute-0 compassionate_nightingale[437705]: }
Oct 07 15:04:37 compute-0 systemd[1]: libpod-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Deactivated successfully.
Oct 07 15:04:37 compute-0 systemd[1]: libpod-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Consumed 1.020s CPU time.
Oct 07 15:04:37 compute-0 podman[437689]: 2025-10-07 15:04:37.033668307 +0000 UTC m=+1.143396920 container died 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 15:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204-merged.mount: Deactivated successfully.
Oct 07 15:04:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:37 compute-0 podman[437689]: 2025-10-07 15:04:37.084876863 +0000 UTC m=+1.194605466 container remove 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:04:37 compute-0 systemd[1]: libpod-conmon-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Deactivated successfully.
Oct 07 15:04:37 compute-0 sudo[437584]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:04:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:04:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f0bc7d41-d21e-47fb-bede-63235bccdf20 does not exist
Oct 07 15:04:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bfb93889-712f-4253-8cd0-e19b7b15d7c2 does not exist
Oct 07 15:04:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:37 compute-0 sudo[437749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:04:37 compute-0 sudo[437749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:37 compute-0 sudo[437749]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:37 compute-0 sudo[437774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:04:37 compute-0 sudo[437774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:04:37 compute-0 sudo[437774]: pam_unix(sudo:session): session closed for user root
Oct 07 15:04:38 compute-0 ceph-mon[74295]: pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:04:38 compute-0 nova_compute[259550]: 2025-10-07 15:04:38.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:38 compute-0 nova_compute[259550]: 2025-10-07 15:04:38.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:40 compute-0 podman[437799]: 2025-10-07 15:04:40.092818828 +0000 UTC m=+0.068789132 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:04:40 compute-0 podman[437800]: 2025-10-07 15:04:40.119030762 +0000 UTC m=+0.094662567 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 15:04:40 compute-0 ceph-mon[74295]: pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:42 compute-0 ceph-mon[74295]: pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:43 compute-0 nova_compute[259550]: 2025-10-07 15:04:43.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:43 compute-0 nova_compute[259550]: 2025-10-07 15:04:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:44 compute-0 ceph-mon[74295]: pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:46 compute-0 ceph-mon[74295]: pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:48 compute-0 ceph-mon[74295]: pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:48 compute-0 nova_compute[259550]: 2025-10-07 15:04:48.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:48 compute-0 nova_compute[259550]: 2025-10-07 15:04:48.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:50 compute-0 ceph-mon[74295]: pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:52 compute-0 ceph-mon[74295]: pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:04:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:04:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:53 compute-0 nova_compute[259550]: 2025-10-07 15:04:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:53 compute-0 nova_compute[259550]: 2025-10-07 15:04:53.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:54 compute-0 ceph-mon[74295]: pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:55 compute-0 podman[437841]: 2025-10-07 15:04:55.101228405 +0000 UTC m=+0.080131942 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:04:55 compute-0 podman[437842]: 2025-10-07 15:04:55.147955722 +0000 UTC m=+0.120695046 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:04:56 compute-0 ceph-mon[74295]: pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:04:58 compute-0 ceph-mon[74295]: pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:04:58 compute-0 nova_compute[259550]: 2025-10-07 15:04:58.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:58 compute-0 nova_compute[259550]: 2025-10-07 15:04:58.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:04:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:05:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:05:00 compute-0 ceph-mon[74295]: pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:02 compute-0 ceph-mon[74295]: pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:03 compute-0 nova_compute[259550]: 2025-10-07 15:05:03.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:03 compute-0 nova_compute[259550]: 2025-10-07 15:05:03.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:04 compute-0 ceph-mon[74295]: pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:06 compute-0 ceph-mon[74295]: pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:08 compute-0 ceph-mon[74295]: pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:08 compute-0 nova_compute[259550]: 2025-10-07 15:05:08.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:08 compute-0 nova_compute[259550]: 2025-10-07 15:05:08.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:10 compute-0 ceph-mon[74295]: pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:11 compute-0 podman[437884]: 2025-10-07 15:05:11.088701023 +0000 UTC m=+0.072429768 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 07 15:05:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:11 compute-0 podman[437885]: 2025-10-07 15:05:11.108913297 +0000 UTC m=+0.089719175 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 07 15:05:11 compute-0 nova_compute[259550]: 2025-10-07 15:05:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:12 compute-0 ceph-mon[74295]: pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:13 compute-0 nova_compute[259550]: 2025-10-07 15:05:13.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:13 compute-0 nova_compute[259550]: 2025-10-07 15:05:13.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:14 compute-0 ceph-mon[74295]: pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:15 compute-0 nova_compute[259550]: 2025-10-07 15:05:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:16 compute-0 ceph-mon[74295]: pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:16 compute-0 nova_compute[259550]: 2025-10-07 15:05:16.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:17 compute-0 nova_compute[259550]: 2025-10-07 15:05:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:18 compute-0 nova_compute[259550]: 2025-10-07 15:05:18.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:18 compute-0 ceph-mon[74295]: pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:18 compute-0 nova_compute[259550]: 2025-10-07 15:05:18.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:18 compute-0 nova_compute[259550]: 2025-10-07 15:05:18.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:18 compute-0 nova_compute[259550]: 2025-10-07 15:05:18.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:05:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:20 compute-0 ceph-mon[74295]: pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:22 compute-0 ceph-mon[74295]: pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:05:22
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta']
Oct 07 15:05:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:05:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:05:23 compute-0 nova_compute[259550]: 2025-10-07 15:05:23.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:23 compute-0 nova_compute[259550]: 2025-10-07 15:05:23.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:23 compute-0 nova_compute[259550]: 2025-10-07 15:05:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:24 compute-0 ceph-mon[74295]: pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:26 compute-0 podman[437925]: 2025-10-07 15:05:26.060802799 +0000 UTC m=+0.051559526 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:05:26 compute-0 podman[437926]: 2025-10-07 15:05:26.147096133 +0000 UTC m=+0.123188191 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 15:05:26 compute-0 ceph-mon[74295]: pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:28 compute-0 nova_compute[259550]: 2025-10-07 15:05:28.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:28 compute-0 ceph-mon[74295]: pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:05:28 compute-0 nova_compute[259550]: 2025-10-07 15:05:28.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:30 compute-0 ceph-mon[74295]: pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:31 compute-0 nova_compute[259550]: 2025-10-07 15:05:31.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673196036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.485 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:05:32 compute-0 ceph-mon[74295]: pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/673196036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.653 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.654 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3612MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.655 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.655 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.735 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.735 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:05:32 compute-0 nova_compute[259550]: 2025-10-07 15:05:32.758 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:05:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:05:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:05:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:05:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:05:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114457313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.222 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.229 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.246 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.248 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.249 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:05:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:05:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2114457313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:05:33 compute-0 nova_compute[259550]: 2025-10-07 15:05:33.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:34 compute-0 nova_compute[259550]: 2025-10-07 15:05:34.250 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:34 compute-0 nova_compute[259550]: 2025-10-07 15:05:34.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:05:34 compute-0 nova_compute[259550]: 2025-10-07 15:05:34.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:05:34 compute-0 nova_compute[259550]: 2025-10-07 15:05:34.295 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:05:34 compute-0 ceph-mon[74295]: pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 07 15:05:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 07 15:05:36 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 07 15:05:36 compute-0 ceph-mon[74295]: pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 07 15:05:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 07 15:05:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:37 compute-0 sudo[438014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:37 compute-0 sudo[438014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 sudo[438014]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:37 compute-0 sudo[438039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:05:37 compute-0 sudo[438039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 sudo[438039]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:37 compute-0 sudo[438064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:37 compute-0 sudo[438064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 sudo[438064]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:37 compute-0 sudo[438089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 15:05:37 compute-0 sudo[438089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 ceph-mon[74295]: osdmap e298: 3 total, 3 up, 3 in
Oct 07 15:05:37 compute-0 sudo[438089]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:05:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:05:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:37 compute-0 sudo[438134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:37 compute-0 sudo[438134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 sudo[438134]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:37 compute-0 sudo[438159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:05:37 compute-0 sudo[438159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:37 compute-0 sudo[438159]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 sudo[438184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:38 compute-0 sudo[438184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 sudo[438184]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 sudo[438209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:05:38 compute-0 sudo[438209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 nova_compute[259550]: 2025-10-07 15:05:38.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:38 compute-0 ceph-mon[74295]: pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 07 15:05:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:38 compute-0 sudo[438209]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 46d7502f-71d3-411a-a9b0-b689569df79d does not exist
Oct 07 15:05:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a11df85f-6749-4c0c-a51c-d45b7d1922f8 does not exist
Oct 07 15:05:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bb97a492-c32f-4672-aaf4-b9488938bdeb does not exist
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:05:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:05:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:05:38 compute-0 sudo[438265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:38 compute-0 sudo[438265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 sudo[438265]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 sudo[438290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:05:38 compute-0 sudo[438290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 sudo[438290]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 sudo[438315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:38 compute-0 sudo[438315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 sudo[438315]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:38 compute-0 nova_compute[259550]: 2025-10-07 15:05:38.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:38 compute-0 sudo[438340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:05:38 compute-0 sudo[438340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:38 compute-0 nova_compute[259550]: 2025-10-07 15:05:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:05:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.285608412 +0000 UTC m=+0.046139042 container create aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 15:05:39 compute-0 systemd[1]: Started libpod-conmon-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope.
Oct 07 15:05:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.265666005 +0000 UTC m=+0.026196655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.37921752 +0000 UTC m=+0.139748230 container init aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.386148943 +0000 UTC m=+0.146679573 container start aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.388884826 +0000 UTC m=+0.149415466 container attach aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:05:39 compute-0 funny_boyd[438421]: 167 167
Oct 07 15:05:39 compute-0 systemd[1]: libpod-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope: Deactivated successfully.
Oct 07 15:05:39 compute-0 conmon[438421]: conmon aa25eafe6fe64df5d33e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope/container/memory.events
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.39509923 +0000 UTC m=+0.155629860 container died aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 15:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b25a29ff388276b1fc6bf0ceca75683ff340a89dbc3a8eb4ec1677d4fc2d1a5f-merged.mount: Deactivated successfully.
Oct 07 15:05:39 compute-0 podman[438405]: 2025-10-07 15:05:39.439260829 +0000 UTC m=+0.199791499 container remove aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 15:05:39 compute-0 systemd[1]: libpod-conmon-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope: Deactivated successfully.
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:05:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:05:39 compute-0 podman[438444]: 2025-10-07 15:05:39.595625198 +0000 UTC m=+0.045928307 container create 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:05:39 compute-0 systemd[1]: Started libpod-conmon-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope.
Oct 07 15:05:39 compute-0 podman[438444]: 2025-10-07 15:05:39.577951249 +0000 UTC m=+0.028254368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:39 compute-0 podman[438444]: 2025-10-07 15:05:39.706745788 +0000 UTC m=+0.157048967 container init 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:05:39 compute-0 podman[438444]: 2025-10-07 15:05:39.719365082 +0000 UTC m=+0.169668181 container start 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:05:39 compute-0 podman[438444]: 2025-10-07 15:05:39.723649795 +0000 UTC m=+0.173952934 container attach 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:05:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 07 15:05:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 07 15:05:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 07 15:05:40 compute-0 ceph-mon[74295]: pgmap v3116: 305 pgs: 305 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct 07 15:05:40 compute-0 keen_lehmann[438460]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:05:40 compute-0 keen_lehmann[438460]: --> relative data size: 1.0
Oct 07 15:05:40 compute-0 keen_lehmann[438460]: --> All data devices are unavailable
Oct 07 15:05:40 compute-0 systemd[1]: libpod-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Deactivated successfully.
Oct 07 15:05:40 compute-0 systemd[1]: libpod-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Consumed 1.017s CPU time.
Oct 07 15:05:40 compute-0 podman[438489]: 2025-10-07 15:05:40.842532716 +0000 UTC m=+0.029744568 container died 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401-merged.mount: Deactivated successfully.
Oct 07 15:05:40 compute-0 podman[438489]: 2025-10-07 15:05:40.908553203 +0000 UTC m=+0.095765025 container remove 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:05:40 compute-0 systemd[1]: libpod-conmon-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Deactivated successfully.
Oct 07 15:05:40 compute-0 sudo[438340]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:41 compute-0 sudo[438504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:41 compute-0 sudo[438504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:41 compute-0 sudo[438504]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:41 compute-0 sudo[438529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:05:41 compute-0 sudo[438529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:41 compute-0 sudo[438529]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Oct 07 15:05:41 compute-0 sudo[438555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:41 compute-0 sudo[438555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:41 compute-0 sudo[438555]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:41 compute-0 podman[438554]: 2025-10-07 15:05:41.2201306 +0000 UTC m=+0.093125986 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:05:41 compute-0 sudo[438602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:05:41 compute-0 sudo[438602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:41 compute-0 podman[438589]: 2025-10-07 15:05:41.254848049 +0000 UTC m=+0.076839235 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid)
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.552481205 +0000 UTC m=+0.045445433 container create dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 07 15:05:41 compute-0 systemd[1]: Started libpod-conmon-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope.
Oct 07 15:05:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.62523231 +0000 UTC m=+0.118196568 container init dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.532815135 +0000 UTC m=+0.025779353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.63654164 +0000 UTC m=+0.129505868 container start dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.639695424 +0000 UTC m=+0.132659682 container attach dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:05:41 compute-0 friendly_leavitt[438699]: 167 167
Oct 07 15:05:41 compute-0 systemd[1]: libpod-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope: Deactivated successfully.
Oct 07 15:05:41 compute-0 conmon[438699]: conmon dbb7dd7ce9ca2f56ca19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope/container/memory.events
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.643448633 +0000 UTC m=+0.136412851 container died dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 15:05:41 compute-0 ceph-mon[74295]: osdmap e299: 3 total, 3 up, 3 in
Oct 07 15:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a6d7cfff79f72f339f0532b8e13800d47afe7bf70cd7eb26530e76340f5da0f-merged.mount: Deactivated successfully.
Oct 07 15:05:41 compute-0 podman[438683]: 2025-10-07 15:05:41.692675955 +0000 UTC m=+0.185640173 container remove dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:05:41 compute-0 systemd[1]: libpod-conmon-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope: Deactivated successfully.
Oct 07 15:05:41 compute-0 podman[438722]: 2025-10-07 15:05:41.890862601 +0000 UTC m=+0.064645572 container create ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 07 15:05:41 compute-0 systemd[1]: Started libpod-conmon-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope.
Oct 07 15:05:41 compute-0 podman[438722]: 2025-10-07 15:05:41.869357852 +0000 UTC m=+0.043140823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:42 compute-0 podman[438722]: 2025-10-07 15:05:42.01210701 +0000 UTC m=+0.185889981 container init ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:05:42 compute-0 podman[438722]: 2025-10-07 15:05:42.025532035 +0000 UTC m=+0.199315006 container start ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:05:42 compute-0 podman[438722]: 2025-10-07 15:05:42.029958492 +0000 UTC m=+0.203741463 container attach ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:05:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 07 15:05:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 07 15:05:42 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 07 15:05:42 compute-0 ceph-mon[74295]: pgmap v3118: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Oct 07 15:05:42 compute-0 ceph-mon[74295]: osdmap e300: 3 total, 3 up, 3 in
Oct 07 15:05:42 compute-0 inspiring_wu[438738]: {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     "0": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "devices": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "/dev/loop3"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             ],
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_name": "ceph_lv0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_size": "21470642176",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "name": "ceph_lv0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "tags": {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_name": "ceph",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.crush_device_class": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.encrypted": "0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_id": "0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.vdo": "0"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             },
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "vg_name": "ceph_vg0"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         }
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     ],
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     "1": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "devices": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "/dev/loop4"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             ],
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_name": "ceph_lv1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_size": "21470642176",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "name": "ceph_lv1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "tags": {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_name": "ceph",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.crush_device_class": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.encrypted": "0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_id": "1",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.vdo": "0"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             },
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "vg_name": "ceph_vg1"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         }
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     ],
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     "2": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "devices": [
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "/dev/loop5"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             ],
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_name": "ceph_lv2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_size": "21470642176",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "name": "ceph_lv2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "tags": {
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.cluster_name": "ceph",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.crush_device_class": "",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.encrypted": "0",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osd_id": "2",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:                 "ceph.vdo": "0"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             },
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "type": "block",
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:             "vg_name": "ceph_vg2"
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:         }
Oct 07 15:05:42 compute-0 inspiring_wu[438738]:     ]
Oct 07 15:05:42 compute-0 inspiring_wu[438738]: }
Oct 07 15:05:42 compute-0 systemd[1]: libpod-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope: Deactivated successfully.
Oct 07 15:05:42 compute-0 podman[438747]: 2025-10-07 15:05:42.912372985 +0000 UTC m=+0.040194345 container died ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 15:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa-merged.mount: Deactivated successfully.
Oct 07 15:05:42 compute-0 podman[438747]: 2025-10-07 15:05:42.970150814 +0000 UTC m=+0.097972144 container remove ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 15:05:42 compute-0 systemd[1]: libpod-conmon-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope: Deactivated successfully.
Oct 07 15:05:43 compute-0 sudo[438602]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:43 compute-0 sudo[438762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:43 compute-0 sudo[438762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:43 compute-0 sudo[438762]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Oct 07 15:05:43 compute-0 sudo[438787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:05:43 compute-0 sudo[438787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:43 compute-0 sudo[438787]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:43 compute-0 sudo[438812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:43 compute-0 sudo[438812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:43 compute-0 sudo[438812]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:43 compute-0 sudo[438837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:05:43 compute-0 sudo[438837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:43 compute-0 nova_compute[259550]: 2025-10-07 15:05:43.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.674875624 +0000 UTC m=+0.040037550 container create 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:05:43 compute-0 systemd[1]: Started libpod-conmon-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope.
Oct 07 15:05:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.660041042 +0000 UTC m=+0.025202878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.757726457 +0000 UTC m=+0.122888323 container init 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.767323731 +0000 UTC m=+0.132485547 container start 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.771005849 +0000 UTC m=+0.136167705 container attach 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:05:43 compute-0 vigilant_lichterman[438918]: 167 167
Oct 07 15:05:43 compute-0 systemd[1]: libpod-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope: Deactivated successfully.
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.775376284 +0000 UTC m=+0.140538140 container died 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee94a2b601bc250eba7f82ed24b76de0f15eaacc08b6036d2890259885d8f03b-merged.mount: Deactivated successfully.
Oct 07 15:05:43 compute-0 podman[438902]: 2025-10-07 15:05:43.824123564 +0000 UTC m=+0.189285410 container remove 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:05:43 compute-0 systemd[1]: libpod-conmon-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope: Deactivated successfully.
Oct 07 15:05:43 compute-0 nova_compute[259550]: 2025-10-07 15:05:43.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:44 compute-0 podman[438944]: 2025-10-07 15:05:44.002531176 +0000 UTC m=+0.053020005 container create d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:05:44 compute-0 systemd[1]: Started libpod-conmon-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope.
Oct 07 15:05:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:05:44 compute-0 podman[438944]: 2025-10-07 15:05:44.076874123 +0000 UTC m=+0.127362982 container init d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 15:05:44 compute-0 podman[438944]: 2025-10-07 15:05:43.987097577 +0000 UTC m=+0.037586426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:05:44 compute-0 podman[438944]: 2025-10-07 15:05:44.084369612 +0000 UTC m=+0.134858441 container start d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:05:44 compute-0 podman[438944]: 2025-10-07 15:05:44.08882485 +0000 UTC m=+0.139313699 container attach d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 15:05:44 compute-0 ceph-mon[74295]: pgmap v3120: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Oct 07 15:05:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 4.9 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 56 op/s
Oct 07 15:05:45 compute-0 pedantic_kare[438960]: {
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_id": 2,
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "type": "bluestore"
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     },
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_id": 1,
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "type": "bluestore"
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     },
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_id": 0,
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:         "type": "bluestore"
Oct 07 15:05:45 compute-0 pedantic_kare[438960]:     }
Oct 07 15:05:45 compute-0 pedantic_kare[438960]: }
Oct 07 15:05:45 compute-0 systemd[1]: libpod-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Deactivated successfully.
Oct 07 15:05:45 compute-0 podman[438944]: 2025-10-07 15:05:45.147370544 +0000 UTC m=+1.197859383 container died d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:05:45 compute-0 systemd[1]: libpod-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Consumed 1.070s CPU time.
Oct 07 15:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a-merged.mount: Deactivated successfully.
Oct 07 15:05:45 compute-0 podman[438944]: 2025-10-07 15:05:45.201317181 +0000 UTC m=+1.251806010 container remove d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:05:45 compute-0 systemd[1]: libpod-conmon-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Deactivated successfully.
Oct 07 15:05:45 compute-0 sudo[438837]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:05:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:05:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bc613f6a-b458-4b41-b0ca-01fef27065de does not exist
Oct 07 15:05:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0f395372-2b25-46e7-ae63-13881ad4ddc4 does not exist
Oct 07 15:05:45 compute-0 sudo[439007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:05:45 compute-0 sudo[439007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:45 compute-0 sudo[439007]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:45 compute-0 sudo[439032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:05:45 compute-0 sudo[439032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:05:45 compute-0 sudo[439032]: pam_unix(sudo:session): session closed for user root
Oct 07 15:05:46 compute-0 ceph-mon[74295]: pgmap v3121: 305 pgs: 305 active+clean; 4.9 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 56 op/s
Oct 07 15:05:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:05:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Oct 07 15:05:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:48 compute-0 ceph-mon[74295]: pgmap v3122: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Oct 07 15:05:48 compute-0 nova_compute[259550]: 2025-10-07 15:05:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:48 compute-0 nova_compute[259550]: 2025-10-07 15:05:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Oct 07 15:05:50 compute-0 ceph-mon[74295]: pgmap v3123: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Oct 07 15:05:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 07 15:05:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 07 15:05:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 07 15:05:52 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 07 15:05:52 compute-0 ceph-mon[74295]: pgmap v3124: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 07 15:05:52 compute-0 ceph-mon[74295]: osdmap e301: 3 total, 3 up, 3 in
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:05:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:05:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 07 15:05:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 07 15:05:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 07 15:05:53 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 07 15:05:53 compute-0 nova_compute[259550]: 2025-10-07 15:05:53.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:53 compute-0 nova_compute[259550]: 2025-10-07 15:05:53.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:54 compute-0 ceph-mon[74295]: pgmap v3126: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 07 15:05:54 compute-0 ceph-mon[74295]: osdmap e302: 3 total, 3 up, 3 in
Oct 07 15:05:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct 07 15:05:56 compute-0 ceph-mon[74295]: pgmap v3128: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct 07 15:05:57 compute-0 podman[439057]: 2025-10-07 15:05:57.06926554 +0000 UTC m=+0.053783966 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct 07 15:05:57 compute-0 podman[439058]: 2025-10-07 15:05:57.095826159 +0000 UTC m=+0.080389446 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 07 15:05:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 07 15:05:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:05:58 compute-0 ceph-mon[74295]: pgmap v3129: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 07 15:05:58 compute-0 nova_compute[259550]: 2025-10-07 15:05:58.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:58 compute-0 nova_compute[259550]: 2025-10-07 15:05:58.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:05:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 07 15:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:06:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:06:00 compute-0 ceph-mon[74295]: pgmap v3130: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 07 15:06:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Oct 07 15:06:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:02 compute-0 ceph-mon[74295]: pgmap v3131: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Oct 07 15:06:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 07 15:06:03 compute-0 nova_compute[259550]: 2025-10-07 15:06:03.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:03 compute-0 nova_compute[259550]: 2025-10-07 15:06:03.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:04 compute-0 ceph-mon[74295]: pgmap v3132: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 07 15:06:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 07 15:06:06 compute-0 ceph-mon[74295]: pgmap v3133: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 07 15:06:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 07 15:06:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:08 compute-0 ceph-mon[74295]: pgmap v3134: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 07 15:06:08 compute-0 nova_compute[259550]: 2025-10-07 15:06:08.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:08 compute-0 nova_compute[259550]: 2025-10-07 15:06:08.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:10 compute-0 ceph-mon[74295]: pgmap v3135: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:12 compute-0 podman[439103]: 2025-10-07 15:06:12.080097939 +0000 UTC m=+0.060309947 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 15:06:12 compute-0 podman[439104]: 2025-10-07 15:06:12.110507339 +0000 UTC m=+0.088296323 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:06:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:12 compute-0 ceph-mon[74295]: pgmap v3136: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:12 compute-0 nova_compute[259550]: 2025-10-07 15:06:12.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:13 compute-0 nova_compute[259550]: 2025-10-07 15:06:13.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:13 compute-0 nova_compute[259550]: 2025-10-07 15:06:13.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:14 compute-0 ceph-mon[74295]: pgmap v3137: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:16 compute-0 ceph-mon[74295]: pgmap v3138: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:17 compute-0 nova_compute[259550]: 2025-10-07 15:06:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:18 compute-0 ceph-mon[74295]: pgmap v3139: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:18 compute-0 nova_compute[259550]: 2025-10-07 15:06:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:18 compute-0 nova_compute[259550]: 2025-10-07 15:06:18.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:18 compute-0 nova_compute[259550]: 2025-10-07 15:06:18.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:19 compute-0 nova_compute[259550]: 2025-10-07 15:06:19.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:20 compute-0 ceph-mon[74295]: pgmap v3140: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:20 compute-0 nova_compute[259550]: 2025-10-07 15:06:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:20 compute-0 nova_compute[259550]: 2025-10-07 15:06:20.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:06:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:22 compute-0 ceph-mon[74295]: pgmap v3141: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:06:22
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms']
Oct 07 15:06:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:06:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:06:23 compute-0 nova_compute[259550]: 2025-10-07 15:06:23.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:23 compute-0 nova_compute[259550]: 2025-10-07 15:06:23.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:24 compute-0 ceph-mon[74295]: pgmap v3142: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:24 compute-0 nova_compute[259550]: 2025-10-07 15:06:24.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:26 compute-0 ceph-mon[74295]: pgmap v3143: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:27 compute-0 nova_compute[259550]: 2025-10-07 15:06:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:28 compute-0 podman[439142]: 2025-10-07 15:06:28.088081649 +0000 UTC m=+0.077077398 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 15:06:28 compute-0 podman[439141]: 2025-10-07 15:06:28.094096157 +0000 UTC m=+0.086786344 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 15:06:28 compute-0 nova_compute[259550]: 2025-10-07 15:06:28.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:28 compute-0 ceph-mon[74295]: pgmap v3144: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:28 compute-0 nova_compute[259550]: 2025-10-07 15:06:28.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:30 compute-0 ceph-mon[74295]: pgmap v3145: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:31 compute-0 nova_compute[259550]: 2025-10-07 15:06:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.021 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:06:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515539142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.477 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.648 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:06:32 compute-0 ceph-mon[74295]: pgmap v3146: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3515539142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.737 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.738 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:06:32 compute-0 nova_compute[259550]: 2025-10-07 15:06:32.760 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:06:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:06:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:06:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:06:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:06:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:06:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145620044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.245 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.255 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.275 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.278 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.279 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:06:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:06:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4145620044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:06:33 compute-0 nova_compute[259550]: 2025-10-07 15:06:33.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:34 compute-0 ceph-mon[74295]: pgmap v3147: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:35 compute-0 nova_compute[259550]: 2025-10-07 15:06:35.281 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:35 compute-0 nova_compute[259550]: 2025-10-07 15:06:35.282 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:06:35 compute-0 nova_compute[259550]: 2025-10-07 15:06:35.282 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:06:35 compute-0 nova_compute[259550]: 2025-10-07 15:06:35.305 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:06:36 compute-0 ceph-mon[74295]: pgmap v3148: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:38 compute-0 nova_compute[259550]: 2025-10-07 15:06:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:38 compute-0 ceph-mon[74295]: pgmap v3149: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:38 compute-0 nova_compute[259550]: 2025-10-07 15:06:38.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:39 compute-0 ceph-mon[74295]: pgmap v3150: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:39 compute-0 nova_compute[259550]: 2025-10-07 15:06:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:06:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:42 compute-0 ceph-mon[74295]: pgmap v3151: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:43 compute-0 podman[439231]: 2025-10-07 15:06:43.103832871 +0000 UTC m=+0.090152742 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct 07 15:06:43 compute-0 podman[439230]: 2025-10-07 15:06:43.108044652 +0000 UTC m=+0.095827291 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:06:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:43 compute-0 nova_compute[259550]: 2025-10-07 15:06:43.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:43 compute-0 nova_compute[259550]: 2025-10-07 15:06:43.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:44 compute-0 ceph-mon[74295]: pgmap v3152: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:45 compute-0 sudo[439272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:45 compute-0 sudo[439272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:45 compute-0 sudo[439272]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:45 compute-0 sudo[439297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:06:45 compute-0 sudo[439297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:45 compute-0 sudo[439297]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:45 compute-0 sudo[439322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:45 compute-0 sudo[439322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:45 compute-0 sudo[439322]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:45 compute-0 sudo[439347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 15:06:45 compute-0 sudo[439347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:46 compute-0 podman[439445]: 2025-10-07 15:06:46.226442933 +0000 UTC m=+0.069127809 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:06:46 compute-0 podman[439445]: 2025-10-07 15:06:46.322585262 +0000 UTC m=+0.165270118 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:06:46 compute-0 ceph-mon[74295]: pgmap v3153: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.337133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606337207, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1687, "num_deletes": 255, "total_data_size": 2725226, "memory_usage": 2765216, "flush_reason": "Manual Compaction"}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606356180, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2654879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64121, "largest_seqno": 65807, "table_properties": {"data_size": 2646982, "index_size": 4775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16260, "raw_average_key_size": 20, "raw_value_size": 2631154, "raw_average_value_size": 3280, "num_data_blocks": 213, "num_entries": 802, "num_filter_entries": 802, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849432, "oldest_key_time": 1759849432, "file_creation_time": 1759849606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 19104 microseconds, and 5741 cpu microseconds.
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.356247) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2654879 bytes OK
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.356266) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357798) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357812) EVENT_LOG_v1 {"time_micros": 1759849606357808, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2717926, prev total WAL file size 2717926, number of live WAL files 2.
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.358552) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2592KB)], [152(8586KB)]
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606358580, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11447884, "oldest_snapshot_seqno": -1}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8428 keys, 9734643 bytes, temperature: kUnknown
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606425760, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9734643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9681578, "index_size": 30872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 220318, "raw_average_key_size": 26, "raw_value_size": 9534524, "raw_average_value_size": 1131, "num_data_blocks": 1195, "num_entries": 8428, "num_filter_entries": 8428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.426060) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9734643 bytes
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.427371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8952, records dropped: 524 output_compression: NoCompression
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.427386) EVENT_LOG_v1 {"time_micros": 1759849606427379, "job": 94, "event": "compaction_finished", "compaction_time_micros": 67262, "compaction_time_cpu_micros": 23106, "output_level": 6, "num_output_files": 1, "total_output_size": 9734643, "num_input_records": 8952, "num_output_records": 8428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606427827, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606429039, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.358464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:46 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:06:47 compute-0 sudo[439347]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:47 compute-0 sudo[439603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:47 compute-0 sudo[439603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:47 compute-0 sudo[439603]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:47 compute-0 sudo[439628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:06:47 compute-0 sudo[439628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:47 compute-0 sudo[439628]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:47 compute-0 sudo[439653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:47 compute-0 sudo[439653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:47 compute-0 sudo[439653]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:47 compute-0 sudo[439678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:06:47 compute-0 sudo[439678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:47 compute-0 sudo[439678]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5d8439fd-e66a-4346-8771-ffbdb6ed0ee4 does not exist
Oct 07 15:06:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 479c0b00-47cd-4d0a-b482-c197a85d20dc does not exist
Oct 07 15:06:47 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e7cf4b3-b131-42ac-bca8-cbca57cfba35 does not exist
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:06:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:06:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:06:47 compute-0 sudo[439734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:47 compute-0 sudo[439734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:47 compute-0 sudo[439734]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:48 compute-0 ceph-mon[74295]: pgmap v3154: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:06:48 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:06:48 compute-0 sudo[439759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:06:48 compute-0 sudo[439759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:48 compute-0 sudo[439759]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:48 compute-0 sudo[439784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:48 compute-0 sudo[439784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:48 compute-0 sudo[439784]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:48 compute-0 sudo[439809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:06:48 compute-0 sudo[439809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:48 compute-0 nova_compute[259550]: 2025-10-07 15:06:48.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.571843863 +0000 UTC m=+0.036690016 container create 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 15:06:48 compute-0 systemd[1]: Started libpod-conmon-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope.
Oct 07 15:06:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.630888757 +0000 UTC m=+0.095734930 container init 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.637306305 +0000 UTC m=+0.102152448 container start 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.640655053 +0000 UTC m=+0.105501216 container attach 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:06:48 compute-0 jolly_bell[439890]: 167 167
Oct 07 15:06:48 compute-0 systemd[1]: libpod-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope: Deactivated successfully.
Oct 07 15:06:48 compute-0 conmon[439890]: conmon 66199bc5a8ad67828d5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope/container/memory.events
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.644642368 +0000 UTC m=+0.109488521 container died 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.557862345 +0000 UTC m=+0.022708508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-92bae8950ca45e8ce285dfc85f83bc88a042971fed35475200f4673044ab7407-merged.mount: Deactivated successfully.
Oct 07 15:06:48 compute-0 podman[439874]: 2025-10-07 15:06:48.687256379 +0000 UTC m=+0.152102542 container remove 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:06:48 compute-0 systemd[1]: libpod-conmon-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope: Deactivated successfully.
Oct 07 15:06:48 compute-0 podman[439915]: 2025-10-07 15:06:48.879426943 +0000 UTC m=+0.059935607 container create 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:06:48 compute-0 nova_compute[259550]: 2025-10-07 15:06:48.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:48 compute-0 systemd[1]: Started libpod-conmon-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope.
Oct 07 15:06:48 compute-0 podman[439915]: 2025-10-07 15:06:48.849222919 +0000 UTC m=+0.029731633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:48 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:48 compute-0 podman[439915]: 2025-10-07 15:06:48.995220949 +0000 UTC m=+0.175729613 container init 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 07 15:06:49 compute-0 podman[439915]: 2025-10-07 15:06:49.004605666 +0000 UTC m=+0.185114300 container start 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 15:06:49 compute-0 podman[439915]: 2025-10-07 15:06:49.007966165 +0000 UTC m=+0.188474839 container attach 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:06:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:50 compute-0 nostalgic_moore[439931]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:06:50 compute-0 nostalgic_moore[439931]: --> relative data size: 1.0
Oct 07 15:06:50 compute-0 nostalgic_moore[439931]: --> All data devices are unavailable
Oct 07 15:06:50 compute-0 systemd[1]: libpod-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope: Deactivated successfully.
Oct 07 15:06:50 compute-0 podman[439915]: 2025-10-07 15:06:50.051486451 +0000 UTC m=+1.231995075 container died 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2-merged.mount: Deactivated successfully.
Oct 07 15:06:50 compute-0 podman[439915]: 2025-10-07 15:06:50.115821524 +0000 UTC m=+1.296330168 container remove 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 15:06:50 compute-0 systemd[1]: libpod-conmon-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope: Deactivated successfully.
Oct 07 15:06:50 compute-0 sudo[439809]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:50 compute-0 ceph-mon[74295]: pgmap v3155: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:50 compute-0 sudo[439972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:50 compute-0 sudo[439972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:50 compute-0 sudo[439972]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:50 compute-0 sudo[439997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:06:50 compute-0 sudo[439997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:50 compute-0 sudo[439997]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:50 compute-0 sudo[440022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:50 compute-0 sudo[440022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:50 compute-0 sudo[440022]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:50 compute-0 sudo[440047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:06:50 compute-0 sudo[440047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.80113774 +0000 UTC m=+0.083140449 container create 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.757667615 +0000 UTC m=+0.039670404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:50 compute-0 systemd[1]: Started libpod-conmon-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope.
Oct 07 15:06:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.896247301 +0000 UTC m=+0.178250020 container init 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.906769378 +0000 UTC m=+0.188772127 container start 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:06:50 compute-0 musing_swanson[440130]: 167 167
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.910600629 +0000 UTC m=+0.192603328 container attach 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:06:50 compute-0 systemd[1]: libpod-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope: Deactivated successfully.
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.911319687 +0000 UTC m=+0.193322386 container died 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-38afbf8c424e37f1d5e6ad3ab313d36b133aaddd657bd22f58499952d49bb8f4-merged.mount: Deactivated successfully.
Oct 07 15:06:50 compute-0 podman[440114]: 2025-10-07 15:06:50.95551571 +0000 UTC m=+0.237518399 container remove 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:06:50 compute-0 systemd[1]: libpod-conmon-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope: Deactivated successfully.
Oct 07 15:06:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:51 compute-0 podman[440152]: 2025-10-07 15:06:51.163812909 +0000 UTC m=+0.071173973 container create 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:06:51 compute-0 systemd[1]: Started libpod-conmon-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope.
Oct 07 15:06:51 compute-0 podman[440152]: 2025-10-07 15:06:51.134884318 +0000 UTC m=+0.042245482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:51 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:51 compute-0 podman[440152]: 2025-10-07 15:06:51.289182336 +0000 UTC m=+0.196543500 container init 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:06:51 compute-0 podman[440152]: 2025-10-07 15:06:51.309161671 +0000 UTC m=+0.216522755 container start 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 15:06:51 compute-0 podman[440152]: 2025-10-07 15:06:51.313123776 +0000 UTC m=+0.220484960 container attach 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]: {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     "0": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "devices": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "/dev/loop3"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             ],
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_name": "ceph_lv0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_size": "21470642176",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "name": "ceph_lv0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "tags": {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_name": "ceph",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.crush_device_class": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.encrypted": "0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_id": "0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.vdo": "0"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             },
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "vg_name": "ceph_vg0"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         }
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     ],
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     "1": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "devices": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "/dev/loop4"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             ],
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_name": "ceph_lv1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_size": "21470642176",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "name": "ceph_lv1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "tags": {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_name": "ceph",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.crush_device_class": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.encrypted": "0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_id": "1",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.vdo": "0"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             },
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "vg_name": "ceph_vg1"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         }
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     ],
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     "2": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "devices": [
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "/dev/loop5"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             ],
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_name": "ceph_lv2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_size": "21470642176",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "name": "ceph_lv2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "tags": {
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.cluster_name": "ceph",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.crush_device_class": "",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.encrypted": "0",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osd_id": "2",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:                 "ceph.vdo": "0"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             },
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "type": "block",
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:             "vg_name": "ceph_vg2"
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:         }
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]:     ]
Oct 07 15:06:52 compute-0 gracious_rosalind[440169]: }
Oct 07 15:06:52 compute-0 systemd[1]: libpod-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope: Deactivated successfully.
Oct 07 15:06:52 compute-0 podman[440152]: 2025-10-07 15:06:52.123971473 +0000 UTC m=+1.031332587 container died 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 07 15:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1-merged.mount: Deactivated successfully.
Oct 07 15:06:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:52 compute-0 ceph-mon[74295]: pgmap v3156: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:52 compute-0 podman[440152]: 2025-10-07 15:06:52.242285005 +0000 UTC m=+1.149646079 container remove 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 15:06:52 compute-0 systemd[1]: libpod-conmon-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope: Deactivated successfully.
Oct 07 15:06:52 compute-0 sudo[440047]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:52 compute-0 sudo[440192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:52 compute-0 sudo[440192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:52 compute-0 sudo[440192]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:52 compute-0 sudo[440217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:06:52 compute-0 sudo[440217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:52 compute-0 sudo[440217]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:52 compute-0 sudo[440242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:52 compute-0 sudo[440242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:52 compute-0 sudo[440242]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:52 compute-0 sudo[440267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:06:52 compute-0 sudo[440267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:06:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:06:52 compute-0 podman[440333]: 2025-10-07 15:06:52.89714788 +0000 UTC m=+0.049962675 container create aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:06:52 compute-0 systemd[1]: Started libpod-conmon-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope.
Oct 07 15:06:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:52 compute-0 podman[440333]: 2025-10-07 15:06:52.877200665 +0000 UTC m=+0.030015560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:52 compute-0 podman[440333]: 2025-10-07 15:06:52.988443471 +0000 UTC m=+0.141258276 container init aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 15:06:53 compute-0 podman[440333]: 2025-10-07 15:06:53.002063759 +0000 UTC m=+0.154878554 container start aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:06:53 compute-0 podman[440333]: 2025-10-07 15:06:53.006513277 +0000 UTC m=+0.159328082 container attach aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 15:06:53 compute-0 jolly_fermat[440349]: 167 167
Oct 07 15:06:53 compute-0 systemd[1]: libpod-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope: Deactivated successfully.
Oct 07 15:06:53 compute-0 podman[440333]: 2025-10-07 15:06:53.011091897 +0000 UTC m=+0.163906732 container died aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 15:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5fd7e72bb59963314ae7fcb74cc703511a2bbaa8eea063e8cf59e11df049e3-merged.mount: Deactivated successfully.
Oct 07 15:06:53 compute-0 podman[440333]: 2025-10-07 15:06:53.058218957 +0000 UTC m=+0.211033752 container remove aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:06:53 compute-0 systemd[1]: libpod-conmon-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope: Deactivated successfully.
Oct 07 15:06:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:53 compute-0 podman[440374]: 2025-10-07 15:06:53.265079187 +0000 UTC m=+0.045113777 container create 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:06:53 compute-0 systemd[1]: Started libpod-conmon-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope.
Oct 07 15:06:53 compute-0 podman[440374]: 2025-10-07 15:06:53.245988455 +0000 UTC m=+0.026023035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:06:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:06:53 compute-0 podman[440374]: 2025-10-07 15:06:53.368145078 +0000 UTC m=+0.148179668 container init 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:06:53 compute-0 podman[440374]: 2025-10-07 15:06:53.375951163 +0000 UTC m=+0.155985733 container start 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:06:53 compute-0 podman[440374]: 2025-10-07 15:06:53.379406064 +0000 UTC m=+0.159440624 container attach 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:06:53 compute-0 nova_compute[259550]: 2025-10-07 15:06:53.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:53 compute-0 nova_compute[259550]: 2025-10-07 15:06:53.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:54 compute-0 ceph-mon[74295]: pgmap v3157: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:54 compute-0 eager_rhodes[440390]: {
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_id": 2,
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "type": "bluestore"
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     },
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_id": 1,
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "type": "bluestore"
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     },
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_id": 0,
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:         "type": "bluestore"
Oct 07 15:06:54 compute-0 eager_rhodes[440390]:     }
Oct 07 15:06:54 compute-0 eager_rhodes[440390]: }
Oct 07 15:06:54 compute-0 systemd[1]: libpod-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Deactivated successfully.
Oct 07 15:06:54 compute-0 systemd[1]: libpod-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Consumed 1.235s CPU time.
Oct 07 15:06:54 compute-0 podman[440374]: 2025-10-07 15:06:54.603571613 +0000 UTC m=+1.383606213 container died 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 15:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6-merged.mount: Deactivated successfully.
Oct 07 15:06:54 compute-0 podman[440374]: 2025-10-07 15:06:54.671988942 +0000 UTC m=+1.452023492 container remove 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:06:54 compute-0 systemd[1]: libpod-conmon-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Deactivated successfully.
Oct 07 15:06:54 compute-0 sudo[440267]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:06:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:06:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:54 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ec4ae349-8bd1-4886-9ac8-924a49c1e991 does not exist
Oct 07 15:06:54 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e122a2a6-a0c6-46ae-abaf-4b057918bf6e does not exist
Oct 07 15:06:54 compute-0 sudo[440438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:06:54 compute-0 sudo[440438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:54 compute-0 sudo[440438]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:54 compute-0 sudo[440463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:06:54 compute-0 sudo[440463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:06:54 compute-0 sudo[440463]: pam_unix(sudo:session): session closed for user root
Oct 07 15:06:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:06:56 compute-0 ceph-mon[74295]: pgmap v3158: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:06:58 compute-0 nova_compute[259550]: 2025-10-07 15:06:58.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:58 compute-0 ceph-mon[74295]: pgmap v3159: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:58 compute-0 nova_compute[259550]: 2025-10-07 15:06:58.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:06:59 compute-0 podman[440488]: 2025-10-07 15:06:59.099110638 +0000 UTC m=+0.074939562 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:06:59 compute-0 podman[440489]: 2025-10-07 15:06:59.145788545 +0000 UTC m=+0.125699286 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:06:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:06:59 compute-0 ceph-mon[74295]: pgmap v3160: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:07:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:07:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:02 compute-0 ceph-mon[74295]: pgmap v3161: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:03 compute-0 nova_compute[259550]: 2025-10-07 15:07:03.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:03 compute-0 nova_compute[259550]: 2025-10-07 15:07:03.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:04 compute-0 ceph-mon[74295]: pgmap v3162: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:06 compute-0 ceph-mon[74295]: pgmap v3163: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:08 compute-0 nova_compute[259550]: 2025-10-07 15:07:08.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:08 compute-0 ceph-mon[74295]: pgmap v3164: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:08 compute-0 nova_compute[259550]: 2025-10-07 15:07:08.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:10 compute-0 ceph-mon[74295]: pgmap v3165: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:11 compute-0 nova_compute[259550]: 2025-10-07 15:07:11.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:11 compute-0 nova_compute[259550]: 2025-10-07 15:07:11.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 15:07:11 compute-0 nova_compute[259550]: 2025-10-07 15:07:11.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 15:07:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:12 compute-0 ceph-mon[74295]: pgmap v3166: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:12 compute-0 nova_compute[259550]: 2025-10-07 15:07:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:13 compute-0 nova_compute[259550]: 2025-10-07 15:07:13.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:13 compute-0 nova_compute[259550]: 2025-10-07 15:07:13.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:14 compute-0 podman[440534]: 2025-10-07 15:07:14.085030242 +0000 UTC m=+0.071665986 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 15:07:14 compute-0 podman[440535]: 2025-10-07 15:07:14.104987987 +0000 UTC m=+0.072856168 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:07:14 compute-0 ceph-mon[74295]: pgmap v3167: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:14 compute-0 nova_compute[259550]: 2025-10-07 15:07:14.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:16 compute-0 ceph-mon[74295]: pgmap v3168: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:18 compute-0 nova_compute[259550]: 2025-10-07 15:07:18.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:18 compute-0 ceph-mon[74295]: pgmap v3169: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:18 compute-0 nova_compute[259550]: 2025-10-07 15:07:18.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:18 compute-0 nova_compute[259550]: 2025-10-07 15:07:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:20 compute-0 ceph-mon[74295]: pgmap v3170: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:20 compute-0 nova_compute[259550]: 2025-10-07 15:07:20.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:20 compute-0 nova_compute[259550]: 2025-10-07 15:07:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:21 compute-0 nova_compute[259550]: 2025-10-07 15:07:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:21 compute-0 nova_compute[259550]: 2025-10-07 15:07:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:07:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:22 compute-0 ceph-mon[74295]: pgmap v3171: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:07:22
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log']
Oct 07 15:07:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:07:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:07:23 compute-0 nova_compute[259550]: 2025-10-07 15:07:23.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:23 compute-0 nova_compute[259550]: 2025-10-07 15:07:23.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:24 compute-0 ceph-mon[74295]: pgmap v3172: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:24 compute-0 nova_compute[259550]: 2025-10-07 15:07:24.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:24 compute-0 nova_compute[259550]: 2025-10-07 15:07:24.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:24 compute-0 nova_compute[259550]: 2025-10-07 15:07:24.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 15:07:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:26 compute-0 ceph-mon[74295]: pgmap v3173: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:28 compute-0 nova_compute[259550]: 2025-10-07 15:07:28.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:28 compute-0 ceph-mon[74295]: pgmap v3174: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:28 compute-0 nova_compute[259550]: 2025-10-07 15:07:28.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:30 compute-0 podman[440574]: 2025-10-07 15:07:30.083174914 +0000 UTC m=+0.077316154 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:07:30 compute-0 podman[440575]: 2025-10-07 15:07:30.104812324 +0000 UTC m=+0.097862725 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 07 15:07:30 compute-0 ceph-mon[74295]: pgmap v3175: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:32 compute-0 ceph-mon[74295]: pgmap v3176: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:07:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:07:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:07:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:07:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:07:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:33 compute-0 nova_compute[259550]: 2025-10-07 15:07:33.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:07:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:07:33 compute-0 nova_compute[259550]: 2025-10-07 15:07:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.017 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.079 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.079 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:07:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:07:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941561486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.702 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.703 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.704 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.704 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:07:34 compute-0 ceph-mon[74295]: pgmap v3177: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/941561486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.767 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.767 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:07:34 compute-0 nova_compute[259550]: 2025-10-07 15:07:34.806 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:07:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:07:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464643896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:07:35 compute-0 nova_compute[259550]: 2025-10-07 15:07:35.276 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:07:35 compute-0 nova_compute[259550]: 2025-10-07 15:07:35.285 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:07:35 compute-0 nova_compute[259550]: 2025-10-07 15:07:35.313 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:07:35 compute-0 nova_compute[259550]: 2025-10-07 15:07:35.316 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:07:35 compute-0 nova_compute[259550]: 2025-10-07 15:07:35.316 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:07:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/464643896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:07:36 compute-0 ceph-mon[74295]: pgmap v3178: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.186857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657187186, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 258, "total_data_size": 814183, "memory_usage": 828024, "flush_reason": "Manual Compaction"}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657200492, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 795974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65808, "largest_seqno": 66489, "table_properties": {"data_size": 792393, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8034, "raw_average_key_size": 18, "raw_value_size": 785147, "raw_average_value_size": 1838, "num_data_blocks": 63, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849607, "oldest_key_time": 1759849607, "file_creation_time": 1759849657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 13713 microseconds, and 4373 cpu microseconds.
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.200562) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 795974 bytes OK
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.200597) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202613) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202634) EVENT_LOG_v1 {"time_micros": 1759849657202626, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 810567, prev total WAL file size 810567, number of live WAL files 2.
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.203340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(777KB)], [155(9506KB)]
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657203386, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10530617, "oldest_snapshot_seqno": -1}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8327 keys, 10414740 bytes, temperature: kUnknown
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657274058, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10414740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361033, "index_size": 31770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 219140, "raw_average_key_size": 26, "raw_value_size": 10214364, "raw_average_value_size": 1226, "num_data_blocks": 1232, "num_entries": 8327, "num_filter_entries": 8327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.276156) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10414740 bytes
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.278368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.7 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(26.3) write-amplify(13.1) OK, records in: 8855, records dropped: 528 output_compression: NoCompression
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.278399) EVENT_LOG_v1 {"time_micros": 1759849657278388, "job": 96, "event": "compaction_finished", "compaction_time_micros": 72253, "compaction_time_cpu_micros": 34834, "output_level": 6, "num_output_files": 1, "total_output_size": 10414740, "num_input_records": 8855, "num_output_records": 8327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657279217, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657281388, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.203244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:07:37 compute-0 nova_compute[259550]: 2025-10-07 15:07:37.283 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:37 compute-0 nova_compute[259550]: 2025-10-07 15:07:37.283 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:07:37 compute-0 nova_compute[259550]: 2025-10-07 15:07:37.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:07:37 compute-0 nova_compute[259550]: 2025-10-07 15:07:37.303 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:07:38 compute-0 nova_compute[259550]: 2025-10-07 15:07:38.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 07 15:07:38 compute-0 ceph-mon[74295]: pgmap v3179: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 07 15:07:38 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 07 15:07:39 compute-0 nova_compute[259550]: 2025-10-07 15:07:39.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Oct 07 15:07:39 compute-0 ceph-mon[74295]: osdmap e303: 3 total, 3 up, 3 in
Oct 07 15:07:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 07 15:07:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 07 15:07:40 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 07 15:07:40 compute-0 ceph-mon[74295]: pgmap v3181: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Oct 07 15:07:40 compute-0 nova_compute[259550]: 2025-10-07 15:07:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:07:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct 07 15:07:41 compute-0 ceph-mon[74295]: osdmap e304: 3 total, 3 up, 3 in
Oct 07 15:07:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:42 compute-0 ceph-mon[74295]: pgmap v3183: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct 07 15:07:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct 07 15:07:43 compute-0 nova_compute[259550]: 2025-10-07 15:07:43.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:43 compute-0 ceph-mon[74295]: pgmap v3184: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct 07 15:07:44 compute-0 nova_compute[259550]: 2025-10-07 15:07:44.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:45 compute-0 podman[440665]: 2025-10-07 15:07:45.070721775 +0000 UTC m=+0.060777630 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 15:07:45 compute-0 podman[440666]: 2025-10-07 15:07:45.075628333 +0000 UTC m=+0.059979788 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 07 15:07:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Oct 07 15:07:46 compute-0 ceph-mon[74295]: pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Oct 07 15:07:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 4.9 MiB/s wr, 39 op/s
Oct 07 15:07:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:48 compute-0 ceph-mon[74295]: pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 4.9 MiB/s wr, 39 op/s
Oct 07 15:07:48 compute-0 nova_compute[259550]: 2025-10-07 15:07:48.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:49 compute-0 nova_compute[259550]: 2025-10-07 15:07:49.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Oct 07 15:07:50 compute-0 ceph-mon[74295]: pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Oct 07 15:07:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct 07 15:07:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:52 compute-0 ceph-mon[74295]: pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:07:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:07:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 07 15:07:53 compute-0 nova_compute[259550]: 2025-10-07 15:07:53.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:54 compute-0 nova_compute[259550]: 2025-10-07 15:07:54.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:54 compute-0 ceph-mon[74295]: pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 07 15:07:55 compute-0 sudo[440704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:55 compute-0 sudo[440704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440704]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:55 compute-0 sudo[440729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:07:55 compute-0 sudo[440729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440729]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 07 15:07:55 compute-0 sudo[440754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:55 compute-0 sudo[440754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440754]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:55 compute-0 sudo[440779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:07:55 compute-0 sudo[440779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440779]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:07:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0eaf2512-4bef-4194-8df1-7f3ae61dcfb2 does not exist
Oct 07 15:07:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3a9917bf-047e-4804-8616-0a9262cf8f86 does not exist
Oct 07 15:07:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 52b49856-9431-4dd1-b9e9-f23aa6eb28f2 does not exist
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:07:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:07:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:07:55 compute-0 sudo[440835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:55 compute-0 sudo[440835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440835]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:55 compute-0 sudo[440860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:07:55 compute-0 sudo[440860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:55 compute-0 sudo[440860]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:56 compute-0 sudo[440885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:56 compute-0 sudo[440885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:56 compute-0 sudo[440885]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:56 compute-0 sudo[440910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:07:56 compute-0 sudo[440910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:56 compute-0 ceph-mon[74295]: pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:07:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.54126996 +0000 UTC m=+0.056243230 container create 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:07:56 compute-0 systemd[1]: Started libpod-conmon-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope.
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.516376395 +0000 UTC m=+0.031349665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:07:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.652612278 +0000 UTC m=+0.167585538 container init 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.664967353 +0000 UTC m=+0.179940593 container start 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.668685431 +0000 UTC m=+0.183658691 container attach 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 07 15:07:56 compute-0 jovial_rubin[440992]: 167 167
Oct 07 15:07:56 compute-0 systemd[1]: libpod-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope: Deactivated successfully.
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.674081413 +0000 UTC m=+0.189054683 container died 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b603a179f5bc4bb342d7c20c74f5a3a3d3941d6cbb4b2b875c7fb372037a6af-merged.mount: Deactivated successfully.
Oct 07 15:07:56 compute-0 podman[440975]: 2025-10-07 15:07:56.73820267 +0000 UTC m=+0.253175910 container remove 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 15:07:56 compute-0 systemd[1]: libpod-conmon-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope: Deactivated successfully.
Oct 07 15:07:56 compute-0 podman[441014]: 2025-10-07 15:07:56.945706408 +0000 UTC m=+0.051296511 container create 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:07:56 compute-0 systemd[1]: Started libpod-conmon-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope.
Oct 07 15:07:57 compute-0 podman[441014]: 2025-10-07 15:07:56.925491906 +0000 UTC m=+0.031081999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:07:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:57 compute-0 podman[441014]: 2025-10-07 15:07:57.048822689 +0000 UTC m=+0.154412782 container init 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:07:57 compute-0 podman[441014]: 2025-10-07 15:07:57.057069926 +0000 UTC m=+0.162659999 container start 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:07:57 compute-0 podman[441014]: 2025-10-07 15:07:57.064177023 +0000 UTC m=+0.169767116 container attach 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 07 15:07:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Oct 07 15:07:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:07:58 compute-0 brave_ishizaka[441031]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:07:58 compute-0 brave_ishizaka[441031]: --> relative data size: 1.0
Oct 07 15:07:58 compute-0 brave_ishizaka[441031]: --> All data devices are unavailable
Oct 07 15:07:58 compute-0 systemd[1]: libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Deactivated successfully.
Oct 07 15:07:58 compute-0 systemd[1]: libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Consumed 1.017s CPU time.
Oct 07 15:07:58 compute-0 conmon[441031]: conmon 73b024d4b058c52ab577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope/container/memory.events
Oct 07 15:07:58 compute-0 podman[441014]: 2025-10-07 15:07:58.121815752 +0000 UTC m=+1.227405845 container died 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 07 15:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f-merged.mount: Deactivated successfully.
Oct 07 15:07:58 compute-0 podman[441014]: 2025-10-07 15:07:58.225418668 +0000 UTC m=+1.331008731 container remove 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:07:58 compute-0 systemd[1]: libpod-conmon-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Deactivated successfully.
Oct 07 15:07:58 compute-0 sudo[440910]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:58 compute-0 ceph-mon[74295]: pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Oct 07 15:07:58 compute-0 sudo[441072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:58 compute-0 sudo[441072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:58 compute-0 sudo[441072]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:58 compute-0 sudo[441097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:07:58 compute-0 sudo[441097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:58 compute-0 sudo[441097]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:58 compute-0 nova_compute[259550]: 2025-10-07 15:07:58.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:58 compute-0 sudo[441122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:07:58 compute-0 sudo[441122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:58 compute-0 sudo[441122]: pam_unix(sudo:session): session closed for user root
Oct 07 15:07:58 compute-0 sudo[441147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:07:58 compute-0 sudo[441147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:07:58 compute-0 podman[441213]: 2025-10-07 15:07:58.946170825 +0000 UTC m=+0.048052345 container create 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 07 15:07:58 compute-0 systemd[1]: Started libpod-conmon-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope.
Oct 07 15:07:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:58.926100637 +0000 UTC m=+0.027982067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:59.027993877 +0000 UTC m=+0.129875297 container init 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:59.034688083 +0000 UTC m=+0.136569503 container start 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:59.038513454 +0000 UTC m=+0.140394894 container attach 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:07:59 compute-0 wizardly_heyrovsky[441230]: 167 167
Oct 07 15:07:59 compute-0 systemd[1]: libpod-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope: Deactivated successfully.
Oct 07 15:07:59 compute-0 conmon[441230]: conmon 52e6da90c0cb65cb7ae9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope/container/memory.events
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:59.043989277 +0000 UTC m=+0.145870687 container died 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-53f89e85f8aad3364de0da29d9917f699bce77577f3fb6c12554790decdcee5b-merged.mount: Deactivated successfully.
Oct 07 15:07:59 compute-0 podman[441213]: 2025-10-07 15:07:59.078995108 +0000 UTC m=+0.180876518 container remove 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:07:59 compute-0 systemd[1]: libpod-conmon-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope: Deactivated successfully.
Oct 07 15:07:59 compute-0 nova_compute[259550]: 2025-10-07 15:07:59.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:07:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:07:59 compute-0 podman[441254]: 2025-10-07 15:07:59.248422305 +0000 UTC m=+0.045598720 container create 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:07:59 compute-0 systemd[1]: Started libpod-conmon-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope.
Oct 07 15:07:59 compute-0 podman[441254]: 2025-10-07 15:07:59.228799078 +0000 UTC m=+0.025975503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:07:59 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:07:59 compute-0 podman[441254]: 2025-10-07 15:07:59.350481009 +0000 UTC m=+0.147657434 container init 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:07:59 compute-0 podman[441254]: 2025-10-07 15:07:59.359061655 +0000 UTC m=+0.156238070 container start 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 07 15:07:59 compute-0 podman[441254]: 2025-10-07 15:07:59.363492122 +0000 UTC m=+0.160668537 container attach 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.109 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:08:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]: {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     "0": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "devices": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "/dev/loop3"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             ],
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_name": "ceph_lv0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_size": "21470642176",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "name": "ceph_lv0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "tags": {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_name": "ceph",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.crush_device_class": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.encrypted": "0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_id": "0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.vdo": "0"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             },
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "vg_name": "ceph_vg0"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         }
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     ],
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     "1": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "devices": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "/dev/loop4"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             ],
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_name": "ceph_lv1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_size": "21470642176",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "name": "ceph_lv1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "tags": {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_name": "ceph",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.crush_device_class": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.encrypted": "0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_id": "1",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.vdo": "0"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             },
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "vg_name": "ceph_vg1"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         }
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     ],
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     "2": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "devices": [
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "/dev/loop5"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             ],
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_name": "ceph_lv2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_size": "21470642176",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "name": "ceph_lv2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "tags": {
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.cluster_name": "ceph",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.crush_device_class": "",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.encrypted": "0",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osd_id": "2",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:                 "ceph.vdo": "0"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             },
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "type": "block",
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:             "vg_name": "ceph_vg2"
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:         }
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]:     ]
Oct 07 15:08:00 compute-0 relaxed_haibt[441271]: }
Oct 07 15:08:00 compute-0 systemd[1]: libpod-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope: Deactivated successfully.
Oct 07 15:08:00 compute-0 podman[441254]: 2025-10-07 15:08:00.201406941 +0000 UTC m=+0.998583356 container died 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39-merged.mount: Deactivated successfully.
Oct 07 15:08:00 compute-0 podman[441254]: 2025-10-07 15:08:00.264269584 +0000 UTC m=+1.061445999 container remove 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:08:00 compute-0 systemd[1]: libpod-conmon-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope: Deactivated successfully.
Oct 07 15:08:00 compute-0 ceph-mon[74295]: pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:00 compute-0 sudo[441147]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:00 compute-0 podman[441280]: 2025-10-07 15:08:00.322794994 +0000 UTC m=+0.083514578 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 15:08:00 compute-0 podman[441288]: 2025-10-07 15:08:00.360772123 +0000 UTC m=+0.115809478 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 15:08:00 compute-0 sudo[441327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:08:00 compute-0 sudo[441327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:00 compute-0 sudo[441327]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:00 compute-0 sudo[441359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:08:00 compute-0 sudo[441359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:00 compute-0 sudo[441359]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:00 compute-0 sudo[441384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:08:00 compute-0 sudo[441384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:00 compute-0 sudo[441384]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:00 compute-0 sudo[441409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:08:00 compute-0 sudo[441409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.048711367 +0000 UTC m=+0.049725439 container create a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 15:08:01 compute-0 systemd[1]: Started libpod-conmon-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope.
Oct 07 15:08:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.021620985 +0000 UTC m=+0.022635087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.131225757 +0000 UTC m=+0.132239849 container init a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.141841187 +0000 UTC m=+0.142855259 container start a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.146423147 +0000 UTC m=+0.147437249 container attach a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 15:08:01 compute-0 systemd[1]: libpod-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope: Deactivated successfully.
Oct 07 15:08:01 compute-0 xenodochial_bhaskara[441492]: 167 167
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.151776348 +0000 UTC m=+0.152790430 container died a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:08:01 compute-0 conmon[441492]: conmon a8fa7905b5c58222d891 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope/container/memory.events
Oct 07 15:08:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc6fa78a8eb060bb2cc1625789a631abbdeed43c33431f64c519a268d90dc8d-merged.mount: Deactivated successfully.
Oct 07 15:08:01 compute-0 podman[441476]: 2025-10-07 15:08:01.199913824 +0000 UTC m=+0.200927896 container remove a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 15:08:01 compute-0 systemd[1]: libpod-conmon-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope: Deactivated successfully.
Oct 07 15:08:01 compute-0 nova_compute[259550]: 2025-10-07 15:08:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:01.347 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:08:01 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:01.348 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:08:01 compute-0 podman[441516]: 2025-10-07 15:08:01.427662815 +0000 UTC m=+0.063156273 container create b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:08:01 compute-0 systemd[1]: Started libpod-conmon-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope.
Oct 07 15:08:01 compute-0 podman[441516]: 2025-10-07 15:08:01.402899213 +0000 UTC m=+0.038392751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:08:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:08:01 compute-0 podman[441516]: 2025-10-07 15:08:01.530295334 +0000 UTC m=+0.165788812 container init b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:08:01 compute-0 podman[441516]: 2025-10-07 15:08:01.544427545 +0000 UTC m=+0.179921003 container start b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:08:01 compute-0 podman[441516]: 2025-10-07 15:08:01.548319388 +0000 UTC m=+0.183812956 container attach b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:08:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:02 compute-0 sharp_bohr[441533]: {
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_id": 2,
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "type": "bluestore"
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     },
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_id": 1,
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "type": "bluestore"
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     },
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_id": 0,
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:         "type": "bluestore"
Oct 07 15:08:02 compute-0 sharp_bohr[441533]:     }
Oct 07 15:08:02 compute-0 sharp_bohr[441533]: }
Oct 07 15:08:02 compute-0 ceph-mon[74295]: pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:02 compute-0 systemd[1]: libpod-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope: Deactivated successfully.
Oct 07 15:08:02 compute-0 podman[441516]: 2025-10-07 15:08:02.524807882 +0000 UTC m=+1.160301350 container died b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944-merged.mount: Deactivated successfully.
Oct 07 15:08:02 compute-0 podman[441516]: 2025-10-07 15:08:02.624176305 +0000 UTC m=+1.259669793 container remove b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 07 15:08:02 compute-0 systemd[1]: libpod-conmon-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope: Deactivated successfully.
Oct 07 15:08:02 compute-0 sudo[441409]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:08:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:08:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:08:02 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:08:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3eca058d-050f-44a1-ae4f-f84c32f57365 does not exist
Oct 07 15:08:02 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b54f8a1d-7a3f-4f1b-85b0-5c7c35bf32ef does not exist
Oct 07 15:08:02 compute-0 sudo[441580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:08:02 compute-0 sudo[441580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:02 compute-0 sudo[441580]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:02 compute-0 sudo[441605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:08:02 compute-0 sudo[441605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:08:02 compute-0 sudo[441605]: pam_unix(sudo:session): session closed for user root
Oct 07 15:08:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:03 compute-0 nova_compute[259550]: 2025-10-07 15:08:03.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:08:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:08:04 compute-0 nova_compute[259550]: 2025-10-07 15:08:04.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:04 compute-0 ceph-mon[74295]: pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:06 compute-0 ceph-mon[74295]: pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:07 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:08:07.350 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:08:08 compute-0 nova_compute[259550]: 2025-10-07 15:08:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 07 15:08:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 07 15:08:08 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 07 15:08:08 compute-0 ceph-mon[74295]: pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:09 compute-0 nova_compute[259550]: 2025-10-07 15:08:09.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:08:09 compute-0 ceph-mon[74295]: osdmap e305: 3 total, 3 up, 3 in
Oct 07 15:08:10 compute-0 ceph-mon[74295]: pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:08:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct 07 15:08:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:12 compute-0 ceph-mon[74295]: pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct 07 15:08:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct 07 15:08:13 compute-0 nova_compute[259550]: 2025-10-07 15:08:13.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:14 compute-0 nova_compute[259550]: 2025-10-07 15:08:14.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:14 compute-0 ceph-mon[74295]: pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct 07 15:08:14 compute-0 nova_compute[259550]: 2025-10-07 15:08:14.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 15:08:16 compute-0 ceph-mon[74295]: pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 15:08:16 compute-0 podman[441633]: 2025-10-07 15:08:16.088092999 +0000 UTC m=+0.069489019 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 07 15:08:16 compute-0 podman[441632]: 2025-10-07 15:08:16.08849745 +0000 UTC m=+0.070032053 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Oct 07 15:08:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 15:08:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 07 15:08:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 07 15:08:17 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 07 15:08:18 compute-0 ceph-mon[74295]: pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 07 15:08:18 compute-0 ceph-mon[74295]: osdmap e306: 3 total, 3 up, 3 in
Oct 07 15:08:18 compute-0 nova_compute[259550]: 2025-10-07 15:08:18.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:19 compute-0 nova_compute[259550]: 2025-10-07 15:08:19.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:08:20 compute-0 ceph-mon[74295]: pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:08:20 compute-0 nova_compute[259550]: 2025-10-07 15:08:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:08:21 compute-0 nova_compute[259550]: 2025-10-07 15:08:21.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:21 compute-0 nova_compute[259550]: 2025-10-07 15:08:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:21 compute-0 nova_compute[259550]: 2025-10-07 15:08:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:08:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:22 compute-0 ceph-mon[74295]: pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:08:22
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'default.rgw.control', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 07 15:08:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:08:22 compute-0 nova_compute[259550]: 2025-10-07 15:08:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:08:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:08:23 compute-0 nova_compute[259550]: 2025-10-07 15:08:23.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:24 compute-0 nova_compute[259550]: 2025-10-07 15:08:24.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:24 compute-0 ceph-mon[74295]: pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct 07 15:08:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:25 compute-0 nova_compute[259550]: 2025-10-07 15:08:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:26 compute-0 ceph-mon[74295]: pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:28 compute-0 ceph-mon[74295]: pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:28 compute-0 nova_compute[259550]: 2025-10-07 15:08:28.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:28 compute-0 nova_compute[259550]: 2025-10-07 15:08:28.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:29 compute-0 nova_compute[259550]: 2025-10-07 15:08:29.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:30 compute-0 ceph-mon[74295]: pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:31 compute-0 podman[441671]: 2025-10-07 15:08:31.098401567 +0000 UTC m=+0.081928655 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:08:31 compute-0 podman[441672]: 2025-10-07 15:08:31.13118276 +0000 UTC m=+0.115191012 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 15:08:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:32 compute-0 ceph-mon[74295]: pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:08:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:08:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:08:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:08:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:08:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:08:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:08:33 compute-0 nova_compute[259550]: 2025-10-07 15:08:33.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:34 compute-0 nova_compute[259550]: 2025-10-07 15:08:34.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:34 compute-0 ceph-mon[74295]: pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:35 compute-0 nova_compute[259550]: 2025-10-07 15:08:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.015 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.016 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.017 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:08:36 compute-0 ceph-mon[74295]: pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:08:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124713347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.534 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.714 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.716 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3603MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.717 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.717 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.788 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.789 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:08:36 compute-0 nova_compute[259550]: 2025-10-07 15:08:36.810 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:08:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:08:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847847925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:08:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:37 compute-0 nova_compute[259550]: 2025-10-07 15:08:37.297 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:08:37 compute-0 nova_compute[259550]: 2025-10-07 15:08:37.306 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:08:37 compute-0 nova_compute[259550]: 2025-10-07 15:08:37.329 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:08:37 compute-0 nova_compute[259550]: 2025-10-07 15:08:37.333 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:08:37 compute-0 nova_compute[259550]: 2025-10-07 15:08:37.333 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:08:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4124713347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:08:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1847847925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:08:38 compute-0 ceph-mon[74295]: pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:38 compute-0 nova_compute[259550]: 2025-10-07 15:08:38.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:39 compute-0 nova_compute[259550]: 2025-10-07 15:08:39.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:39 compute-0 nova_compute[259550]: 2025-10-07 15:08:39.334 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:39 compute-0 nova_compute[259550]: 2025-10-07 15:08:39.335 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:08:39 compute-0 nova_compute[259550]: 2025-10-07 15:08:39.335 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:08:39 compute-0 nova_compute[259550]: 2025-10-07 15:08:39.371 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:08:40 compute-0 ceph-mon[74295]: pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:40 compute-0 nova_compute[259550]: 2025-10-07 15:08:40.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:08:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:42 compute-0 ceph-mon[74295]: pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:43 compute-0 nova_compute[259550]: 2025-10-07 15:08:43.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:44 compute-0 nova_compute[259550]: 2025-10-07 15:08:44.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:44 compute-0 ceph-mon[74295]: pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:46 compute-0 ceph-mon[74295]: pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:47 compute-0 podman[441760]: 2025-10-07 15:08:47.083772431 +0000 UTC m=+0.064248281 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:08:47 compute-0 podman[441759]: 2025-10-07 15:08:47.11108543 +0000 UTC m=+0.096893881 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 15:08:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:48 compute-0 nova_compute[259550]: 2025-10-07 15:08:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:48 compute-0 ceph-mon[74295]: pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:49 compute-0 nova_compute[259550]: 2025-10-07 15:08:49.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:50 compute-0 ceph-mon[74295]: pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:08:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:08:52 compute-0 ceph-mon[74295]: pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:53 compute-0 nova_compute[259550]: 2025-10-07 15:08:53.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:54 compute-0 nova_compute[259550]: 2025-10-07 15:08:54.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:54 compute-0 ceph-mon[74295]: pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:56 compute-0 ceph-mon[74295]: pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:08:58 compute-0 nova_compute[259550]: 2025-10-07 15:08:58.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:58 compute-0 ceph-mon[74295]: pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:08:59 compute-0 nova_compute[259550]: 2025-10-07 15:08:59.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:08:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:09:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:09:00 compute-0 ceph-mon[74295]: pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:02 compute-0 podman[441794]: 2025-10-07 15:09:02.0917039 +0000 UTC m=+0.076143013 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 15:09:02 compute-0 podman[441795]: 2025-10-07 15:09:02.117183 +0000 UTC m=+0.102154388 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 07 15:09:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:02 compute-0 ceph-mon[74295]: pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:02 compute-0 sudo[441840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:02 compute-0 sudo[441840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:02 compute-0 sudo[441840]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 sudo[441865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:09:03 compute-0 sudo[441865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 sudo[441865]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 sudo[441890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:03 compute-0 sudo[441890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 sudo[441890]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 sudo[441915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:09:03 compute-0 sudo[441915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:03 compute-0 nova_compute[259550]: 2025-10-07 15:09:03.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:03 compute-0 sudo[441915]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bcff4464-3bfa-459f-8a2b-1cb673ec9cfa does not exist
Oct 07 15:09:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 712199a8-2d06-4c15-8eee-7ec328337708 does not exist
Oct 07 15:09:03 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a7603c14-68d6-4d4a-9389-9c132ccdb1f4 does not exist
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:09:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:09:03 compute-0 sudo[441971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:03 compute-0 sudo[441971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 sudo[441971]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:09:03 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:09:03 compute-0 sudo[441996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:09:03 compute-0 sudo[441996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 sudo[441996]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 sudo[442021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:03 compute-0 sudo[442021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:03 compute-0 sudo[442021]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:03 compute-0 sudo[442046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:09:03 compute-0 sudo[442046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:04 compute-0 nova_compute[259550]: 2025-10-07 15:09:04.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.376981239 +0000 UTC m=+0.050833818 container create ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 15:09:04 compute-0 systemd[1]: Started libpod-conmon-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope.
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.352120855 +0000 UTC m=+0.025973464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.479189777 +0000 UTC m=+0.153042376 container init ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.489266122 +0000 UTC m=+0.163118701 container start ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.493548085 +0000 UTC m=+0.167400704 container attach ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:09:04 compute-0 musing_heisenberg[442128]: 167 167
Oct 07 15:09:04 compute-0 systemd[1]: libpod-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope: Deactivated successfully.
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.499119151 +0000 UTC m=+0.172971720 container died ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-079cfeaeaeff89cd63598c1a8441fafe1892064f940727e5ecbef92b70c66745-merged.mount: Deactivated successfully.
Oct 07 15:09:04 compute-0 podman[442111]: 2025-10-07 15:09:04.553901172 +0000 UTC m=+0.227753741 container remove ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 15:09:04 compute-0 systemd[1]: libpod-conmon-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope: Deactivated successfully.
Oct 07 15:09:04 compute-0 podman[442152]: 2025-10-07 15:09:04.777253377 +0000 UTC m=+0.063236584 container create e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 15:09:04 compute-0 systemd[1]: Started libpod-conmon-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope.
Oct 07 15:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:04 compute-0 podman[442152]: 2025-10-07 15:09:04.757687022 +0000 UTC m=+0.043670239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:04 compute-0 ceph-mon[74295]: pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:04 compute-0 podman[442152]: 2025-10-07 15:09:04.866360331 +0000 UTC m=+0.152343518 container init e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:09:04 compute-0 podman[442152]: 2025-10-07 15:09:04.873408836 +0000 UTC m=+0.159392023 container start e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:09:04 compute-0 podman[442152]: 2025-10-07 15:09:04.877355829 +0000 UTC m=+0.163339016 container attach e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 15:09:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:05 compute-0 ceph-mon[74295]: pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:05 compute-0 elastic_mirzakhani[442169]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:09:05 compute-0 elastic_mirzakhani[442169]: --> relative data size: 1.0
Oct 07 15:09:05 compute-0 elastic_mirzakhani[442169]: --> All data devices are unavailable
Oct 07 15:09:05 compute-0 systemd[1]: libpod-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope: Deactivated successfully.
Oct 07 15:09:05 compute-0 podman[442152]: 2025-10-07 15:09:05.904775613 +0000 UTC m=+1.190758800 container died e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96-merged.mount: Deactivated successfully.
Oct 07 15:09:05 compute-0 podman[442152]: 2025-10-07 15:09:05.9654524 +0000 UTC m=+1.251435587 container remove e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 15:09:05 compute-0 systemd[1]: libpod-conmon-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope: Deactivated successfully.
Oct 07 15:09:05 compute-0 sudo[442046]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:06 compute-0 sudo[442212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:06 compute-0 sudo[442212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:06 compute-0 sudo[442212]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:06 compute-0 sudo[442237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:09:06 compute-0 sudo[442237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:06 compute-0 sudo[442237]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:06 compute-0 sudo[442262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:06 compute-0 sudo[442262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:06 compute-0 sudo[442262]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:06 compute-0 sudo[442287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:09:06 compute-0 sudo[442287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.646198904 +0000 UTC m=+0.046331570 container create 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:09:06 compute-0 systemd[1]: Started libpod-conmon-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope.
Oct 07 15:09:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.627070111 +0000 UTC m=+0.027202787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.737919366 +0000 UTC m=+0.138052052 container init 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.747880848 +0000 UTC m=+0.148013514 container start 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:09:06 compute-0 epic_sutherland[442370]: 167 167
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.752189371 +0000 UTC m=+0.152322037 container attach 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:09:06 compute-0 systemd[1]: libpod-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope: Deactivated successfully.
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.754289747 +0000 UTC m=+0.154422393 container died 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc70d0e61f79c7dde5d0d1fb9e985b5a56eb3a35891552194cd39534638d9fea-merged.mount: Deactivated successfully.
Oct 07 15:09:06 compute-0 podman[442353]: 2025-10-07 15:09:06.805122943 +0000 UTC m=+0.205255609 container remove 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 15:09:06 compute-0 systemd[1]: libpod-conmon-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope: Deactivated successfully.
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.00874054 +0000 UTC m=+0.037623521 container create 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:09:07 compute-0 systemd[1]: Started libpod-conmon-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope.
Oct 07 15:09:07 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:06.99282332 +0000 UTC m=+0.021706321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.103939244 +0000 UTC m=+0.132822255 container init 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.117316225 +0000 UTC m=+0.146199236 container start 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.120992642 +0000 UTC m=+0.149875633 container attach 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 15:09:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:07 compute-0 vigorous_wing[442410]: {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     "0": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "devices": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "/dev/loop3"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             ],
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_name": "ceph_lv0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_size": "21470642176",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "name": "ceph_lv0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "tags": {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_name": "ceph",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.crush_device_class": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.encrypted": "0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_id": "0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.vdo": "0"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             },
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "vg_name": "ceph_vg0"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         }
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     ],
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     "1": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "devices": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "/dev/loop4"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             ],
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_name": "ceph_lv1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_size": "21470642176",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "name": "ceph_lv1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "tags": {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_name": "ceph",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.crush_device_class": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.encrypted": "0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_id": "1",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.vdo": "0"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             },
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "vg_name": "ceph_vg1"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         }
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     ],
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     "2": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "devices": [
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "/dev/loop5"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             ],
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_name": "ceph_lv2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_size": "21470642176",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "name": "ceph_lv2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "tags": {
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.cluster_name": "ceph",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.crush_device_class": "",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.encrypted": "0",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osd_id": "2",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:                 "ceph.vdo": "0"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             },
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "type": "block",
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:             "vg_name": "ceph_vg2"
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:         }
Oct 07 15:09:07 compute-0 vigorous_wing[442410]:     ]
Oct 07 15:09:07 compute-0 vigorous_wing[442410]: }
Oct 07 15:09:07 compute-0 systemd[1]: libpod-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope: Deactivated successfully.
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.925954725 +0000 UTC m=+0.954837716 container died 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100-merged.mount: Deactivated successfully.
Oct 07 15:09:07 compute-0 podman[442393]: 2025-10-07 15:09:07.9976331 +0000 UTC m=+1.026516081 container remove 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:09:08 compute-0 systemd[1]: libpod-conmon-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope: Deactivated successfully.
Oct 07 15:09:08 compute-0 sudo[442287]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:08 compute-0 sudo[442434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:08 compute-0 sudo[442434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:08 compute-0 sudo[442434]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:08 compute-0 sudo[442459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:09:08 compute-0 sudo[442459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:08 compute-0 sudo[442459]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:08 compute-0 sudo[442484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:08 compute-0 sudo[442484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:08 compute-0 sudo[442484]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:08 compute-0 ceph-mon[74295]: pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:08 compute-0 sudo[442509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:09:08 compute-0 sudo[442509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:08 compute-0 nova_compute[259550]: 2025-10-07 15:09:08.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.743942569 +0000 UTC m=+0.041318248 container create 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 15:09:08 compute-0 systemd[1]: Started libpod-conmon-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope.
Oct 07 15:09:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.726705626 +0000 UTC m=+0.024081395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.838717722 +0000 UTC m=+0.136093431 container init 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.846465756 +0000 UTC m=+0.143841445 container start 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.850658576 +0000 UTC m=+0.148034345 container attach 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:09:08 compute-0 vigilant_sanderson[442593]: 167 167
Oct 07 15:09:08 compute-0 systemd[1]: libpod-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope: Deactivated successfully.
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.851881448 +0000 UTC m=+0.149257127 container died 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dce03e32ed244b7f6b3e9a50cb15cef4b5a547771eb9d869b4514b682e6adbd-merged.mount: Deactivated successfully.
Oct 07 15:09:08 compute-0 podman[442577]: 2025-10-07 15:09:08.892892097 +0000 UTC m=+0.190267796 container remove 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:09:08 compute-0 systemd[1]: libpod-conmon-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope: Deactivated successfully.
Oct 07 15:09:09 compute-0 podman[442617]: 2025-10-07 15:09:09.085164535 +0000 UTC m=+0.047654425 container create 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:09:09 compute-0 systemd[1]: Started libpod-conmon-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope.
Oct 07 15:09:09 compute-0 nova_compute[259550]: 2025-10-07 15:09:09.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:09 compute-0 podman[442617]: 2025-10-07 15:09:09.064671465 +0000 UTC m=+0.027161335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:09:09 compute-0 podman[442617]: 2025-10-07 15:09:09.171411102 +0000 UTC m=+0.133900972 container init 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 15:09:09 compute-0 podman[442617]: 2025-10-07 15:09:09.183119701 +0000 UTC m=+0.145609551 container start 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 15:09:09 compute-0 podman[442617]: 2025-10-07 15:09:09.185868463 +0000 UTC m=+0.148358313 container attach 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:09:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:10 compute-0 modest_khayyam[442632]: {
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_id": 2,
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "type": "bluestore"
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     },
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_id": 1,
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "type": "bluestore"
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     },
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_id": 0,
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:         "type": "bluestore"
Oct 07 15:09:10 compute-0 modest_khayyam[442632]:     }
Oct 07 15:09:10 compute-0 modest_khayyam[442632]: }
Oct 07 15:09:10 compute-0 systemd[1]: libpod-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Deactivated successfully.
Oct 07 15:09:10 compute-0 podman[442617]: 2025-10-07 15:09:10.179691223 +0000 UTC m=+1.142181083 container died 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 15:09:10 compute-0 systemd[1]: libpod-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Consumed 1.004s CPU time.
Oct 07 15:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d-merged.mount: Deactivated successfully.
Oct 07 15:09:10 compute-0 podman[442617]: 2025-10-07 15:09:10.265963712 +0000 UTC m=+1.228453562 container remove 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:09:10 compute-0 systemd[1]: libpod-conmon-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Deactivated successfully.
Oct 07 15:09:10 compute-0 ceph-mon[74295]: pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:10 compute-0 sudo[442509]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:09:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:09:10 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev eaf22875-8f08-424e-a7fd-216145d63d4a does not exist
Oct 07 15:09:10 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev febe4991-3f83-4f46-9d8e-8d98b2446073 does not exist
Oct 07 15:09:10 compute-0 sudo[442679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:09:10 compute-0 sudo[442679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:10 compute-0 sudo[442679]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:10 compute-0 sudo[442704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:09:10 compute-0 sudo[442704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:09:10 compute-0 sudo[442704]: pam_unix(sudo:session): session closed for user root
Oct 07 15:09:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:09:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:12 compute-0 ceph-mon[74295]: pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:13 compute-0 nova_compute[259550]: 2025-10-07 15:09:13.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:14 compute-0 nova_compute[259550]: 2025-10-07 15:09:14.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:14 compute-0 ceph-mon[74295]: pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:14 compute-0 nova_compute[259550]: 2025-10-07 15:09:14.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:16 compute-0 ceph-mon[74295]: pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:18 compute-0 podman[442729]: 2025-10-07 15:09:18.09551628 +0000 UTC m=+0.077118759 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct 07 15:09:18 compute-0 podman[442730]: 2025-10-07 15:09:18.10011202 +0000 UTC m=+0.071799009 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 15:09:18 compute-0 ceph-mon[74295]: pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:18 compute-0 nova_compute[259550]: 2025-10-07 15:09:18.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:19 compute-0 nova_compute[259550]: 2025-10-07 15:09:19.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:20 compute-0 ceph-mon[74295]: pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:21 compute-0 nova_compute[259550]: 2025-10-07 15:09:21.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:21 compute-0 nova_compute[259550]: 2025-10-07 15:09:21.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:22 compute-0 ceph-mon[74295]: pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:09:22
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 07 15:09:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:09:22 compute-0 nova_compute[259550]: 2025-10-07 15:09:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:09:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:09:23 compute-0 nova_compute[259550]: 2025-10-07 15:09:23.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:23 compute-0 nova_compute[259550]: 2025-10-07 15:09:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:23 compute-0 nova_compute[259550]: 2025-10-07 15:09:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:09:24 compute-0 nova_compute[259550]: 2025-10-07 15:09:24.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:24 compute-0 ceph-mon[74295]: pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:26 compute-0 ceph-mon[74295]: pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:27 compute-0 nova_compute[259550]: 2025-10-07 15:09:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:28 compute-0 ceph-mon[74295]: pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:28 compute-0 nova_compute[259550]: 2025-10-07 15:09:28.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:29 compute-0 nova_compute[259550]: 2025-10-07 15:09:29.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:30 compute-0 ceph-mon[74295]: pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:32 compute-0 ceph-mon[74295]: pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:09:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:09:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:09:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:09:32 compute-0 podman[442765]: 2025-10-07 15:09:32.977860912 +0000 UTC m=+0.091439976 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:09:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:09:32 compute-0 podman[442766]: 2025-10-07 15:09:32.996387599 +0000 UTC m=+0.118946909 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:09:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:33 compute-0 nova_compute[259550]: 2025-10-07 15:09:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:09:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:09:34 compute-0 nova_compute[259550]: 2025-10-07 15:09:34.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:34 compute-0 ceph-mon[74295]: pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:35 compute-0 ceph-mon[74295]: pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:35 compute-0 nova_compute[259550]: 2025-10-07 15:09:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.104 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.104 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:09:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:09:36 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536468881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.578 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.785 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:09:36 compute-0 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:09:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3536468881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.029 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.124 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 15:09:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.261 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.279 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 15:09:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.328 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.348 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:09:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:09:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173276964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.793 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.799 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.819 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.821 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:09:37 compute-0 nova_compute[259550]: 2025-10-07 15:09:37.821 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:09:38 compute-0 ceph-mon[74295]: pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3173276964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:09:38 compute-0 nova_compute[259550]: 2025-10-07 15:09:38.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:39 compute-0 nova_compute[259550]: 2025-10-07 15:09:39.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:40 compute-0 ceph-mon[74295]: pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:41 compute-0 nova_compute[259550]: 2025-10-07 15:09:41.822 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:41 compute-0 nova_compute[259550]: 2025-10-07 15:09:41.823 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:09:41 compute-0 nova_compute[259550]: 2025-10-07 15:09:41.823 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:09:41 compute-0 nova_compute[259550]: 2025-10-07 15:09:41.847 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:09:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:42 compute-0 ceph-mon[74295]: pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:42 compute-0 nova_compute[259550]: 2025-10-07 15:09:42.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:09:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:43 compute-0 nova_compute[259550]: 2025-10-07 15:09:43.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:44 compute-0 nova_compute[259550]: 2025-10-07 15:09:44.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:44 compute-0 ceph-mon[74295]: pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:46 compute-0 ceph-mon[74295]: pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:48 compute-0 nova_compute[259550]: 2025-10-07 15:09:48.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:48 compute-0 ceph-mon[74295]: pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:49 compute-0 podman[442853]: 2025-10-07 15:09:49.101643347 +0000 UTC m=+0.084232687 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct 07 15:09:49 compute-0 podman[442854]: 2025-10-07 15:09:49.121988792 +0000 UTC m=+0.101194293 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:09:49 compute-0 nova_compute[259550]: 2025-10-07 15:09:49.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:50 compute-0 ceph-mon[74295]: pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:52 compute-0 ceph-mon[74295]: pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:09:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:09:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:53 compute-0 nova_compute[259550]: 2025-10-07 15:09:53.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:54 compute-0 nova_compute[259550]: 2025-10-07 15:09:54.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:54 compute-0 ceph-mon[74295]: pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:56 compute-0 ceph-mon[74295]: pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:09:58 compute-0 nova_compute[259550]: 2025-10-07 15:09:58.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:58 compute-0 ceph-mon[74295]: pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:09:59 compute-0 nova_compute[259550]: 2025-10-07 15:09:59.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:09:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.112 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.112 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:10:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.113 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:10:00 compute-0 ceph-mon[74295]: pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:01 compute-0 ceph-mon[74295]: pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:03 compute-0 nova_compute[259550]: 2025-10-07 15:10:03.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:04 compute-0 podman[442895]: 2025-10-07 15:10:04.059603617 +0000 UTC m=+0.052125522 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 07 15:10:04 compute-0 podman[442896]: 2025-10-07 15:10:04.1038283 +0000 UTC m=+0.091223840 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:10:04 compute-0 nova_compute[259550]: 2025-10-07 15:10:04.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:04 compute-0 ceph-mon[74295]: pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:06 compute-0 ceph-mon[74295]: pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:08 compute-0 nova_compute[259550]: 2025-10-07 15:10:08.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:08 compute-0 ceph-mon[74295]: pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:09 compute-0 nova_compute[259550]: 2025-10-07 15:10:09.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:10 compute-0 sudo[442939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:10 compute-0 sudo[442939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:10 compute-0 sudo[442939]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:10 compute-0 sudo[442964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:10:10 compute-0 sudo[442964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:10 compute-0 sudo[442964]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:10 compute-0 sudo[442989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:10 compute-0 sudo[442989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:10 compute-0 sudo[442989]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:10 compute-0 ceph-mon[74295]: pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:10 compute-0 sudo[443014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:10:10 compute-0 sudo[443014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:11 compute-0 sudo[443014]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:11 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev af0deae0-6c88-4754-a4c3-0ff08117dac6 does not exist
Oct 07 15:10:11 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 91f9fdd7-9068-4ec1-801e-aabaeac65a50 does not exist
Oct 07 15:10:11 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 93f1c869-f13e-40a1-a805-24a2cc2fee31 does not exist
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:10:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:10:11 compute-0 sudo[443070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:11 compute-0 sudo[443070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:11 compute-0 sudo[443070]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:11 compute-0 sudo[443095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:10:11 compute-0 sudo[443095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:11 compute-0 sudo[443095]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:11 compute-0 sudo[443120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:11 compute-0 sudo[443120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:11 compute-0 sudo[443120]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:11 compute-0 sudo[443145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:10:11 compute-0 sudo[443145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:10:11 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.10647629 +0000 UTC m=+0.065403241 container create 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 07 15:10:12 compute-0 systemd[1]: Started libpod-conmon-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope.
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.07986716 +0000 UTC m=+0.038794151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.230172764 +0000 UTC m=+0.189099765 container init 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.237722972 +0000 UTC m=+0.196649923 container start 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 15:10:12 compute-0 brave_cartwright[443229]: 167 167
Oct 07 15:10:12 compute-0 systemd[1]: libpod-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope: Deactivated successfully.
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.244402828 +0000 UTC m=+0.203329879 container attach 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.245083445 +0000 UTC m=+0.204010436 container died 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 15:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-383adae7a6607bdab438cbb1fbac14962d085844ab7c3e92960cce5ad87865ae-merged.mount: Deactivated successfully.
Oct 07 15:10:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:12 compute-0 podman[443212]: 2025-10-07 15:10:12.325899101 +0000 UTC m=+0.284826052 container remove 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 15:10:12 compute-0 systemd[1]: libpod-conmon-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope: Deactivated successfully.
Oct 07 15:10:12 compute-0 podman[443256]: 2025-10-07 15:10:12.579180153 +0000 UTC m=+0.058356916 container create ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 15:10:12 compute-0 systemd[1]: Started libpod-conmon-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope.
Oct 07 15:10:12 compute-0 podman[443256]: 2025-10-07 15:10:12.560374959 +0000 UTC m=+0.039551692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:12 compute-0 podman[443256]: 2025-10-07 15:10:12.690619355 +0000 UTC m=+0.169796138 container init ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 15:10:12 compute-0 podman[443256]: 2025-10-07 15:10:12.69730566 +0000 UTC m=+0.176482383 container start ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:10:12 compute-0 podman[443256]: 2025-10-07 15:10:12.700866634 +0000 UTC m=+0.180043397 container attach ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 15:10:12 compute-0 ceph-mon[74295]: pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:13 compute-0 nova_compute[259550]: 2025-10-07 15:10:13.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:13 compute-0 confident_golick[443272]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:10:13 compute-0 confident_golick[443272]: --> relative data size: 1.0
Oct 07 15:10:13 compute-0 confident_golick[443272]: --> All data devices are unavailable
Oct 07 15:10:13 compute-0 systemd[1]: libpod-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Deactivated successfully.
Oct 07 15:10:13 compute-0 podman[443256]: 2025-10-07 15:10:13.856321615 +0000 UTC m=+1.335498358 container died ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:10:13 compute-0 systemd[1]: libpod-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Consumed 1.095s CPU time.
Oct 07 15:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712-merged.mount: Deactivated successfully.
Oct 07 15:10:13 compute-0 podman[443256]: 2025-10-07 15:10:13.9473672 +0000 UTC m=+1.426543923 container remove ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 15:10:13 compute-0 systemd[1]: libpod-conmon-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Deactivated successfully.
Oct 07 15:10:13 compute-0 sudo[443145]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:14 compute-0 sudo[443314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:14 compute-0 sudo[443314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:14 compute-0 sudo[443314]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:14 compute-0 sudo[443339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:10:14 compute-0 sudo[443339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:14 compute-0 sudo[443339]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:14 compute-0 sudo[443364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:14 compute-0 sudo[443364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:14 compute-0 sudo[443364]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:14 compute-0 nova_compute[259550]: 2025-10-07 15:10:14.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:14 compute-0 sudo[443389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:10:14 compute-0 sudo[443389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.605981173 +0000 UTC m=+0.030012570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.704534935 +0000 UTC m=+0.128566282 container create 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 15:10:14 compute-0 systemd[1]: Started libpod-conmon-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope.
Oct 07 15:10:14 compute-0 ceph-mon[74295]: pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.911416697 +0000 UTC m=+0.335448144 container init 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.923925096 +0000 UTC m=+0.347956483 container start 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:10:14 compute-0 agitated_benz[443471]: 167 167
Oct 07 15:10:14 compute-0 systemd[1]: libpod-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope: Deactivated successfully.
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.985590038 +0000 UTC m=+0.409621405 container attach 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:10:14 compute-0 podman[443455]: 2025-10-07 15:10:14.987533229 +0000 UTC m=+0.411564586 container died 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 15:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc43757abf960d6cf39aa8f4b42e958f70cb8e5c0bd1cb5b724f802aa847839-merged.mount: Deactivated successfully.
Oct 07 15:10:15 compute-0 podman[443455]: 2025-10-07 15:10:15.239187978 +0000 UTC m=+0.663219365 container remove 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 15:10:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:15 compute-0 systemd[1]: libpod-conmon-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope: Deactivated successfully.
Oct 07 15:10:15 compute-0 podman[443497]: 2025-10-07 15:10:15.52173589 +0000 UTC m=+0.106622906 container create 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:10:15 compute-0 podman[443497]: 2025-10-07 15:10:15.471597231 +0000 UTC m=+0.056484247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:15 compute-0 systemd[1]: Started libpod-conmon-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope.
Oct 07 15:10:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:15 compute-0 podman[443497]: 2025-10-07 15:10:15.654918253 +0000 UTC m=+0.239805239 container init 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:10:15 compute-0 podman[443497]: 2025-10-07 15:10:15.663129729 +0000 UTC m=+0.248016705 container start 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:10:15 compute-0 podman[443497]: 2025-10-07 15:10:15.669309081 +0000 UTC m=+0.254196067 container attach 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]: {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     "0": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "devices": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "/dev/loop3"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             ],
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_name": "ceph_lv0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_size": "21470642176",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "name": "ceph_lv0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "tags": {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_name": "ceph",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.crush_device_class": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.encrypted": "0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_id": "0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.vdo": "0"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             },
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "vg_name": "ceph_vg0"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         }
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     ],
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     "1": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "devices": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "/dev/loop4"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             ],
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_name": "ceph_lv1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_size": "21470642176",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "name": "ceph_lv1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "tags": {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_name": "ceph",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.crush_device_class": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.encrypted": "0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_id": "1",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.vdo": "0"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             },
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "vg_name": "ceph_vg1"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         }
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     ],
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     "2": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "devices": [
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "/dev/loop5"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             ],
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_name": "ceph_lv2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_size": "21470642176",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "name": "ceph_lv2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "tags": {
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.cluster_name": "ceph",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.crush_device_class": "",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.encrypted": "0",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osd_id": "2",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:                 "ceph.vdo": "0"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             },
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "type": "block",
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:             "vg_name": "ceph_vg2"
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:         }
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]:     ]
Oct 07 15:10:16 compute-0 ecstatic_joliot[443514]: }
Oct 07 15:10:16 compute-0 systemd[1]: libpod-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope: Deactivated successfully.
Oct 07 15:10:16 compute-0 podman[443497]: 2025-10-07 15:10:16.454185686 +0000 UTC m=+1.039072772 container died 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 15:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1-merged.mount: Deactivated successfully.
Oct 07 15:10:16 compute-0 podman[443497]: 2025-10-07 15:10:16.567234328 +0000 UTC m=+1.152121314 container remove 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:10:16 compute-0 systemd[1]: libpod-conmon-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope: Deactivated successfully.
Oct 07 15:10:16 compute-0 sudo[443389]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:16 compute-0 sudo[443537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:16 compute-0 sudo[443537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:16 compute-0 sudo[443537]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:16 compute-0 sudo[443562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:10:16 compute-0 sudo[443562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:16 compute-0 sudo[443562]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:16 compute-0 sudo[443587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:16 compute-0 sudo[443587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:16 compute-0 sudo[443587]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:16 compute-0 sudo[443612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:10:16 compute-0 ceph-mon[74295]: pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:16 compute-0 sudo[443612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:16 compute-0 nova_compute[259550]: 2025-10-07 15:10:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.308836775 +0000 UTC m=+0.054107845 container create e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.279034101 +0000 UTC m=+0.024305201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:17 compute-0 systemd[1]: Started libpod-conmon-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope.
Oct 07 15:10:17 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.484916176 +0000 UTC m=+0.230187346 container init e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.492394483 +0000 UTC m=+0.237665563 container start e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 15:10:17 compute-0 zealous_haibt[443693]: 167 167
Oct 07 15:10:17 compute-0 systemd[1]: libpod-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope: Deactivated successfully.
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.517741119 +0000 UTC m=+0.263012209 container attach e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.518239122 +0000 UTC m=+0.263510192 container died e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce2aa59bacdaa94bbda2ebec2eeed915a302383420c7644425e74ce5276b3c96-merged.mount: Deactivated successfully.
Oct 07 15:10:17 compute-0 podman[443677]: 2025-10-07 15:10:17.685009489 +0000 UTC m=+0.430280559 container remove e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:10:17 compute-0 systemd[1]: libpod-conmon-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope: Deactivated successfully.
Oct 07 15:10:17 compute-0 podman[443721]: 2025-10-07 15:10:17.899319716 +0000 UTC m=+0.077261774 container create 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:10:17 compute-0 podman[443721]: 2025-10-07 15:10:17.849973667 +0000 UTC m=+0.027915775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:10:17 compute-0 ceph-mon[74295]: pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:17 compute-0 systemd[1]: Started libpod-conmon-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope.
Oct 07 15:10:18 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:10:18 compute-0 podman[443721]: 2025-10-07 15:10:18.050510243 +0000 UTC m=+0.228452321 container init 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:10:18 compute-0 podman[443721]: 2025-10-07 15:10:18.060830454 +0000 UTC m=+0.238772522 container start 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:10:18 compute-0 podman[443721]: 2025-10-07 15:10:18.063924675 +0000 UTC m=+0.241866743 container attach 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:10:18 compute-0 nova_compute[259550]: 2025-10-07 15:10:18.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:19 compute-0 goofy_euler[443738]: {
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_id": 2,
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "type": "bluestore"
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     },
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_id": 1,
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "type": "bluestore"
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     },
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_id": 0,
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:10:19 compute-0 goofy_euler[443738]:         "type": "bluestore"
Oct 07 15:10:19 compute-0 goofy_euler[443738]:     }
Oct 07 15:10:19 compute-0 goofy_euler[443738]: }
Oct 07 15:10:19 compute-0 systemd[1]: libpod-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Deactivated successfully.
Oct 07 15:10:19 compute-0 systemd[1]: libpod-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Consumed 1.027s CPU time.
Oct 07 15:10:19 compute-0 podman[443721]: 2025-10-07 15:10:19.120344302 +0000 UTC m=+1.298286370 container died 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 07 15:10:19 compute-0 nova_compute[259550]: 2025-10-07 15:10:19.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:19 compute-0 podman[443771]: 2025-10-07 15:10:19.23547728 +0000 UTC m=+0.084992536 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 15:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4-merged.mount: Deactivated successfully.
Oct 07 15:10:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:19 compute-0 podman[443721]: 2025-10-07 15:10:19.463123129 +0000 UTC m=+1.641065227 container remove 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:10:19 compute-0 systemd[1]: libpod-conmon-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Deactivated successfully.
Oct 07 15:10:19 compute-0 podman[443778]: 2025-10-07 15:10:19.487819828 +0000 UTC m=+0.333289608 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:10:19 compute-0 sudo[443612]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:10:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:10:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ad90d8f7-32b5-42e4-b9ab-9f71fd78de87 does not exist
Oct 07 15:10:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a0b24251-ae28-473b-8e49-a80d49cbe9ab does not exist
Oct 07 15:10:19 compute-0 sudo[443823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:10:19 compute-0 sudo[443823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:19 compute-0 sudo[443823]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:19 compute-0 sudo[443848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:10:19 compute-0 sudo[443848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:10:19 compute-0 sudo[443848]: pam_unix(sudo:session): session closed for user root
Oct 07 15:10:20 compute-0 ceph-mon[74295]: pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:20 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:20 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:10:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:10:22
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct 07 15:10:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:10:22 compute-0 ceph-mon[74295]: pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:22 compute-0 nova_compute[259550]: 2025-10-07 15:10:22.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:10:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:10:23 compute-0 nova_compute[259550]: 2025-10-07 15:10:23.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:23 compute-0 nova_compute[259550]: 2025-10-07 15:10:23.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:23 compute-0 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:23 compute-0 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:23 compute-0 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.074265) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824074332, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1636, "num_deletes": 252, "total_data_size": 2580163, "memory_usage": 2617152, "flush_reason": "Manual Compaction"}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824151074, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2532535, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66490, "largest_seqno": 68125, "table_properties": {"data_size": 2524947, "index_size": 4530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15629, "raw_average_key_size": 20, "raw_value_size": 2509723, "raw_average_value_size": 3225, "num_data_blocks": 202, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849657, "oldest_key_time": 1759849657, "file_creation_time": 1759849824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 76867 microseconds, and 10435 cpu microseconds.
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:10:24 compute-0 ceph-mon[74295]: pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.151129) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2532535 bytes OK
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.151155) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242605) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242652) EVENT_LOG_v1 {"time_micros": 1759849824242641, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242681) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2573115, prev total WAL file size 2574272, number of live WAL files 2.
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.246434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2473KB)], [158(10170KB)]
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824246569, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12947275, "oldest_snapshot_seqno": -1}
Oct 07 15:10:24 compute-0 nova_compute[259550]: 2025-10-07 15:10:24.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8585 keys, 11217433 bytes, temperature: kUnknown
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824499333, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11217433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11161214, "index_size": 33642, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225140, "raw_average_key_size": 26, "raw_value_size": 11009122, "raw_average_value_size": 1282, "num_data_blocks": 1306, "num_entries": 8585, "num_filter_entries": 8585, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.499995) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11217433 bytes
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.541509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.2 rd, 44.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.9 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(9.5) write-amplify(4.4) OK, records in: 9105, records dropped: 520 output_compression: NoCompression
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.541542) EVENT_LOG_v1 {"time_micros": 1759849824541529, "job": 98, "event": "compaction_finished", "compaction_time_micros": 253036, "compaction_time_cpu_micros": 56264, "output_level": 6, "num_output_files": 1, "total_output_size": 11217433, "num_input_records": 9105, "num_output_records": 8585, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824542288, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824544394, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.246249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:24 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:10:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:26 compute-0 ceph-mon[74295]: pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:27 compute-0 nova_compute[259550]: 2025-10-07 15:10:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:28 compute-0 ceph-mon[74295]: pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:28 compute-0 nova_compute[259550]: 2025-10-07 15:10:28.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:29 compute-0 nova_compute[259550]: 2025-10-07 15:10:29.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:29 compute-0 sshd-session[443873]: Accepted publickey for zuul from 192.168.122.30 port 39932 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:10:29 compute-0 systemd-logind[801]: New session 54 of user zuul.
Oct 07 15:10:29 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 07 15:10:29 compute-0 sshd-session[443873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:10:29 compute-0 nova_compute[259550]: 2025-10-07 15:10:29.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:30 compute-0 ceph-mon[74295]: pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.524 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.526 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:10:31 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.526 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:10:31 compute-0 nova_compute[259550]: 2025-10-07 15:10:31.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:32 compute-0 ceph-mon[74295]: pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:10:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:10:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:10:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:10:32 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:10:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:33 compute-0 nova_compute[259550]: 2025-10-07 15:10:33.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:10:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:10:34 compute-0 nova_compute[259550]: 2025-10-07 15:10:34.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:34 compute-0 ceph-mon[74295]: pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:35 compute-0 podman[444107]: 2025-10-07 15:10:35.089919243 +0000 UTC m=+0.072872857 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 07 15:10:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:10:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1391 writes, 6281 keys, 1391 commit groups, 1.0 writes per commit group, ingest: 8.95 MB, 0.01 MB/s
                                           Interval WAL: 1391 writes, 1391 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     69.9      1.20              0.26        49    0.025       0      0       0.0       0.0
                                             L6      1/0   10.70 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    133.2    112.7      3.59              1.20        48    0.075    318K    25K       0.0       0.0
                                            Sum      1/0   10.70 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     99.7    102.0      4.79              1.47        97    0.049    318K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     84.6     86.2      0.67              0.21        10    0.067     44K   2597       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    133.2    112.7      3.59              1.20        48    0.075    318K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     70.0      1.20              0.26        48    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.08 MB/s write, 0.47 GB read, 0.08 MB/s read, 4.8 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 55.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000446 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3552,52.79 MB,17.3649%) FilterBlock(98,887.67 KB,0.285154%) IndexBlock(98,1.42 MB,0.468078%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 15:10:35 compute-0 podman[444108]: 2025-10-07 15:10:35.142581489 +0000 UTC m=+0.115104249 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:10:35 compute-0 sshd-session[443876]: Connection closed by 192.168.122.30 port 39932
Oct 07 15:10:35 compute-0 sshd-session[443873]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:10:35 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 07 15:10:35 compute-0 systemd-logind[801]: Session 54 logged out. Waiting for processes to exit.
Oct 07 15:10:35 compute-0 systemd-logind[801]: Removed session 54.
Oct 07 15:10:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:36 compute-0 ceph-mon[74295]: pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:37 compute-0 nova_compute[259550]: 2025-10-07 15:10:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.113 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.115 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.115 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:10:38 compute-0 ceph-mon[74295]: pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:10:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706709220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.590 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.748 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3621MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:10:38 compute-0 nova_compute[259550]: 2025-10-07 15:10:38.857 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:10:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:10:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619261694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.282 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.289 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.378 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.381 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:10:39 compute-0 nova_compute[259550]: 2025-10-07 15:10:39.381 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:10:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2706709220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:10:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/619261694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:10:40 compute-0 ceph-mon[74295]: pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:42 compute-0 ceph-mon[74295]: pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.382 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.382 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.383 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.404 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:43 compute-0 nova_compute[259550]: 2025-10-07 15:10:43.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:10:44 compute-0 nova_compute[259550]: 2025-10-07 15:10:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:44 compute-0 ceph-mon[74295]: pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:46 compute-0 ceph-mon[74295]: pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:48 compute-0 ceph-mon[74295]: pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:48 compute-0 nova_compute[259550]: 2025-10-07 15:10:48.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:49 compute-0 nova_compute[259550]: 2025-10-07 15:10:49.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:50 compute-0 podman[444218]: 2025-10-07 15:10:50.08303917 +0000 UTC m=+0.066993204 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 15:10:50 compute-0 podman[444219]: 2025-10-07 15:10:50.094491241 +0000 UTC m=+0.080033706 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 07 15:10:50 compute-0 ceph-mon[74295]: pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:52 compute-0 ceph-mon[74295]: pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:10:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:10:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:53 compute-0 nova_compute[259550]: 2025-10-07 15:10:53.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:54 compute-0 nova_compute[259550]: 2025-10-07 15:10:54.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:54 compute-0 ceph-mon[74295]: pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:56 compute-0 ceph-mon[74295]: pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:10:58 compute-0 ceph-mon[74295]: pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:58 compute-0 nova_compute[259550]: 2025-10-07 15:10:58.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:10:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:10:59 compute-0 nova_compute[259550]: 2025-10-07 15:10:59.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:11:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:11:01 compute-0 ceph-mon[74295]: pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:03 compute-0 nova_compute[259550]: 2025-10-07 15:11:03.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:04 compute-0 nova_compute[259550]: 2025-10-07 15:11:04.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:06 compute-0 podman[444258]: 2025-10-07 15:11:06.106579663 +0000 UTC m=+0.086919967 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:11:06 compute-0 podman[444259]: 2025-10-07 15:11:06.167716681 +0000 UTC m=+0.143421884 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 15:11:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:07 compute-0 ceph-mon[74295]: pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:08 compute-0 nova_compute[259550]: 2025-10-07 15:11:08.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:08 compute-0 ceph-mon[74295]: pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:08 compute-0 ceph-mon[74295]: pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:08 compute-0 ceph-mon[74295]: pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:09 compute-0 nova_compute[259550]: 2025-10-07 15:11:09.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:10 compute-0 ceph-mon[74295]: pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:12 compute-0 ceph-mon[74295]: pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:13 compute-0 nova_compute[259550]: 2025-10-07 15:11:13.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:14 compute-0 nova_compute[259550]: 2025-10-07 15:11:14.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:14 compute-0 ceph-mon[74295]: pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:16 compute-0 ceph-mon[74295]: pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:16 compute-0 nova_compute[259550]: 2025-10-07 15:11:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:18 compute-0 ceph-mon[74295]: pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:18 compute-0 nova_compute[259550]: 2025-10-07 15:11:18.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:19 compute-0 nova_compute[259550]: 2025-10-07 15:11:19.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:19 compute-0 sudo[444301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:19 compute-0 sudo[444301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:19 compute-0 sudo[444301]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:19 compute-0 sudo[444326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:11:19 compute-0 sudo[444326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:19 compute-0 sudo[444326]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:20 compute-0 sudo[444351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:20 compute-0 sudo[444351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:20 compute-0 sudo[444351]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:20 compute-0 sudo[444376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:11:20 compute-0 sudo[444376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:20 compute-0 ceph-mon[74295]: pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:20 compute-0 sudo[444376]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:20 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 96fc50dc-9cc2-4f98-9126-df10c0a7de94 does not exist
Oct 07 15:11:20 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0eeb8113-bf92-4b09-89cb-f85ff60210ee does not exist
Oct 07 15:11:20 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 228daef2-7cc3-4856-ae5e-f0aee3a6c8df does not exist
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:11:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:11:20 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:11:20 compute-0 sudo[444432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:20 compute-0 sudo[444432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:20 compute-0 sudo[444432]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:20 compute-0 sudo[444469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:11:20 compute-0 sudo[444469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:20 compute-0 sudo[444469]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:20 compute-0 podman[444456]: 2025-10-07 15:11:20.885292079 +0000 UTC m=+0.087418740 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 15:11:20 compute-0 podman[444457]: 2025-10-07 15:11:20.886257265 +0000 UTC m=+0.089341451 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 15:11:20 compute-0 sudo[444518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:20 compute-0 sudo[444518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:20 compute-0 sudo[444518]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:21 compute-0 sudo[444544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:11:21 compute-0 sudo[444544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.386483592 +0000 UTC m=+0.035025452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.560425607 +0000 UTC m=+0.208967447 container create 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:11:21 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:11:21 compute-0 systemd[1]: Started libpod-conmon-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope.
Oct 07 15:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.929463864 +0000 UTC m=+0.578005724 container init 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.942565018 +0000 UTC m=+0.591106888 container start 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:21 compute-0 mystifying_heyrovsky[444628]: 167 167
Oct 07 15:11:21 compute-0 systemd[1]: libpod-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope: Deactivated successfully.
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.992870932 +0000 UTC m=+0.641412802 container attach 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:11:21 compute-0 podman[444611]: 2025-10-07 15:11:21.994912316 +0000 UTC m=+0.643454146 container died 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 15:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b6090f2b4af7a8e38e43fc63d0af7d11b87a1d5b66ea5bee993725b846d312b-merged.mount: Deactivated successfully.
Oct 07 15:11:22 compute-0 podman[444611]: 2025-10-07 15:11:22.243987616 +0000 UTC m=+0.892529506 container remove 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:11:22 compute-0 systemd[1]: libpod-conmon-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope: Deactivated successfully.
Oct 07 15:11:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:22 compute-0 podman[444652]: 2025-10-07 15:11:22.416158505 +0000 UTC m=+0.045649301 container create c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:22 compute-0 systemd[1]: Started libpod-conmon-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope.
Oct 07 15:11:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:22 compute-0 podman[444652]: 2025-10-07 15:11:22.395736118 +0000 UTC m=+0.025226944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:22 compute-0 podman[444652]: 2025-10-07 15:11:22.498508071 +0000 UTC m=+0.127998887 container init c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:11:22 compute-0 podman[444652]: 2025-10-07 15:11:22.507735614 +0000 UTC m=+0.137226410 container start c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:11:22 compute-0 podman[444652]: 2025-10-07 15:11:22.511150944 +0000 UTC m=+0.140641730 container attach c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:11:22
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct 07 15:11:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:11:22 compute-0 ceph-mon[74295]: pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:11:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:11:23 compute-0 nova_compute[259550]: 2025-10-07 15:11:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:23 compute-0 modest_dubinsky[444668]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:11:23 compute-0 modest_dubinsky[444668]: --> relative data size: 1.0
Oct 07 15:11:23 compute-0 modest_dubinsky[444668]: --> All data devices are unavailable
Oct 07 15:11:23 compute-0 systemd[1]: libpod-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Deactivated successfully.
Oct 07 15:11:23 compute-0 systemd[1]: libpod-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Consumed 1.139s CPU time.
Oct 07 15:11:23 compute-0 podman[444652]: 2025-10-07 15:11:23.704647806 +0000 UTC m=+1.334138612 container died c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898-merged.mount: Deactivated successfully.
Oct 07 15:11:23 compute-0 podman[444652]: 2025-10-07 15:11:23.777319678 +0000 UTC m=+1.406810474 container remove c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:11:23 compute-0 systemd[1]: libpod-conmon-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Deactivated successfully.
Oct 07 15:11:23 compute-0 sudo[444544]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:23 compute-0 sudo[444712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:23 compute-0 sudo[444712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:23 compute-0 sudo[444712]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:23 compute-0 sudo[444737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:11:23 compute-0 sudo[444737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:23 compute-0 sudo[444737]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:23 compute-0 nova_compute[259550]: 2025-10-07 15:11:23.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:23 compute-0 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:23 compute-0 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:23 compute-0 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:11:24 compute-0 sudo[444762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:24 compute-0 sudo[444762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:24 compute-0 sudo[444762]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:24 compute-0 sudo[444787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:11:24 compute-0 sudo[444787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.403302903 +0000 UTC m=+0.037509878 container create d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 15:11:24 compute-0 systemd[1]: Started libpod-conmon-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope.
Oct 07 15:11:24 compute-0 nova_compute[259550]: 2025-10-07 15:11:24.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.386336816 +0000 UTC m=+0.020543811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.482046984 +0000 UTC m=+0.116253979 container init d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.488432121 +0000 UTC m=+0.122639096 container start d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.4921708 +0000 UTC m=+0.126377795 container attach d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 15:11:24 compute-0 jovial_kilby[444869]: 167 167
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.493621168 +0000 UTC m=+0.127828143 container died d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:24 compute-0 systemd[1]: libpod-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope: Deactivated successfully.
Oct 07 15:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2d185594dd18b91d7d49c9483425f3ab5a77409f93ef3c5338f52265c849d5-merged.mount: Deactivated successfully.
Oct 07 15:11:24 compute-0 podman[444853]: 2025-10-07 15:11:24.537292596 +0000 UTC m=+0.171499561 container remove d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:11:24 compute-0 systemd[1]: libpod-conmon-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope: Deactivated successfully.
Oct 07 15:11:24 compute-0 podman[444892]: 2025-10-07 15:11:24.686384068 +0000 UTC m=+0.042496549 container create 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 15:11:24 compute-0 systemd[1]: Started libpod-conmon-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope.
Oct 07 15:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:24 compute-0 podman[444892]: 2025-10-07 15:11:24.666696 +0000 UTC m=+0.022808501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:24 compute-0 podman[444892]: 2025-10-07 15:11:24.784063078 +0000 UTC m=+0.140175579 container init 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:11:24 compute-0 podman[444892]: 2025-10-07 15:11:24.792018626 +0000 UTC m=+0.148131107 container start 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:24 compute-0 podman[444892]: 2025-10-07 15:11:24.796961867 +0000 UTC m=+0.153074368 container attach 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:11:24 compute-0 ceph-mon[74295]: pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:24 compute-0 nova_compute[259550]: 2025-10-07 15:11:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:25 compute-0 elated_allen[444908]: {
Oct 07 15:11:25 compute-0 elated_allen[444908]:     "0": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:         {
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "devices": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "/dev/loop3"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             ],
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_name": "ceph_lv0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_size": "21470642176",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "name": "ceph_lv0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "tags": {
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_name": "ceph",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.crush_device_class": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.encrypted": "0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_id": "0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.vdo": "0"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             },
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "vg_name": "ceph_vg0"
Oct 07 15:11:25 compute-0 elated_allen[444908]:         }
Oct 07 15:11:25 compute-0 elated_allen[444908]:     ],
Oct 07 15:11:25 compute-0 elated_allen[444908]:     "1": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:         {
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "devices": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "/dev/loop4"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             ],
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_name": "ceph_lv1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_size": "21470642176",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "name": "ceph_lv1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "tags": {
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_name": "ceph",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.crush_device_class": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.encrypted": "0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_id": "1",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.vdo": "0"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             },
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "vg_name": "ceph_vg1"
Oct 07 15:11:25 compute-0 elated_allen[444908]:         }
Oct 07 15:11:25 compute-0 elated_allen[444908]:     ],
Oct 07 15:11:25 compute-0 elated_allen[444908]:     "2": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:         {
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "devices": [
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "/dev/loop5"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             ],
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_name": "ceph_lv2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_size": "21470642176",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "name": "ceph_lv2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "tags": {
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.cluster_name": "ceph",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.crush_device_class": "",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.encrypted": "0",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osd_id": "2",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:                 "ceph.vdo": "0"
Oct 07 15:11:25 compute-0 elated_allen[444908]:             },
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "type": "block",
Oct 07 15:11:25 compute-0 elated_allen[444908]:             "vg_name": "ceph_vg2"
Oct 07 15:11:25 compute-0 elated_allen[444908]:         }
Oct 07 15:11:25 compute-0 elated_allen[444908]:     ]
Oct 07 15:11:25 compute-0 elated_allen[444908]: }
Oct 07 15:11:25 compute-0 systemd[1]: libpod-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope: Deactivated successfully.
Oct 07 15:11:25 compute-0 podman[444892]: 2025-10-07 15:11:25.667528305 +0000 UTC m=+1.023640806 container died 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77-merged.mount: Deactivated successfully.
Oct 07 15:11:25 compute-0 podman[444892]: 2025-10-07 15:11:25.734771534 +0000 UTC m=+1.090884015 container remove 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:11:25 compute-0 systemd[1]: libpod-conmon-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope: Deactivated successfully.
Oct 07 15:11:25 compute-0 sudo[444787]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:25 compute-0 sudo[444929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:25 compute-0 sudo[444929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:25 compute-0 sudo[444929]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:25 compute-0 sudo[444954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:11:25 compute-0 sudo[444954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:25 compute-0 sudo[444954]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:25 compute-0 sudo[444979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:25 compute-0 sudo[444979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:25 compute-0 sudo[444979]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:26 compute-0 sudo[445004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:11:26 compute-0 sudo[445004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.368328887 +0000 UTC m=+0.047598283 container create 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:11:26 compute-0 systemd[1]: Started libpod-conmon-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope.
Oct 07 15:11:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.347788637 +0000 UTC m=+0.027058123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.456216799 +0000 UTC m=+0.135486265 container init 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.463902161 +0000 UTC m=+0.143171557 container start 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.467994409 +0000 UTC m=+0.147263805 container attach 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 15:11:26 compute-0 condescending_sammet[445087]: 167 167
Oct 07 15:11:26 compute-0 systemd[1]: libpod-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope: Deactivated successfully.
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.471969533 +0000 UTC m=+0.151239019 container died 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 15:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-da17b1272f1a88c292bbb6d0561d27cfbffb535a923e2aa04bcd5520f0a882a4-merged.mount: Deactivated successfully.
Oct 07 15:11:26 compute-0 podman[445070]: 2025-10-07 15:11:26.518006284 +0000 UTC m=+0.197275680 container remove 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:11:26 compute-0 systemd[1]: libpod-conmon-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope: Deactivated successfully.
Oct 07 15:11:26 compute-0 podman[445111]: 2025-10-07 15:11:26.677223272 +0000 UTC m=+0.039799288 container create 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:11:26 compute-0 systemd[1]: Started libpod-conmon-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope.
Oct 07 15:11:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:11:26 compute-0 podman[445111]: 2025-10-07 15:11:26.756325363 +0000 UTC m=+0.118901319 container init 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:11:26 compute-0 podman[445111]: 2025-10-07 15:11:26.660967705 +0000 UTC m=+0.023543681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:11:26 compute-0 podman[445111]: 2025-10-07 15:11:26.762749922 +0000 UTC m=+0.125325868 container start 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:11:26 compute-0 podman[445111]: 2025-10-07 15:11:26.769744926 +0000 UTC m=+0.132320892 container attach 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:11:26 compute-0 ceph-mon[74295]: pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:27 compute-0 interesting_swanson[445127]: {
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_id": 2,
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "type": "bluestore"
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     },
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_id": 1,
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "type": "bluestore"
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     },
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_id": 0,
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:         "type": "bluestore"
Oct 07 15:11:27 compute-0 interesting_swanson[445127]:     }
Oct 07 15:11:27 compute-0 interesting_swanson[445127]: }
Oct 07 15:11:27 compute-0 systemd[1]: libpod-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope: Deactivated successfully.
Oct 07 15:11:27 compute-0 podman[445111]: 2025-10-07 15:11:27.698670439 +0000 UTC m=+1.061246385 container died 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086-merged.mount: Deactivated successfully.
Oct 07 15:11:27 compute-0 podman[445111]: 2025-10-07 15:11:27.837074059 +0000 UTC m=+1.199650005 container remove 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 15:11:27 compute-0 sudo[445004]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:27 compute-0 systemd[1]: libpod-conmon-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope: Deactivated successfully.
Oct 07 15:11:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:11:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:11:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev fd0a614e-7087-410d-abb3-446994592cfa does not exist
Oct 07 15:11:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 1559e2ad-7f8d-4a95-acb2-f3b784b7c680 does not exist
Oct 07 15:11:27 compute-0 sudo[445171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:11:27 compute-0 sudo[445171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:27 compute-0 sudo[445171]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:28 compute-0 sudo[445196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:11:28 compute-0 sudo[445196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:11:28 compute-0 sudo[445196]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:28 compute-0 nova_compute[259550]: 2025-10-07 15:11:28.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:28 compute-0 nova_compute[259550]: 2025-10-07 15:11:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:29 compute-0 ceph-mon[74295]: pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:11:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:29 compute-0 nova_compute[259550]: 2025-10-07 15:11:29.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:30 compute-0 ceph-mon[74295]: pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:32 compute-0 ceph-mon[74295]: pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:11:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:11:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:11:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:11:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:11:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:11:33 compute-0 nova_compute[259550]: 2025-10-07 15:11:33.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:34 compute-0 nova_compute[259550]: 2025-10-07 15:11:34.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:34 compute-0 ceph-mon[74295]: pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:36 compute-0 ceph-mon[74295]: pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:37 compute-0 podman[445221]: 2025-10-07 15:11:37.09925518 +0000 UTC m=+0.078481066 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:11:37 compute-0 podman[445222]: 2025-10-07 15:11:37.134036005 +0000 UTC m=+0.105974119 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:11:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:38 compute-0 nova_compute[259550]: 2025-10-07 15:11:38.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:38 compute-0 ceph-mon[74295]: pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:39 compute-0 nova_compute[259550]: 2025-10-07 15:11:39.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:39 compute-0 nova_compute[259550]: 2025-10-07 15:11:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.011 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:11:40 compute-0 ceph-mon[74295]: pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:11:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028571804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.487 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.683 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.792 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.793 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:11:40 compute-0 nova_compute[259550]: 2025-10-07 15:11:40.812 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:11:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:11:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145876012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:11:41 compute-0 nova_compute[259550]: 2025-10-07 15:11:41.257 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:11:41 compute-0 nova_compute[259550]: 2025-10-07 15:11:41.265 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:11:41 compute-0 nova_compute[259550]: 2025-10-07 15:11:41.280 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:11:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:41 compute-0 nova_compute[259550]: 2025-10-07 15:11:41.282 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:11:41 compute-0 nova_compute[259550]: 2025-10-07 15:11:41.283 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:11:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3028571804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:11:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1145876012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:11:42 compute-0 ceph-mon[74295]: pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:43 compute-0 nova_compute[259550]: 2025-10-07 15:11:43.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.312 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:11:44 compute-0 ceph-mon[74295]: pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:44 compute-0 nova_compute[259550]: 2025-10-07 15:11:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:11:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:46 compute-0 ceph-mon[74295]: pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:48 compute-0 ceph-mon[74295]: pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:48 compute-0 nova_compute[259550]: 2025-10-07 15:11:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:49 compute-0 nova_compute[259550]: 2025-10-07 15:11:49.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:50 compute-0 ceph-mon[74295]: pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:51 compute-0 podman[445310]: 2025-10-07 15:11:51.069313519 +0000 UTC m=+0.061544730 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 15:11:51 compute-0 podman[445311]: 2025-10-07 15:11:51.069273768 +0000 UTC m=+0.059068944 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 07 15:11:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:52 compute-0 ceph-mon[74295]: pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:11:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:11:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:53 compute-0 nova_compute[259550]: 2025-10-07 15:11:53.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:53 compute-0 sshd-session[445349]: Accepted publickey for zuul from 192.168.122.30 port 36982 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:11:53 compute-0 systemd-logind[801]: New session 55 of user zuul.
Oct 07 15:11:53 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 07 15:11:53 compute-0 sshd-session[445349]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:11:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:53.978 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:11:53 compute-0 nova_compute[259550]: 2025-10-07 15:11:53.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:53 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:53.980 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:11:54 compute-0 sudo[445422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Oct 07 15:11:54 compute-0 sudo[445422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445422]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Oct 07 15:11:54 compute-0 sudo[445448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 groupadd[445450]: group added to /etc/group: name=podman, GID=42479
Oct 07 15:11:54 compute-0 groupadd[445450]: group added to /etc/gshadow: name=podman
Oct 07 15:11:54 compute-0 groupadd[445450]: new group: name=podman, GID=42479
Oct 07 15:11:54 compute-0 sudo[445448]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Oct 07 15:11:54 compute-0 sudo[445456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 nova_compute[259550]: 2025-10-07 15:11:54.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:54 compute-0 usermod[445458]: add 'zuul' to group 'podman'
Oct 07 15:11:54 compute-0 usermod[445458]: add 'zuul' to shadow group 'podman'
Oct 07 15:11:54 compute-0 sudo[445456]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 ceph-mon[74295]: pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:54 compute-0 sudo[445465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Oct 07 15:11:54 compute-0 sudo[445465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445465]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Oct 07 15:11:54 compute-0 sudo[445468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445468]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Oct 07 15:11:54 compute-0 sudo[445471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445471]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Oct 07 15:11:54 compute-0 sudo[445474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445474]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Oct 07 15:11:54 compute-0 sudo[445477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 sudo[445477]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:54 compute-0 sudo[445480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Oct 07 15:11:54 compute-0 sudo[445480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:54 compute-0 systemd[1]: Reloading.
Oct 07 15:11:54 compute-0 systemd-sysv-generator[445513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 07 15:11:54 compute-0 systemd-rc-local-generator[445509]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 07 15:11:55 compute-0 sudo[445480]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 systemd[1]: Starting dnf makecache...
Oct 07 15:11:55 compute-0 sudo[445517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Oct 07 15:11:55 compute-0 sudo[445517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:55 compute-0 sudo[445517]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Oct 07 15:11:55 compute-0 sudo[445521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 systemd[1]: Reloading.
Oct 07 15:11:55 compute-0 systemd-rc-local-generator[445550]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 07 15:11:55 compute-0 systemd-sysv-generator[445553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 07 15:11:55 compute-0 dnf[445518]: Metadata cache refreshed recently.
Oct 07 15:11:55 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 07 15:11:55 compute-0 systemd[1]: Finished dnf makecache.
Oct 07 15:11:55 compute-0 systemd[1]: Starting Podman API Socket...
Oct 07 15:11:55 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 07 15:11:55 compute-0 sudo[445521]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Oct 07 15:11:55 compute-0 sudo[445559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 sudo[445559]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Oct 07 15:11:55 compute-0 sudo[445562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 sudo[445562]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Oct 07 15:11:55 compute-0 sudo[445565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 sudo[445565]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Oct 07 15:11:55 compute-0 sudo[445568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:55 compute-0 sudo[445568]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:55 compute-0 sudo[445571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Oct 07 15:11:55 compute-0 sudo[445571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:56 compute-0 sudo[445571]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:56 compute-0 sudo[445574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Oct 07 15:11:56 compute-0 dbus-broker-launch[772]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 07 15:11:56 compute-0 sudo[445574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:56 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Oct 07 15:11:56 compute-0 systemd[1]: Closed Podman API Socket.
Oct 07 15:11:56 compute-0 systemd[1]: Stopping Podman API Socket...
Oct 07 15:11:56 compute-0 systemd[1]: Starting Podman API Socket...
Oct 07 15:11:56 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 07 15:11:56 compute-0 sudo[445574]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:56 compute-0 sudo[445425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Oct 07 15:11:56 compute-0 sudo[445425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:11:56 compute-0 sudo[445425]: pam_unix(sudo:session): session closed for user root
Oct 07 15:11:56 compute-0 sshd-session[445580]: Accepted publickey for zuul from 192.168.122.30 port 36986 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:11:56 compute-0 systemd-logind[801]: New session 56 of user zuul.
Oct 07 15:11:56 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 07 15:11:56 compute-0 sshd-session[445580]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:11:56 compute-0 systemd[1]: Starting Podman API Service...
Oct 07 15:11:56 compute-0 systemd[1]: Started Podman API Service.
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Setting parallel job count to 25"
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Using sqlite as database backend"
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 07 15:11:56 compute-0 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 07 15:11:56 compute-0 podman[445584]: @ - - [07/Oct/2025:15:11:56 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 07 15:11:56 compute-0 podman[445584]: @ - - [07/Oct/2025:15:11:56 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27465 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 07 15:11:56 compute-0 ceph-mon[74295]: pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:11:58 compute-0 ceph-mon[74295]: pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:58 compute-0 nova_compute[259550]: 2025-10-07 15:11:58.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:11:58 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:11:58.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:11:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:11:59 compute-0 nova_compute[259550]: 2025-10-07 15:11:59.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:12:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:12:00 compute-0 ceph-mon[74295]: pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:02 compute-0 ceph-mon[74295]: pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:03 compute-0 nova_compute[259550]: 2025-10-07 15:12:03.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:04 compute-0 nova_compute[259550]: 2025-10-07 15:12:04.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:04 compute-0 ceph-mon[74295]: pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:05 compute-0 ceph-mon[74295]: pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:08 compute-0 podman[445621]: 2025-10-07 15:12:08.101362597 +0000 UTC m=+0.086609979 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:12:08 compute-0 podman[445622]: 2025-10-07 15:12:08.123340515 +0000 UTC m=+0.094125736 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 15:12:08 compute-0 ceph-mon[74295]: pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:08 compute-0 nova_compute[259550]: 2025-10-07 15:12:08.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:09 compute-0 nova_compute[259550]: 2025-10-07 15:12:09.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:10 compute-0 ceph-mon[74295]: pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:11 compute-0 podman[445584]: time="2025-10-07T15:12:11Z" level=info msg="Received shutdown.Stop(), terminating!" PID=445584
Oct 07 15:12:11 compute-0 systemd[1]: podman.service: Deactivated successfully.
Oct 07 15:12:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:12 compute-0 ceph-mon[74295]: pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:13 compute-0 nova_compute[259550]: 2025-10-07 15:12:13.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:12:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 515 writes, 1459 keys, 515 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s
                                           Interval WAL: 515 writes, 229 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:12:13 compute-0 nova_compute[259550]: 2025-10-07 15:12:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:13 compute-0 nova_compute[259550]: 2025-10-07 15:12:13.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 15:12:14 compute-0 nova_compute[259550]: 2025-10-07 15:12:14.001 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 15:12:14 compute-0 nova_compute[259550]: 2025-10-07 15:12:14.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:14 compute-0 ceph-mon[74295]: pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:16 compute-0 ceph-mon[74295]: pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:16 compute-0 nova_compute[259550]: 2025-10-07 15:12:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:17 compute-0 nova_compute[259550]: 2025-10-07 15:12:17.995 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:12:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 645 writes, 1636 keys, 645 commit groups, 1.0 writes per commit group, ingest: 0.83 MB, 0.00 MB/s
                                           Interval WAL: 645 writes, 286 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:12:18 compute-0 ceph-mon[74295]: pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:18 compute-0 nova_compute[259550]: 2025-10-07 15:12:18.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:19 compute-0 nova_compute[259550]: 2025-10-07 15:12:19.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:19 compute-0 sudo[445666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Oct 07 15:12:19 compute-0 sudo[445666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:12:19 compute-0 sudo[445666]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:19 compute-0 sudo[445691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Oct 07 15:12:19 compute-0 sudo[445691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:12:19 compute-0 sudo[445691]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:20 compute-0 ceph-mon[74295]: pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:21 compute-0 ceph-mon[74295]: pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:22 compute-0 podman[445716]: 2025-10-07 15:12:22.097076521 +0000 UTC m=+0.078730472 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 15:12:22 compute-0 podman[445717]: 2025-10-07 15:12:22.101703523 +0000 UTC m=+0.081399543 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:12:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:12:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.448477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942448572, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1134, "num_deletes": 251, "total_data_size": 1665999, "memory_usage": 1686872, "flush_reason": "Manual Compaction"}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942464564, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 979743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68126, "largest_seqno": 69259, "table_properties": {"data_size": 975627, "index_size": 1703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11094, "raw_average_key_size": 20, "raw_value_size": 966559, "raw_average_value_size": 1799, "num_data_blocks": 78, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849824, "oldest_key_time": 1759849824, "file_creation_time": 1759849942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 16128 microseconds, and 5519 cpu microseconds.
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.464617) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 979743 bytes OK
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.464639) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472489) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472526) EVENT_LOG_v1 {"time_micros": 1759849942472518, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1660786, prev total WAL file size 1660786, number of live WAL files 2.
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.473450) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303035' seq:0, type:0; will stop at (end)
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(956KB)], [161(10MB)]
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942473524, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12197176, "oldest_snapshot_seqno": -1}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8659 keys, 9552040 bytes, temperature: kUnknown
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942644791, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9552040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9498690, "index_size": 30581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 226821, "raw_average_key_size": 26, "raw_value_size": 9348726, "raw_average_value_size": 1079, "num_data_blocks": 1182, "num_entries": 8659, "num_filter_entries": 8659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.645256) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9552040 bytes
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.677553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.2 rd, 55.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(22.2) write-amplify(9.7) OK, records in: 9122, records dropped: 463 output_compression: NoCompression
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.677592) EVENT_LOG_v1 {"time_micros": 1759849942677579, "job": 100, "event": "compaction_finished", "compaction_time_micros": 171425, "compaction_time_cpu_micros": 28623, "output_level": 6, "num_output_files": 1, "total_output_size": 9552040, "num_input_records": 9122, "num_output_records": 8659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942678016, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942680223, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.473330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:22 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:22 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:12:22
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Oct 07 15:12:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:12:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:12:23 compute-0 nova_compute[259550]: 2025-10-07 15:12:23.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:23 compute-0 nova_compute[259550]: 2025-10-07 15:12:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:23 compute-0 nova_compute[259550]: 2025-10-07 15:12:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:12:24 compute-0 sshd-session[445352]: Connection closed by 192.168.122.30 port 36982
Oct 07 15:12:24 compute-0 sshd-session[445349]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:12:24 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 07 15:12:24 compute-0 systemd[1]: session-55.scope: Consumed 1.410s CPU time.
Oct 07 15:12:24 compute-0 systemd-logind[801]: Session 55 logged out. Waiting for processes to exit.
Oct 07 15:12:24 compute-0 systemd-logind[801]: Removed session 55.
Oct 07 15:12:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 15:12:24 compute-0 nova_compute[259550]: 2025-10-07 15:12:24.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:24 compute-0 ceph-mon[74295]: pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:24 compute-0 nova_compute[259550]: 2025-10-07 15:12:24.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:25 compute-0 sshd-session[445583]: Connection closed by 192.168.122.30 port 36986
Oct 07 15:12:25 compute-0 sshd-session[445580]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:12:25 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 07 15:12:25 compute-0 systemd-logind[801]: Session 56 logged out. Waiting for processes to exit.
Oct 07 15:12:25 compute-0 systemd-logind[801]: Removed session 56.
Oct 07 15:12:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:25 compute-0 nova_compute[259550]: 2025-10-07 15:12:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:25 compute-0 nova_compute[259550]: 2025-10-07 15:12:25.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:26 compute-0 ceph-mon[74295]: pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:28 compute-0 sudo[445756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:28 compute-0 sudo[445756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:28 compute-0 sudo[445756]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:28 compute-0 sudo[445781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:12:28 compute-0 sudo[445781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:28 compute-0 sudo[445781]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:28 compute-0 sudo[445806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:28 compute-0 sudo[445806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:28 compute-0 sudo[445806]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:28 compute-0 sudo[445831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:12:28 compute-0 sudo[445831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:28 compute-0 ceph-mon[74295]: pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:28 compute-0 nova_compute[259550]: 2025-10-07 15:12:28.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:28 compute-0 sudo[445831]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b6e4585f-257e-4a83-a68c-23fe4045db25 does not exist
Oct 07 15:12:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cdd11784-248d-4c23-aaeb-63e76fe08c04 does not exist
Oct 07 15:12:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4602800b-1692-497d-96c0-1261e7b064fe does not exist
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:12:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:12:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:12:29 compute-0 sudo[445887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:29 compute-0 sudo[445887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:29 compute-0 sudo[445887]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:29 compute-0 sudo[445912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:12:29 compute-0 sudo[445912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:29 compute-0 sudo[445912]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:29 compute-0 sudo[445937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:29 compute-0 sudo[445937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:29 compute-0 sudo[445937]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:29 compute-0 sudo[445962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:12:29 compute-0 sudo[445962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:29 compute-0 nova_compute[259550]: 2025-10-07 15:12:29.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:29 compute-0 podman[446030]: 2025-10-07 15:12:29.566180368 +0000 UTC m=+0.024854055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:29 compute-0 podman[446030]: 2025-10-07 15:12:29.68985753 +0000 UTC m=+0.148531187 container create 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:12:29 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:12:29 compute-0 systemd[1]: Started libpod-conmon-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope.
Oct 07 15:12:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:29 compute-0 nova_compute[259550]: 2025-10-07 15:12:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:30 compute-0 podman[446030]: 2025-10-07 15:12:30.013739659 +0000 UTC m=+0.472413346 container init 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 07 15:12:30 compute-0 podman[446030]: 2025-10-07 15:12:30.027869311 +0000 UTC m=+0.486542958 container start 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:12:30 compute-0 cranky_babbage[446044]: 167 167
Oct 07 15:12:30 compute-0 systemd[1]: libpod-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope: Deactivated successfully.
Oct 07 15:12:30 compute-0 conmon[446044]: conmon 9d96c20b2d23e406d173 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope/container/memory.events
Oct 07 15:12:30 compute-0 podman[446030]: 2025-10-07 15:12:30.0939877 +0000 UTC m=+0.552661387 container attach 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:12:30 compute-0 podman[446030]: 2025-10-07 15:12:30.094522674 +0000 UTC m=+0.553196341 container died 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d4a1ff6f064a60bf568b9cf077049ebea338fe6bed9f01b521b4d6f9c319a3d-merged.mount: Deactivated successfully.
Oct 07 15:12:30 compute-0 podman[446030]: 2025-10-07 15:12:30.219369038 +0000 UTC m=+0.678042695 container remove 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:12:30 compute-0 systemd[1]: libpod-conmon-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope: Deactivated successfully.
Oct 07 15:12:30 compute-0 podman[446070]: 2025-10-07 15:12:30.383714351 +0000 UTC m=+0.041509233 container create f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:12:30 compute-0 systemd[1]: Started libpod-conmon-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope.
Oct 07 15:12:30 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:30 compute-0 podman[446070]: 2025-10-07 15:12:30.365243384 +0000 UTC m=+0.023038286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:30 compute-0 podman[446070]: 2025-10-07 15:12:30.478512474 +0000 UTC m=+0.136307356 container init f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:12:30 compute-0 podman[446070]: 2025-10-07 15:12:30.488023254 +0000 UTC m=+0.145818126 container start f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:12:30 compute-0 podman[446070]: 2025-10-07 15:12:30.492815431 +0000 UTC m=+0.150610333 container attach f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:12:30 compute-0 ceph-mon[74295]: pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:31 compute-0 sweet_sanderson[446086]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:12:31 compute-0 sweet_sanderson[446086]: --> relative data size: 1.0
Oct 07 15:12:31 compute-0 sweet_sanderson[446086]: --> All data devices are unavailable
Oct 07 15:12:31 compute-0 systemd[1]: libpod-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Deactivated successfully.
Oct 07 15:12:31 compute-0 systemd[1]: libpod-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Consumed 1.085s CPU time.
Oct 07 15:12:31 compute-0 podman[446070]: 2025-10-07 15:12:31.622310109 +0000 UTC m=+1.280105011 container died f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8-merged.mount: Deactivated successfully.
Oct 07 15:12:31 compute-0 podman[446070]: 2025-10-07 15:12:31.704546822 +0000 UTC m=+1.362341714 container remove f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:12:31 compute-0 systemd[1]: libpod-conmon-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Deactivated successfully.
Oct 07 15:12:31 compute-0 sudo[445962]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:31 compute-0 sudo[446128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:31 compute-0 sudo[446128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:31 compute-0 sudo[446128]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:31 compute-0 nova_compute[259550]: 2025-10-07 15:12:31.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:31 compute-0 sudo[446153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:12:31 compute-0 sudo[446153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:31 compute-0 sudo[446153]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:32 compute-0 nova_compute[259550]: 2025-10-07 15:12:32.001 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:32 compute-0 sudo[446178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:32 compute-0 sudo[446178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:32 compute-0 sudo[446178]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:32 compute-0 sudo[446203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:12:32 compute-0 sudo[446203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.47281187 +0000 UTC m=+0.037083037 container create 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:12:32 compute-0 systemd[1]: Started libpod-conmon-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope.
Oct 07 15:12:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.540976022 +0000 UTC m=+0.105247229 container init 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.551167681 +0000 UTC m=+0.115438868 container start 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.456833109 +0000 UTC m=+0.021104306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:32 compute-0 vigorous_mirzakhani[446286]: 167 167
Oct 07 15:12:32 compute-0 systemd[1]: libpod-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope: Deactivated successfully.
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.557311712 +0000 UTC m=+0.121582909 container attach 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.558227926 +0000 UTC m=+0.122499143 container died 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 15:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-75047ea33df9ce3d036c246b2669889fc0fbbcadbf498dd6c9390ad07ac2d277-merged.mount: Deactivated successfully.
Oct 07 15:12:32 compute-0 podman[446270]: 2025-10-07 15:12:32.598910736 +0000 UTC m=+0.163181913 container remove 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:12:32 compute-0 systemd[1]: libpod-conmon-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope: Deactivated successfully.
Oct 07 15:12:32 compute-0 ceph-mon[74295]: pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:12:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:12:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:12:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:12:32 compute-0 podman[446309]: 2025-10-07 15:12:32.791523592 +0000 UTC m=+0.056741823 container create 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:12:32 compute-0 systemd[1]: Started libpod-conmon-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope.
Oct 07 15:12:32 compute-0 podman[446309]: 2025-10-07 15:12:32.763754302 +0000 UTC m=+0.028972573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:32 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:32 compute-0 podman[446309]: 2025-10-07 15:12:32.889375466 +0000 UTC m=+0.154593707 container init 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:12:32 compute-0 podman[446309]: 2025-10-07 15:12:32.897707415 +0000 UTC m=+0.162925606 container start 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:12:32 compute-0 podman[446309]: 2025-10-07 15:12:32.90170732 +0000 UTC m=+0.166925561 container attach 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:12:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]: {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     "0": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "devices": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "/dev/loop3"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             ],
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_name": "ceph_lv0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_size": "21470642176",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "name": "ceph_lv0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "tags": {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_name": "ceph",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.crush_device_class": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.encrypted": "0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_id": "0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.vdo": "0"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             },
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "vg_name": "ceph_vg0"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         }
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     ],
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     "1": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "devices": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "/dev/loop4"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             ],
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_name": "ceph_lv1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_size": "21470642176",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "name": "ceph_lv1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "tags": {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_name": "ceph",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.crush_device_class": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.encrypted": "0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_id": "1",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.vdo": "0"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             },
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "vg_name": "ceph_vg1"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         }
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     ],
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     "2": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "devices": [
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "/dev/loop5"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             ],
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_name": "ceph_lv2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_size": "21470642176",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "name": "ceph_lv2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "tags": {
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.cluster_name": "ceph",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.crush_device_class": "",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.encrypted": "0",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osd_id": "2",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:                 "ceph.vdo": "0"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             },
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "type": "block",
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:             "vg_name": "ceph_vg2"
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:         }
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]:     ]
Oct 07 15:12:33 compute-0 fervent_dhawan[446325]: }
Oct 07 15:12:33 compute-0 nova_compute[259550]: 2025-10-07 15:12:33.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:33 compute-0 podman[446309]: 2025-10-07 15:12:33.706763915 +0000 UTC m=+0.971982126 container died 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 15:12:33 compute-0 systemd[1]: libpod-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope: Deactivated successfully.
Oct 07 15:12:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:12:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945-merged.mount: Deactivated successfully.
Oct 07 15:12:33 compute-0 podman[446309]: 2025-10-07 15:12:33.773136871 +0000 UTC m=+1.038355062 container remove 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:12:33 compute-0 systemd[1]: libpod-conmon-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope: Deactivated successfully.
Oct 07 15:12:33 compute-0 sudo[446203]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:33 compute-0 sudo[446347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:33 compute-0 sudo[446347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:33 compute-0 sudo[446347]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:33 compute-0 sudo[446372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:12:33 compute-0 sudo[446372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:33 compute-0 sudo[446372]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:34 compute-0 sudo[446397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:34 compute-0 sudo[446397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:34 compute-0 sudo[446397]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:34 compute-0 sudo[446422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:12:34 compute-0 sudo[446422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.460695385 +0000 UTC m=+0.039217192 container create 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:12:34 compute-0 systemd[1]: Started libpod-conmon-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope.
Oct 07 15:12:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.444581392 +0000 UTC m=+0.023103219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.552662045 +0000 UTC m=+0.131183932 container init 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:12:34 compute-0 nova_compute[259550]: 2025-10-07 15:12:34.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.562078523 +0000 UTC m=+0.140600330 container start 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.566190661 +0000 UTC m=+0.144712488 container attach 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:12:34 compute-0 reverent_newton[446505]: 167 167
Oct 07 15:12:34 compute-0 systemd[1]: libpod-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope: Deactivated successfully.
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.570031822 +0000 UTC m=+0.148553629 container died 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 15:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e343dce7938a745ec27efa8d67e94a11ddfd2f501cde5034dcf5cb67ded4b9-merged.mount: Deactivated successfully.
Oct 07 15:12:34 compute-0 podman[446489]: 2025-10-07 15:12:34.609785567 +0000 UTC m=+0.188307374 container remove 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 15:12:34 compute-0 systemd[1]: libpod-conmon-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope: Deactivated successfully.
Oct 07 15:12:34 compute-0 ceph-mon[74295]: pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:34 compute-0 podman[446529]: 2025-10-07 15:12:34.808211217 +0000 UTC m=+0.051500266 container create a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 15:12:34 compute-0 systemd[1]: Started libpod-conmon-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope.
Oct 07 15:12:34 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:12:34 compute-0 podman[446529]: 2025-10-07 15:12:34.787804339 +0000 UTC m=+0.031093388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:12:34 compute-0 podman[446529]: 2025-10-07 15:12:34.914143582 +0000 UTC m=+0.157432631 container init a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:12:34 compute-0 podman[446529]: 2025-10-07 15:12:34.920634053 +0000 UTC m=+0.163923082 container start a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:12:34 compute-0 podman[446529]: 2025-10-07 15:12:34.923795396 +0000 UTC m=+0.167084435 container attach a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 15:12:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:36 compute-0 loving_einstein[446545]: {
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_id": 2,
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "type": "bluestore"
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     },
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_id": 1,
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "type": "bluestore"
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     },
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_id": 0,
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:12:36 compute-0 loving_einstein[446545]:         "type": "bluestore"
Oct 07 15:12:36 compute-0 loving_einstein[446545]:     }
Oct 07 15:12:36 compute-0 loving_einstein[446545]: }
Oct 07 15:12:36 compute-0 systemd[1]: libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Deactivated successfully.
Oct 07 15:12:36 compute-0 systemd[1]: libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Consumed 1.133s CPU time.
Oct 07 15:12:36 compute-0 conmon[446545]: conmon a3c3653222408e514d9e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope/container/memory.events
Oct 07 15:12:36 compute-0 podman[446578]: 2025-10-07 15:12:36.09757023 +0000 UTC m=+0.035749211 container died a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:12:36 compute-0 ceph-mon[74295]: pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd-merged.mount: Deactivated successfully.
Oct 07 15:12:36 compute-0 podman[446578]: 2025-10-07 15:12:36.979475806 +0000 UTC m=+0.917654717 container remove a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 07 15:12:36 compute-0 systemd[1]: libpod-conmon-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Deactivated successfully.
Oct 07 15:12:37 compute-0 sudo[446422]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:12:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:12:37 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 275c5d69-2f59-4030-810a-39fe2617190f does not exist
Oct 07 15:12:37 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d5cfb0ce-2bcb-4c9a-a50d-81ecf9ec5870 does not exist
Oct 07 15:12:37 compute-0 sudo[446593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:12:37 compute-0 sudo[446593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:37 compute-0 sudo[446593]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:37 compute-0 sudo[446618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:12:37 compute-0 sudo[446618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:12:37 compute-0 sudo[446618]: pam_unix(sudo:session): session closed for user root
Oct 07 15:12:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:37 compute-0 nova_compute[259550]: 2025-10-07 15:12:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:37 compute-0 nova_compute[259550]: 2025-10-07 15:12:37.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 15:12:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:38 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:12:38 compute-0 ceph-mon[74295]: pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:38 compute-0 nova_compute[259550]: 2025-10-07 15:12:38.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:39 compute-0 podman[446643]: 2025-10-07 15:12:39.121155748 +0000 UTC m=+0.105548337 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 07 15:12:39 compute-0 podman[446644]: 2025-10-07 15:12:39.136673096 +0000 UTC m=+0.118610851 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 15:12:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:39 compute-0 nova_compute[259550]: 2025-10-07 15:12:39.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:40 compute-0 ceph-mon[74295]: pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.057 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.059 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:12:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:12:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266480678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:12:42 compute-0 ceph-mon[74295]: pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3266480678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.516 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.704 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.707 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3606MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.707 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.708 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:12:42 compute-0 nova_compute[259550]: 2025-10-07 15:12:42.810 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:12:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:12:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337520255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.286 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.293 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.307 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.309 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.309 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:12:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:43 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/337520255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:12:43 compute-0 nova_compute[259550]: 2025-10-07 15:12:43.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:44 compute-0 ceph-mon[74295]: pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:44 compute-0 nova_compute[259550]: 2025-10-07 15:12:44.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:45 compute-0 nova_compute[259550]: 2025-10-07 15:12:45.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:45 compute-0 nova_compute[259550]: 2025-10-07 15:12:45.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:12:45 compute-0 nova_compute[259550]: 2025-10-07 15:12:45.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.019 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.020 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.034 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.042 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.042 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.042 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Removable base files: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 07 15:12:46 compute-0 nova_compute[259550]: 2025-10-07 15:12:46.044 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 07 15:12:46 compute-0 ceph-mon[74295]: pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:47 compute-0 nova_compute[259550]: 2025-10-07 15:12:47.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:12:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:48 compute-0 ceph-mon[74295]: pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:48 compute-0 nova_compute[259550]: 2025-10-07 15:12:48.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:49 compute-0 nova_compute[259550]: 2025-10-07 15:12:49.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:50 compute-0 ceph-mon[74295]: pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:52 compute-0 ceph-mon[74295]: pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:12:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:12:53 compute-0 podman[446733]: 2025-10-07 15:12:53.079486737 +0000 UTC m=+0.067624839 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:12:53 compute-0 podman[446734]: 2025-10-07 15:12:53.099480274 +0000 UTC m=+0.074924833 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:12:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:53 compute-0 nova_compute[259550]: 2025-10-07 15:12:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:54 compute-0 nova_compute[259550]: 2025-10-07 15:12:54.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:54 compute-0 ceph-mon[74295]: pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:56 compute-0 ceph-mon[74295]: pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:12:58 compute-0 ceph-mon[74295]: pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:58 compute-0 nova_compute[259550]: 2025-10-07 15:12:58.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:12:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:12:59 compute-0 nova_compute[259550]: 2025-10-07 15:12:59.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:13:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:13:00 compute-0 ceph-mon[74295]: pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:02 compute-0 ceph-mon[74295]: pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:03 compute-0 nova_compute[259550]: 2025-10-07 15:13:03.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:04 compute-0 nova_compute[259550]: 2025-10-07 15:13:04.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:04 compute-0 ceph-mon[74295]: pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:06 compute-0 ceph-mon[74295]: pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:08 compute-0 nova_compute[259550]: 2025-10-07 15:13:08.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:08 compute-0 ceph-mon[74295]: pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:09 compute-0 nova_compute[259550]: 2025-10-07 15:13:09.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:10 compute-0 podman[446776]: 2025-10-07 15:13:10.075138616 +0000 UTC m=+0.058669495 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 15:13:10 compute-0 podman[446777]: 2025-10-07 15:13:10.148248048 +0000 UTC m=+0.119970266 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 07 15:13:11 compute-0 ceph-mon[74295]: pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:12 compute-0 ceph-mon[74295]: pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:13 compute-0 nova_compute[259550]: 2025-10-07 15:13:13.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:14 compute-0 ceph-mon[74295]: pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:14 compute-0 nova_compute[259550]: 2025-10-07 15:13:14.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:16 compute-0 ceph-mon[74295]: pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:17 compute-0 nova_compute[259550]: 2025-10-07 15:13:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:18 compute-0 nova_compute[259550]: 2025-10-07 15:13:18.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:18 compute-0 ceph-mon[74295]: pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:19 compute-0 nova_compute[259550]: 2025-10-07 15:13:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:20 compute-0 ceph-mon[74295]: pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:13:22
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', '.mgr', 'default.rgw.control']
Oct 07 15:13:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:13:22 compute-0 ceph-mon[74295]: pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:13:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:13:23 compute-0 nova_compute[259550]: 2025-10-07 15:13:23.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:23 compute-0 nova_compute[259550]: 2025-10-07 15:13:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:23 compute-0 nova_compute[259550]: 2025-10-07 15:13:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:13:24 compute-0 podman[446824]: 2025-10-07 15:13:24.076804395 +0000 UTC m=+0.065997627 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:13:24 compute-0 podman[446825]: 2025-10-07 15:13:24.098343281 +0000 UTC m=+0.085068608 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:13:24 compute-0 nova_compute[259550]: 2025-10-07 15:13:24.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:24 compute-0 ceph-mon[74295]: pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:24 compute-0 nova_compute[259550]: 2025-10-07 15:13:24.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:25 compute-0 nova_compute[259550]: 2025-10-07 15:13:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:26 compute-0 ceph-mon[74295]: pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:27 compute-0 nova_compute[259550]: 2025-10-07 15:13:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:28 compute-0 nova_compute[259550]: 2025-10-07 15:13:28.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:28 compute-0 ceph-mon[74295]: pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:29 compute-0 nova_compute[259550]: 2025-10-07 15:13:29.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:29 compute-0 nova_compute[259550]: 2025-10-07 15:13:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:30 compute-0 ceph-mon[74295]: pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:13:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:13:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:13:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:13:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:13:32 compute-0 ceph-mon[74295]: pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:13:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:13:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:13:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:13:33 compute-0 nova_compute[259550]: 2025-10-07 15:13:33.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:33 compute-0 ceph-mon[74295]: pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:13:34 compute-0 nova_compute[259550]: 2025-10-07 15:13:34.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct 07 15:13:36 compute-0 ceph-mon[74295]: pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct 07 15:13:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:37 compute-0 sudo[446865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:37 compute-0 sudo[446865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:37 compute-0 sudo[446865]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:37 compute-0 sudo[446890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:13:37 compute-0 sudo[446890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:37 compute-0 sudo[446890]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:37 compute-0 sudo[446915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:37 compute-0 sudo[446915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:37 compute-0 sudo[446915]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:37 compute-0 sudo[446940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:13:37 compute-0 sudo[446940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:38 compute-0 ceph-mon[74295]: pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:38 compute-0 sudo[446940]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 27a4af63-00fe-4fa6-8b8e-b5ccbcc9eda0 does not exist
Oct 07 15:13:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bae9f2e6-18fc-4994-82fd-96b1dd2a15ed does not exist
Oct 07 15:13:38 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 47900388-6140-401a-a348-aa847bfba54b does not exist
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:13:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:13:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:13:38 compute-0 sudo[446996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:38 compute-0 sudo[446996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:38 compute-0 sudo[446996]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:38 compute-0 sudo[447021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:13:38 compute-0 sudo[447021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:38 compute-0 sudo[447021]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:38 compute-0 sudo[447046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:38 compute-0 sudo[447046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:38 compute-0 sudo[447046]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:38 compute-0 nova_compute[259550]: 2025-10-07 15:13:38.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:38 compute-0 sudo[447071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:13:38 compute-0 sudo[447071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.110375095 +0000 UTC m=+0.051674030 container create d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:13:39 compute-0 systemd[1]: Started libpod-conmon-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope.
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.083637342 +0000 UTC m=+0.024936357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.210759275 +0000 UTC m=+0.152058230 container init d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.222893894 +0000 UTC m=+0.164192859 container start d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.228114171 +0000 UTC m=+0.169413146 container attach d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:13:39 compute-0 nostalgic_lalande[447153]: 167 167
Oct 07 15:13:39 compute-0 systemd[1]: libpod-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope: Deactivated successfully.
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.232113487 +0000 UTC m=+0.173412412 container died d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 07 15:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5827ba1752a225ba69277638b5d59b7b9336f48514004fe1e6e4a4698c0d916a-merged.mount: Deactivated successfully.
Oct 07 15:13:39 compute-0 podman[447137]: 2025-10-07 15:13:39.299365275 +0000 UTC m=+0.240664200 container remove d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:13:39 compute-0 systemd[1]: libpod-conmon-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope: Deactivated successfully.
Oct 07 15:13:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:13:39 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:13:39 compute-0 podman[447179]: 2025-10-07 15:13:39.505153408 +0000 UTC m=+0.055214853 container create e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:13:39 compute-0 systemd[1]: Started libpod-conmon-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope.
Oct 07 15:13:39 compute-0 podman[447179]: 2025-10-07 15:13:39.479495733 +0000 UTC m=+0.029557168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:39 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:39 compute-0 nova_compute[259550]: 2025-10-07 15:13:39.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:39 compute-0 podman[447179]: 2025-10-07 15:13:39.624351342 +0000 UTC m=+0.174412797 container init e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:13:39 compute-0 podman[447179]: 2025-10-07 15:13:39.636843411 +0000 UTC m=+0.186904866 container start e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 15:13:39 compute-0 podman[447179]: 2025-10-07 15:13:39.64326052 +0000 UTC m=+0.193322015 container attach e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 15:13:40 compute-0 ceph-mon[74295]: pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:40 compute-0 magical_archimedes[447195]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:13:40 compute-0 magical_archimedes[447195]: --> relative data size: 1.0
Oct 07 15:13:40 compute-0 magical_archimedes[447195]: --> All data devices are unavailable
Oct 07 15:13:40 compute-0 systemd[1]: libpod-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Deactivated successfully.
Oct 07 15:13:40 compute-0 systemd[1]: libpod-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Consumed 1.103s CPU time.
Oct 07 15:13:40 compute-0 podman[447179]: 2025-10-07 15:13:40.781980461 +0000 UTC m=+1.332041886 container died e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2-merged.mount: Deactivated successfully.
Oct 07 15:13:40 compute-0 podman[447225]: 2025-10-07 15:13:40.896957575 +0000 UTC m=+0.081313040 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 07 15:13:40 compute-0 podman[447179]: 2025-10-07 15:13:40.918194633 +0000 UTC m=+1.468256058 container remove e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 15:13:40 compute-0 systemd[1]: libpod-conmon-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Deactivated successfully.
Oct 07 15:13:40 compute-0 podman[447227]: 2025-10-07 15:13:40.947973837 +0000 UTC m=+0.131131130 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:13:40 compute-0 sudo[447071]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:41 compute-0 sudo[447283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:41 compute-0 sudo[447283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:41 compute-0 sudo[447283]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:41 compute-0 sudo[447308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:13:41 compute-0 sudo[447308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:41 compute-0 sudo[447308]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:41 compute-0 sudo[447333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:41 compute-0 sudo[447333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:41 compute-0 sudo[447333]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:41 compute-0 sudo[447358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:13:41 compute-0 sudo[447358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.688640299 +0000 UTC m=+0.095765561 container create af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.622056317 +0000 UTC m=+0.029181569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:41 compute-0 systemd[1]: Started libpod-conmon-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope.
Oct 07 15:13:41 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.817329373 +0000 UTC m=+0.224454685 container init af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.82480761 +0000 UTC m=+0.231932832 container start af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.829259097 +0000 UTC m=+0.236384429 container attach af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:13:41 compute-0 upbeat_feynman[447441]: 167 167
Oct 07 15:13:41 compute-0 systemd[1]: libpod-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope: Deactivated successfully.
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.835923423 +0000 UTC m=+0.243048645 container died af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 07 15:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ff6a829b4cc89b6ea19893e3b071e1c25fef49f5aa471aed1983f171d460e32-merged.mount: Deactivated successfully.
Oct 07 15:13:41 compute-0 podman[447425]: 2025-10-07 15:13:41.882696312 +0000 UTC m=+0.289821544 container remove af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 07 15:13:41 compute-0 systemd[1]: libpod-conmon-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope: Deactivated successfully.
Oct 07 15:13:42 compute-0 podman[447464]: 2025-10-07 15:13:42.123441414 +0000 UTC m=+0.066948862 container create 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:13:42 compute-0 systemd[1]: Started libpod-conmon-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope.
Oct 07 15:13:42 compute-0 podman[447464]: 2025-10-07 15:13:42.095613292 +0000 UTC m=+0.039120790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:42 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:42 compute-0 podman[447464]: 2025-10-07 15:13:42.221177375 +0000 UTC m=+0.164684883 container init 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 15:13:42 compute-0 podman[447464]: 2025-10-07 15:13:42.22820845 +0000 UTC m=+0.171715888 container start 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:13:42 compute-0 podman[447464]: 2025-10-07 15:13:42.233037057 +0000 UTC m=+0.176544465 container attach 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 07 15:13:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:42 compute-0 ceph-mon[74295]: pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:13:42 compute-0 boring_easley[447481]: {
Oct 07 15:13:42 compute-0 boring_easley[447481]:     "0": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:         {
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "devices": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "/dev/loop3"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             ],
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_name": "ceph_lv0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_size": "21470642176",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "name": "ceph_lv0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "tags": {
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_name": "ceph",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.crush_device_class": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.encrypted": "0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_id": "0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.vdo": "0"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             },
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "vg_name": "ceph_vg0"
Oct 07 15:13:42 compute-0 boring_easley[447481]:         }
Oct 07 15:13:42 compute-0 boring_easley[447481]:     ],
Oct 07 15:13:42 compute-0 boring_easley[447481]:     "1": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:         {
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "devices": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "/dev/loop4"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             ],
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_name": "ceph_lv1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_size": "21470642176",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "name": "ceph_lv1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "tags": {
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_name": "ceph",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.crush_device_class": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.encrypted": "0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_id": "1",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.vdo": "0"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             },
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "vg_name": "ceph_vg1"
Oct 07 15:13:42 compute-0 boring_easley[447481]:         }
Oct 07 15:13:42 compute-0 boring_easley[447481]:     ],
Oct 07 15:13:42 compute-0 boring_easley[447481]:     "2": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:         {
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "devices": [
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "/dev/loop5"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             ],
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_name": "ceph_lv2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_size": "21470642176",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "name": "ceph_lv2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "tags": {
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.cluster_name": "ceph",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.crush_device_class": "",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.encrypted": "0",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osd_id": "2",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:                 "ceph.vdo": "0"
Oct 07 15:13:42 compute-0 boring_easley[447481]:             },
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "type": "block",
Oct 07 15:13:42 compute-0 boring_easley[447481]:             "vg_name": "ceph_vg2"
Oct 07 15:13:42 compute-0 boring_easley[447481]:         }
Oct 07 15:13:42 compute-0 boring_easley[447481]:     ]
Oct 07 15:13:42 compute-0 boring_easley[447481]: }
Oct 07 15:13:43 compute-0 systemd[1]: libpod-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope: Deactivated successfully.
Oct 07 15:13:43 compute-0 podman[447464]: 2025-10-07 15:13:43.015455066 +0000 UTC m=+0.958962474 container died 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d-merged.mount: Deactivated successfully.
Oct 07 15:13:43 compute-0 podman[447464]: 2025-10-07 15:13:43.071657784 +0000 UTC m=+1.015165182 container remove 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:13:43 compute-0 systemd[1]: libpod-conmon-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope: Deactivated successfully.
Oct 07 15:13:43 compute-0 sudo[447358]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:43 compute-0 sudo[447501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:43 compute-0 sudo[447501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:43 compute-0 sudo[447501]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:43 compute-0 sudo[447526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:13:43 compute-0 sudo[447526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:43 compute-0 sudo[447526]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:43 compute-0 sudo[447551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:43 compute-0 sudo[447551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:43 compute-0 sudo[447551]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:13:43 compute-0 sudo[447576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:13:43 compute-0 sudo[447576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:43 compute-0 nova_compute[259550]: 2025-10-07 15:13:43.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.742259463 +0000 UTC m=+0.040939678 container create 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 07 15:13:43 compute-0 systemd[1]: Started libpod-conmon-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope.
Oct 07 15:13:43 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.81627565 +0000 UTC m=+0.114955855 container init 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.7227604 +0000 UTC m=+0.021440595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.823992453 +0000 UTC m=+0.122672628 container start 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.827663249 +0000 UTC m=+0.126343444 container attach 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:13:43 compute-0 angry_noether[447658]: 167 167
Oct 07 15:13:43 compute-0 systemd[1]: libpod-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope: Deactivated successfully.
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.833351669 +0000 UTC m=+0.132031874 container died 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1b1952cc0103757621f89e77937f113737cd21fc52bc1845e70c49aa0b01052-merged.mount: Deactivated successfully.
Oct 07 15:13:43 compute-0 podman[447642]: 2025-10-07 15:13:43.873342281 +0000 UTC m=+0.172022456 container remove 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 15:13:43 compute-0 systemd[1]: libpod-conmon-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope: Deactivated successfully.
Oct 07 15:13:43 compute-0 nova_compute[259550]: 2025-10-07 15:13:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:13:44 compute-0 podman[447683]: 2025-10-07 15:13:44.048014696 +0000 UTC m=+0.050340765 container create c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:13:44 compute-0 systemd[1]: Started libpod-conmon-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope.
Oct 07 15:13:44 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:13:44 compute-0 podman[447683]: 2025-10-07 15:13:44.031719377 +0000 UTC m=+0.034045456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:13:44 compute-0 podman[447683]: 2025-10-07 15:13:44.14360276 +0000 UTC m=+0.145928829 container init c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:13:44 compute-0 podman[447683]: 2025-10-07 15:13:44.151717633 +0000 UTC m=+0.154043702 container start c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:13:44 compute-0 podman[447683]: 2025-10-07 15:13:44.155875043 +0000 UTC m=+0.158201102 container attach c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:13:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:13:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.470 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:13:44 compute-0 ceph-mon[74295]: pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:13:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1751947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.683 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:13:44 compute-0 nova_compute[259550]: 2025-10-07 15:13:44.772 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]: {
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_id": 2,
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "type": "bluestore"
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     },
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_id": 1,
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "type": "bluestore"
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     },
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_id": 0,
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:         "type": "bluestore"
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]:     }
Oct 07 15:13:45 compute-0 pedantic_wiles[447701]: }
Oct 07 15:13:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:13:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:13:45 compute-0 nova_compute[259550]: 2025-10-07 15:13:45.233 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:13:45 compute-0 nova_compute[259550]: 2025-10-07 15:13:45.240 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:13:45 compute-0 systemd[1]: libpod-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Deactivated successfully.
Oct 07 15:13:45 compute-0 systemd[1]: libpod-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Consumed 1.078s CPU time.
Oct 07 15:13:45 compute-0 nova_compute[259550]: 2025-10-07 15:13:45.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:13:45 compute-0 nova_compute[259550]: 2025-10-07 15:13:45.263 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:13:45 compute-0 nova_compute[259550]: 2025-10-07 15:13:45.263 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:13:45 compute-0 podman[447777]: 2025-10-07 15:13:45.299327418 +0000 UTC m=+0.036507511 container died c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:13:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86-merged.mount: Deactivated successfully.
Oct 07 15:13:45 compute-0 podman[447777]: 2025-10-07 15:13:45.4244649 +0000 UTC m=+0.161644943 container remove c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:13:45 compute-0 systemd[1]: libpod-conmon-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Deactivated successfully.
Oct 07 15:13:45 compute-0 sudo[447576]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:13:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:13:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 9ea6d768-0976-4a38-8f17-be34405c2e8b does not exist
Oct 07 15:13:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 04f05371-d51d-4f3a-9ef8-8c8d43ef1f82 does not exist
Oct 07 15:13:45 compute-0 sudo[447792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:13:45 compute-0 sudo[447792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:45 compute-0 sudo[447792]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2691601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:13:45 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:45 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:13:45 compute-0 sudo[447817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:13:45 compute-0 sudo[447817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:13:45 compute-0 sudo[447817]: pam_unix(sudo:session): session closed for user root
Oct 07 15:13:46 compute-0 ceph-mon[74295]: pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:13:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct 07 15:13:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.264 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.265 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.265 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.286 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:13:48 compute-0 ceph-mon[74295]: pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:48 compute-0 nova_compute[259550]: 2025-10-07 15:13:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:13:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:49 compute-0 nova_compute[259550]: 2025-10-07 15:13:49.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:50 compute-0 ceph-mon[74295]: pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:13:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:13:52 compute-0 ceph-mon[74295]: pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:53 compute-0 nova_compute[259550]: 2025-10-07 15:13:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:54 compute-0 ceph-mon[74295]: pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:54 compute-0 nova_compute[259550]: 2025-10-07 15:13:54.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:55 compute-0 podman[447842]: 2025-10-07 15:13:55.106022127 +0000 UTC m=+0.090148112 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 15:13:55 compute-0 podman[447843]: 2025-10-07 15:13:55.112279262 +0000 UTC m=+0.088584381 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:13:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:56 compute-0 ceph-mon[74295]: pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:13:58 compute-0 ceph-mon[74295]: pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:58 compute-0 nova_compute[259550]: 2025-10-07 15:13:58.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:13:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:13:59 compute-0 nova_compute[259550]: 2025-10-07 15:13:59.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.116 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.116 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:14:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:14:00 compute-0 ceph-mon[74295]: pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:02 compute-0 ceph-mon[74295]: pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:03 compute-0 nova_compute[259550]: 2025-10-07 15:14:03.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:04 compute-0 ceph-mon[74295]: pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:04 compute-0 nova_compute[259550]: 2025-10-07 15:14:04.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:06 compute-0 ceph-mon[74295]: pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:08 compute-0 ceph-mon[74295]: pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:08 compute-0 nova_compute[259550]: 2025-10-07 15:14:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:09 compute-0 nova_compute[259550]: 2025-10-07 15:14:09.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:10 compute-0 ceph-mon[74295]: pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:11 compute-0 podman[447884]: 2025-10-07 15:14:11.079771487 +0000 UTC m=+0.067211339 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:14:11 compute-0 podman[447885]: 2025-10-07 15:14:11.12208036 +0000 UTC m=+0.108164716 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:14:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:12 compute-0 ceph-mon[74295]: pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:13 compute-0 nova_compute[259550]: 2025-10-07 15:14:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:14 compute-0 ceph-mon[74295]: pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:14 compute-0 nova_compute[259550]: 2025-10-07 15:14:14.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:16 compute-0 ceph-mon[74295]: pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.712630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057712675, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 251, "total_data_size": 1740474, "memory_usage": 1770592, "flush_reason": "Manual Compaction"}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057729296, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1713048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69260, "largest_seqno": 70418, "table_properties": {"data_size": 1707407, "index_size": 3035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11816, "raw_average_key_size": 19, "raw_value_size": 1696209, "raw_average_value_size": 2836, "num_data_blocks": 136, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849943, "oldest_key_time": 1759849943, "file_creation_time": 1759850057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 16715 microseconds, and 4844 cpu microseconds.
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.729340) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1713048 bytes OK
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.729364) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736345) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736363) EVENT_LOG_v1 {"time_micros": 1759850057736357, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1735158, prev total WAL file size 1735158, number of live WAL files 2.
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.737231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1672KB)], [164(9328KB)]
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057737271, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11265088, "oldest_snapshot_seqno": -1}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8743 keys, 9492209 bytes, temperature: kUnknown
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057810171, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9492209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9438392, "index_size": 30856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 229210, "raw_average_key_size": 26, "raw_value_size": 9286884, "raw_average_value_size": 1062, "num_data_blocks": 1187, "num_entries": 8743, "num_filter_entries": 8743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.810574) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9492209 bytes
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.816332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 9257, records dropped: 514 output_compression: NoCompression
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.816376) EVENT_LOG_v1 {"time_micros": 1759850057816362, "job": 102, "event": "compaction_finished", "compaction_time_micros": 73031, "compaction_time_cpu_micros": 25310, "output_level": 6, "num_output_files": 1, "total_output_size": 9492209, "num_input_records": 9257, "num_output_records": 8743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057817154, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057820606, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.737141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:14:17 compute-0 nova_compute[259550]: 2025-10-07 15:14:17.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:18 compute-0 ceph-mon[74295]: pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:18 compute-0 nova_compute[259550]: 2025-10-07 15:14:18.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:19 compute-0 nova_compute[259550]: 2025-10-07 15:14:19.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:20 compute-0 ceph-mon[74295]: pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:22 compute-0 ceph-mon[74295]: pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:14:22
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', '.mgr', '.rgw.root']
Oct 07 15:14:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:14:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:23 compute-0 nova_compute[259550]: 2025-10-07 15:14:23.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:24 compute-0 nova_compute[259550]: 2025-10-07 15:14:24.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:24 compute-0 ceph-mon[74295]: pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:24 compute-0 nova_compute[259550]: 2025-10-07 15:14:24.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:24 compute-0 nova_compute[259550]: 2025-10-07 15:14:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:24 compute-0 nova_compute[259550]: 2025-10-07 15:14:24.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:14:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:26 compute-0 podman[447928]: 2025-10-07 15:14:26.067886694 +0000 UTC m=+0.053277872 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 07 15:14:26 compute-0 podman[447927]: 2025-10-07 15:14:26.068302185 +0000 UTC m=+0.057118064 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 07 15:14:26 compute-0 ceph-mon[74295]: pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:27 compute-0 nova_compute[259550]: 2025-10-07 15:14:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:28 compute-0 nova_compute[259550]: 2025-10-07 15:14:28.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:28 compute-0 nova_compute[259550]: 2025-10-07 15:14:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:29 compute-0 ceph-mon[74295]: pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:29 compute-0 nova_compute[259550]: 2025-10-07 15:14:29.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:29 compute-0 nova_compute[259550]: 2025-10-07 15:14:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:30 compute-0 ceph-mon[74295]: pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:32 compute-0 ceph-mon[74295]: pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:14:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:14:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:14:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:14:32 compute-0 nova_compute[259550]: 2025-10-07 15:14:32.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:14:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:14:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:14:33 compute-0 nova_compute[259550]: 2025-10-07 15:14:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:34 compute-0 ceph-mon[74295]: pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:34 compute-0 nova_compute[259550]: 2025-10-07 15:14:34.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:36 compute-0 ceph-mon[74295]: pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:38 compute-0 ceph-mon[74295]: pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:38 compute-0 nova_compute[259550]: 2025-10-07 15:14:38.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:39 compute-0 nova_compute[259550]: 2025-10-07 15:14:39.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:40 compute-0 ceph-mon[74295]: pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:42 compute-0 podman[447969]: 2025-10-07 15:14:42.105056432 +0000 UTC m=+0.076571685 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 07 15:14:42 compute-0 podman[447970]: 2025-10-07 15:14:42.171631343 +0000 UTC m=+0.136747757 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 07 15:14:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:42 compute-0 ceph-mon[74295]: pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:43 compute-0 nova_compute[259550]: 2025-10-07 15:14:43.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:44 compute-0 ceph-mon[74295]: pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:44 compute-0 nova_compute[259550]: 2025-10-07 15:14:44.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:44 compute-0 nova_compute[259550]: 2025-10-07 15:14:44.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:14:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:14:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797503227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.447 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:14:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3797503227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.639 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.641 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.642 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:14:45 compute-0 nova_compute[259550]: 2025-10-07 15:14:45.642 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:14:45 compute-0 sudo[448034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:45 compute-0 sudo[448034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:45 compute-0 sudo[448034]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:45 compute-0 sudo[448059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:14:45 compute-0 sudo[448059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:45 compute-0 sudo[448059]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:45 compute-0 sudo[448084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:45 compute-0 sudo[448084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:45 compute-0 sudo[448084]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:45 compute-0 sudo[448109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:14:45 compute-0 sudo[448109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.051 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.051 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.158 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.292 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.292 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.312 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.334 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.352 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:14:46 compute-0 sudo[448109]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 428c4d31-a3ee-48bc-9655-86899c7a158a does not exist
Oct 07 15:14:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c7cb0c7b-078b-47ab-af4a-17413a045f22 does not exist
Oct 07 15:14:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b1021ced-d779-4f7c-b346-948dcbf39a0d does not exist
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:14:46 compute-0 sudo[448166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:46 compute-0 sudo[448166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:46 compute-0 ceph-mon[74295]: pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:14:46 compute-0 sudo[448166]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:14:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:14:46 compute-0 sudo[448210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:14:46 compute-0 sudo[448210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:46 compute-0 sudo[448210]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:46 compute-0 sudo[448235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:46 compute-0 sudo[448235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:46 compute-0 sudo[448235]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:46 compute-0 sudo[448260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:14:46 compute-0 sudo[448260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:14:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330554311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.955 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.961 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.981 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.983 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:14:46 compute-0 nova_compute[259550]: 2025-10-07 15:14:46.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.172310293 +0000 UTC m=+0.051626779 container create f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 15:14:47 compute-0 systemd[1]: Started libpod-conmon-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope.
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.146597046 +0000 UTC m=+0.025913522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.275463896 +0000 UTC m=+0.154780422 container init f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.284286919 +0000 UTC m=+0.163603405 container start f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.289611699 +0000 UTC m=+0.168928205 container attach f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:14:47 compute-0 serene_napier[448346]: 167 167
Oct 07 15:14:47 compute-0 systemd[1]: libpod-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope: Deactivated successfully.
Oct 07 15:14:47 compute-0 conmon[448346]: conmon f2248a036b7c241101ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope/container/memory.events
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.294190309 +0000 UTC m=+0.173506795 container died f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3c79e2cbeb5af053de20f6e2e38b1d33c76fdd7a41b39c8df870a9d98cb14f-merged.mount: Deactivated successfully.
Oct 07 15:14:47 compute-0 podman[448330]: 2025-10-07 15:14:47.343177058 +0000 UTC m=+0.222493514 container remove f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:14:47 compute-0 systemd[1]: libpod-conmon-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope: Deactivated successfully.
Oct 07 15:14:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:47 compute-0 podman[448370]: 2025-10-07 15:14:47.563200095 +0000 UTC m=+0.061287694 container create 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 07 15:14:47 compute-0 systemd[1]: Started libpod-conmon-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope.
Oct 07 15:14:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2330554311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:14:47 compute-0 podman[448370]: 2025-10-07 15:14:47.545076598 +0000 UTC m=+0.043164177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:47 compute-0 podman[448370]: 2025-10-07 15:14:47.665371972 +0000 UTC m=+0.163459541 container init 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 15:14:47 compute-0 podman[448370]: 2025-10-07 15:14:47.673091135 +0000 UTC m=+0.171178694 container start 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:14:47 compute-0 podman[448370]: 2025-10-07 15:14:47.676598667 +0000 UTC m=+0.174686256 container attach 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:14:48 compute-0 ceph-mon[74295]: pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:48 compute-0 priceless_panini[448387]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:14:48 compute-0 priceless_panini[448387]: --> relative data size: 1.0
Oct 07 15:14:48 compute-0 priceless_panini[448387]: --> All data devices are unavailable
Oct 07 15:14:48 compute-0 systemd[1]: libpod-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Deactivated successfully.
Oct 07 15:14:48 compute-0 podman[448370]: 2025-10-07 15:14:48.741119207 +0000 UTC m=+1.239206776 container died 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:14:48 compute-0 systemd[1]: libpod-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Consumed 1.021s CPU time.
Oct 07 15:14:48 compute-0 nova_compute[259550]: 2025-10-07 15:14:48.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e-merged.mount: Deactivated successfully.
Oct 07 15:14:48 compute-0 podman[448370]: 2025-10-07 15:14:48.897173922 +0000 UTC m=+1.395261491 container remove 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:14:48 compute-0 systemd[1]: libpod-conmon-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Deactivated successfully.
Oct 07 15:14:48 compute-0 sudo[448260]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:48 compute-0 sudo[448429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:48 compute-0 sudo[448429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:48 compute-0 sudo[448429]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:49 compute-0 sudo[448454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:14:49 compute-0 sudo[448454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:49 compute-0 sudo[448454]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:49 compute-0 sudo[448479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:49 compute-0 sudo[448479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:49 compute-0 sudo[448479]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:49 compute-0 sudo[448504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:14:49 compute-0 sudo[448504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.574148297 +0000 UTC m=+0.058145759 container create 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:14:49 compute-0 systemd[1]: Started libpod-conmon-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope.
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.53928813 +0000 UTC m=+0.023285612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:49 compute-0 nova_compute[259550]: 2025-10-07 15:14:49.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.686960935 +0000 UTC m=+0.170958407 container init 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.696140546 +0000 UTC m=+0.180138008 container start 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:14:49 compute-0 pensive_sinoussi[448587]: 167 167
Oct 07 15:14:49 compute-0 systemd[1]: libpod-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope: Deactivated successfully.
Oct 07 15:14:49 compute-0 conmon[448587]: conmon 461e2d787392734d625d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope/container/memory.events
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.711646274 +0000 UTC m=+0.195643756 container attach 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.712291591 +0000 UTC m=+0.196289083 container died 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e0210df84fed5dbb3a0902703b33bd828ab54fd2651d3379d79ef456bc98a59-merged.mount: Deactivated successfully.
Oct 07 15:14:49 compute-0 podman[448572]: 2025-10-07 15:14:49.778104722 +0000 UTC m=+0.262102174 container remove 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:14:49 compute-0 systemd[1]: libpod-conmon-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope: Deactivated successfully.
Oct 07 15:14:50 compute-0 podman[448612]: 2025-10-07 15:14:49.953465775 +0000 UTC m=+0.027618228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:50 compute-0 podman[448612]: 2025-10-07 15:14:50.271466458 +0000 UTC m=+0.345618881 container create 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 07 15:14:50 compute-0 systemd[1]: Started libpod-conmon-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope.
Oct 07 15:14:50 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:50 compute-0 podman[448612]: 2025-10-07 15:14:50.539161049 +0000 UTC m=+0.613313542 container init 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:14:50 compute-0 podman[448612]: 2025-10-07 15:14:50.551749621 +0000 UTC m=+0.625902034 container start 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:14:50 compute-0 podman[448612]: 2025-10-07 15:14:50.569342094 +0000 UTC m=+0.643494607 container attach 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 15:14:50 compute-0 ceph-mon[74295]: pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:50 compute-0 nova_compute[259550]: 2025-10-07 15:14:50.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:50 compute-0 nova_compute[259550]: 2025-10-07 15:14:50.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:14:50 compute-0 nova_compute[259550]: 2025-10-07 15:14:50.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:14:51 compute-0 nova_compute[259550]: 2025-10-07 15:14:51.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:14:51 compute-0 nova_compute[259550]: 2025-10-07 15:14:51.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:14:51 compute-0 zen_shamir[448629]: {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     "0": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "devices": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "/dev/loop3"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             ],
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_name": "ceph_lv0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_size": "21470642176",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "name": "ceph_lv0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "tags": {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_name": "ceph",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.crush_device_class": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.encrypted": "0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_id": "0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.vdo": "0"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             },
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "vg_name": "ceph_vg0"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         }
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     ],
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     "1": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "devices": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "/dev/loop4"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             ],
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_name": "ceph_lv1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_size": "21470642176",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "name": "ceph_lv1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "tags": {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_name": "ceph",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.crush_device_class": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.encrypted": "0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_id": "1",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.vdo": "0"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             },
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "vg_name": "ceph_vg1"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         }
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     ],
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     "2": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "devices": [
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "/dev/loop5"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             ],
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_name": "ceph_lv2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_size": "21470642176",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "name": "ceph_lv2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "tags": {
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.cluster_name": "ceph",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.crush_device_class": "",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.encrypted": "0",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osd_id": "2",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:                 "ceph.vdo": "0"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             },
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "type": "block",
Oct 07 15:14:51 compute-0 zen_shamir[448629]:             "vg_name": "ceph_vg2"
Oct 07 15:14:51 compute-0 zen_shamir[448629]:         }
Oct 07 15:14:51 compute-0 zen_shamir[448629]:     ]
Oct 07 15:14:51 compute-0 zen_shamir[448629]: }
Oct 07 15:14:51 compute-0 systemd[1]: libpod-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope: Deactivated successfully.
Oct 07 15:14:51 compute-0 podman[448612]: 2025-10-07 15:14:51.322066783 +0000 UTC m=+1.396219206 container died 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550-merged.mount: Deactivated successfully.
Oct 07 15:14:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:51 compute-0 podman[448612]: 2025-10-07 15:14:51.385773578 +0000 UTC m=+1.459926001 container remove 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 15:14:51 compute-0 systemd[1]: libpod-conmon-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope: Deactivated successfully.
Oct 07 15:14:51 compute-0 sudo[448504]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:51 compute-0 sudo[448652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:51 compute-0 sudo[448652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:51 compute-0 sudo[448652]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:51 compute-0 sudo[448677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:14:51 compute-0 sudo[448677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:51 compute-0 sudo[448677]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:51 compute-0 sudo[448702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:51 compute-0 sudo[448702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:51 compute-0 sudo[448702]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:51 compute-0 sudo[448727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:14:51 compute-0 sudo[448727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.115310507 +0000 UTC m=+0.057129464 container create 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:14:52 compute-0 systemd[1]: Started libpod-conmon-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope.
Oct 07 15:14:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.095374192 +0000 UTC m=+0.037193199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.201917245 +0000 UTC m=+0.143736282 container init 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.211046115 +0000 UTC m=+0.152865062 container start 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.214569887 +0000 UTC m=+0.156388884 container attach 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 15:14:52 compute-0 hopeful_banach[448810]: 167 167
Oct 07 15:14:52 compute-0 systemd[1]: libpod-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope: Deactivated successfully.
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.217177906 +0000 UTC m=+0.158996893 container died 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac228111f78e1185cd881ba3f8ad7adc98b6b3d676d11ee522e5fb755b201bd4-merged.mount: Deactivated successfully.
Oct 07 15:14:52 compute-0 podman[448793]: 2025-10-07 15:14:52.294389707 +0000 UTC m=+0.236208674 container remove 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:14:52 compute-0 systemd[1]: libpod-conmon-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope: Deactivated successfully.
Oct 07 15:14:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:52 compute-0 podman[448833]: 2025-10-07 15:14:52.506466165 +0000 UTC m=+0.056114037 container create 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:14:52 compute-0 systemd[1]: Started libpod-conmon-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope.
Oct 07 15:14:52 compute-0 podman[448833]: 2025-10-07 15:14:52.479411983 +0000 UTC m=+0.029059915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:14:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:14:52 compute-0 podman[448833]: 2025-10-07 15:14:52.623341539 +0000 UTC m=+0.172989401 container init 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:14:52 compute-0 podman[448833]: 2025-10-07 15:14:52.634340369 +0000 UTC m=+0.183988211 container start 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:14:52 compute-0 podman[448833]: 2025-10-07 15:14:52.638332154 +0000 UTC m=+0.187980056 container attach 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 07 15:14:52 compute-0 ceph-mon[74295]: pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:14:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:14:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:53 compute-0 stoic_booth[448850]: {
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_id": 2,
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "type": "bluestore"
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     },
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_id": 1,
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "type": "bluestore"
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     },
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_id": 0,
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:14:53 compute-0 stoic_booth[448850]:         "type": "bluestore"
Oct 07 15:14:53 compute-0 stoic_booth[448850]:     }
Oct 07 15:14:53 compute-0 stoic_booth[448850]: }
Oct 07 15:14:53 compute-0 systemd[1]: libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Deactivated successfully.
Oct 07 15:14:53 compute-0 systemd[1]: libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Consumed 1.100s CPU time.
Oct 07 15:14:53 compute-0 conmon[448850]: conmon 8160dcd933f9b02de8ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope/container/memory.events
Oct 07 15:14:53 compute-0 podman[448833]: 2025-10-07 15:14:53.727382708 +0000 UTC m=+1.277030550 container died 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 07 15:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19-merged.mount: Deactivated successfully.
Oct 07 15:14:53 compute-0 nova_compute[259550]: 2025-10-07 15:14:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:53 compute-0 podman[448833]: 2025-10-07 15:14:53.818400452 +0000 UTC m=+1.368048324 container remove 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 15:14:53 compute-0 systemd[1]: libpod-conmon-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Deactivated successfully.
Oct 07 15:14:53 compute-0 sudo[448727]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:14:53 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:14:53 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:53 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 66d93530-0350-4d01-b885-93c78c8957c2 does not exist
Oct 07 15:14:53 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a6084d7b-42d8-4d49-98a6-108295f2b42b does not exist
Oct 07 15:14:53 compute-0 sudo[448896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:14:53 compute-0 sudo[448896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:53 compute-0 sudo[448896]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:54 compute-0 sudo[448921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:14:54 compute-0 sudo[448921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:14:54 compute-0 sudo[448921]: pam_unix(sudo:session): session closed for user root
Oct 07 15:14:54 compute-0 nova_compute[259550]: 2025-10-07 15:14:54.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:54 compute-0 ceph-mon[74295]: pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:14:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:56 compute-0 ceph-mon[74295]: pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:57 compute-0 podman[448947]: 2025-10-07 15:14:57.101271949 +0000 UTC m=+0.072676463 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 07 15:14:57 compute-0 podman[448946]: 2025-10-07 15:14:57.110708827 +0000 UTC m=+0.093993613 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 07 15:14:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:14:58 compute-0 nova_compute[259550]: 2025-10-07 15:14:58.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:58 compute-0 ceph-mon[74295]: pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:14:59 compute-0 nova_compute[259550]: 2025-10-07 15:14:59.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:14:59 compute-0 ceph-mon[74295]: pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:15:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:15:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:02 compute-0 ceph-mon[74295]: pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:03 compute-0 nova_compute[259550]: 2025-10-07 15:15:03.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:04 compute-0 ceph-mon[74295]: pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:04 compute-0 nova_compute[259550]: 2025-10-07 15:15:04.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:06 compute-0 ceph-mon[74295]: pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 07 15:15:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 07 15:15:07 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 07 15:15:08 compute-0 ceph-mon[74295]: pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:08 compute-0 ceph-mon[74295]: osdmap e307: 3 total, 3 up, 3 in
Oct 07 15:15:08 compute-0 nova_compute[259550]: 2025-10-07 15:15:08.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 37 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 6 op/s
Oct 07 15:15:09 compute-0 nova_compute[259550]: 2025-10-07 15:15:09.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 07 15:15:10 compute-0 ceph-mon[74295]: pgmap v3410: 305 pgs: 305 active+clean; 37 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 6 op/s
Oct 07 15:15:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 07 15:15:10 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 07 15:15:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Oct 07 15:15:11 compute-0 ceph-mon[74295]: osdmap e308: 3 total, 3 up, 3 in
Oct 07 15:15:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 07 15:15:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 07 15:15:12 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.476664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112476732, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 734, "num_deletes": 258, "total_data_size": 867344, "memory_usage": 880520, "flush_reason": "Manual Compaction"}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112486772, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 858833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70419, "largest_seqno": 71152, "table_properties": {"data_size": 855002, "index_size": 1610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8674, "raw_average_key_size": 19, "raw_value_size": 847200, "raw_average_value_size": 1886, "num_data_blocks": 71, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850058, "oldest_key_time": 1759850058, "file_creation_time": 1759850112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 10149 microseconds, and 4436 cpu microseconds.
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.486822) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 858833 bytes OK
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.486845) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490041) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490057) EVENT_LOG_v1 {"time_micros": 1759850112490051, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 863544, prev total WAL file size 863544, number of live WAL files 2.
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(838KB)], [167(9269KB)]
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112490616, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10351042, "oldest_snapshot_seqno": -1}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8662 keys, 10238067 bytes, temperature: kUnknown
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112586286, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10238067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10183311, "index_size": 31987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 228455, "raw_average_key_size": 26, "raw_value_size": 10031790, "raw_average_value_size": 1158, "num_data_blocks": 1234, "num_entries": 8662, "num_filter_entries": 8662, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.586620) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10238067 bytes
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.588053) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.1 rd, 106.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(24.0) write-amplify(11.9) OK, records in: 9192, records dropped: 530 output_compression: NoCompression
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.588072) EVENT_LOG_v1 {"time_micros": 1759850112588063, "job": 104, "event": "compaction_finished", "compaction_time_micros": 95782, "compaction_time_cpu_micros": 43279, "output_level": 6, "num_output_files": 1, "total_output_size": 10238067, "num_input_records": 9192, "num_output_records": 8662, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112588391, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112590205, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:15:12 compute-0 ceph-mon[74295]: pgmap v3412: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Oct 07 15:15:12 compute-0 ceph-mon[74295]: osdmap e309: 3 total, 3 up, 3 in
Oct 07 15:15:13 compute-0 podman[448986]: 2025-10-07 15:15:13.142448639 +0000 UTC m=+0.113751623 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:15:13 compute-0 podman[448987]: 2025-10-07 15:15:13.147790279 +0000 UTC m=+0.114161124 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:15:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Oct 07 15:15:13 compute-0 nova_compute[259550]: 2025-10-07 15:15:13.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:14 compute-0 nova_compute[259550]: 2025-10-07 15:15:14.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:14 compute-0 ceph-mon[74295]: pgmap v3414: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Oct 07 15:15:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 4.9 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Oct 07 15:15:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 07 15:15:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 07 15:15:16 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 07 15:15:16 compute-0 ceph-mon[74295]: pgmap v3415: 305 pgs: 305 active+clean; 4.9 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Oct 07 15:15:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 8.4 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 15:15:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:17 compute-0 ceph-mon[74295]: osdmap e310: 3 total, 3 up, 3 in
Oct 07 15:15:18 compute-0 nova_compute[259550]: 2025-10-07 15:15:18.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:18 compute-0 nova_compute[259550]: 2025-10-07 15:15:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:19 compute-0 ceph-mon[74295]: pgmap v3417: 305 pgs: 305 active+clean; 8.4 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct 07 15:15:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 21 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 44 op/s
Oct 07 15:15:19 compute-0 nova_compute[259550]: 2025-10-07 15:15:19.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:20 compute-0 ceph-mon[74295]: pgmap v3418: 305 pgs: 305 active+clean; 21 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 44 op/s
Oct 07 15:15:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.3 MiB/s wr, 40 op/s
Oct 07 15:15:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 07 15:15:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 07 15:15:22 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:22 compute-0 ceph-mon[74295]: pgmap v3419: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.3 MiB/s wr, 40 op/s
Oct 07 15:15:22 compute-0 ceph-mon[74295]: osdmap e311: 3 total, 3 up, 3 in
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:15:22
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'images', 'backups', 'vms', 'default.rgw.meta']
Oct 07 15:15:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:15:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:15:23 compute-0 nova_compute[259550]: 2025-10-07 15:15:23.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:24 compute-0 nova_compute[259550]: 2025-10-07 15:15:24.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:24 compute-0 ceph-mon[74295]: pgmap v3421: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Oct 07 15:15:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Oct 07 15:15:26 compute-0 ceph-mon[74295]: pgmap v3422: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Oct 07 15:15:26 compute-0 nova_compute[259550]: 2025-10-07 15:15:26.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:26 compute-0 nova_compute[259550]: 2025-10-07 15:15:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:26 compute-0 nova_compute[259550]: 2025-10-07 15:15:26.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:15:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Oct 07 15:15:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:28 compute-0 ceph-mon[74295]: pgmap v3423: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Oct 07 15:15:28 compute-0 podman[449033]: 2025-10-07 15:15:28.07568542 +0000 UTC m=+0.054175975 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:15:28 compute-0 podman[449032]: 2025-10-07 15:15:28.101975582 +0000 UTC m=+0.086320441 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 07 15:15:28 compute-0 nova_compute[259550]: 2025-10-07 15:15:28.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:28 compute-0 nova_compute[259550]: 2025-10-07 15:15:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:29 compute-0 nova_compute[259550]: 2025-10-07 15:15:29.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:29 compute-0 nova_compute[259550]: 2025-10-07 15:15:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:29 compute-0 nova_compute[259550]: 2025-10-07 15:15:29.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:30 compute-0 ceph-mon[74295]: pgmap v3424: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:15:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:15:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:15:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:15:33 compute-0 ceph-mon[74295]: pgmap v3425: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:15:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:15:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:33 compute-0 nova_compute[259550]: 2025-10-07 15:15:33.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:34 compute-0 ceph-mon[74295]: pgmap v3426: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:34 compute-0 nova_compute[259550]: 2025-10-07 15:15:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:36 compute-0 ceph-mon[74295]: pgmap v3427: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:38 compute-0 nova_compute[259550]: 2025-10-07 15:15:38.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:38 compute-0 ceph-mon[74295]: pgmap v3428: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:39 compute-0 nova_compute[259550]: 2025-10-07 15:15:39.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:40 compute-0 ceph-mon[74295]: pgmap v3429: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:42 compute-0 ceph-mon[74295]: pgmap v3430: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:43 compute-0 nova_compute[259550]: 2025-10-07 15:15:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:44 compute-0 podman[449073]: 2025-10-07 15:15:44.062588496 +0000 UTC m=+0.053203331 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 15:15:44 compute-0 podman[449074]: 2025-10-07 15:15:44.115487436 +0000 UTC m=+0.099089616 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 07 15:15:44 compute-0 ceph-mon[74295]: pgmap v3431: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:44 compute-0 nova_compute[259550]: 2025-10-07 15:15:44.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:45 compute-0 nova_compute[259550]: 2025-10-07 15:15:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.012 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:15:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:15:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035377137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:15:46 compute-0 ceph-mon[74295]: pgmap v3432: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2035377137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.663 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.665 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.665 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.666 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.722 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.723 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:15:46 compute-0 nova_compute[259550]: 2025-10-07 15:15:46.742 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:15:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:15:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4744337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:15:47 compute-0 nova_compute[259550]: 2025-10-07 15:15:47.256 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:15:47 compute-0 nova_compute[259550]: 2025-10-07 15:15:47.265 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:15:47 compute-0 nova_compute[259550]: 2025-10-07 15:15:47.285 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:15:47 compute-0 nova_compute[259550]: 2025-10-07 15:15:47.287 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:15:47 compute-0 nova_compute[259550]: 2025-10-07 15:15:47.287 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:15:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4744337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:15:48 compute-0 nova_compute[259550]: 2025-10-07 15:15:48.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:48 compute-0 ceph-mon[74295]: pgmap v3433: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:49 compute-0 nova_compute[259550]: 2025-10-07 15:15:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:50 compute-0 ceph-mon[74295]: pgmap v3434: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:51 compute-0 nova_compute[259550]: 2025-10-07 15:15:51.288 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:51 compute-0 nova_compute[259550]: 2025-10-07 15:15:51.289 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:15:51 compute-0 nova_compute[259550]: 2025-10-07 15:15:51.289 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:15:51 compute-0 nova_compute[259550]: 2025-10-07 15:15:51.304 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:15:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:52 compute-0 ceph-mon[74295]: pgmap v3435: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:15:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:15:52 compute-0 nova_compute[259550]: 2025-10-07 15:15:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:15:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:53 compute-0 nova_compute[259550]: 2025-10-07 15:15:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:54 compute-0 sudo[449165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:54 compute-0 sudo[449165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449165]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:15:54 compute-0 sudo[449190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449190]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:54 compute-0 sudo[449215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449215]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 07 15:15:54 compute-0 sudo[449240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449240]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:15:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:15:54 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:54 compute-0 nova_compute[259550]: 2025-10-07 15:15:54.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:54 compute-0 ceph-mon[74295]: pgmap v3436: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:54 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:54 compute-0 sudo[449285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:54 compute-0 sudo[449285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449285]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:15:54 compute-0 sudo[449310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449310]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:54 compute-0 sudo[449335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:54 compute-0 sudo[449335]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:54 compute-0 sudo[449360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:15:54 compute-0 sudo[449360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:55 compute-0 sudo[449360]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev da8639ee-af7b-4093-8c20-a62e7ee28f55 does not exist
Oct 07 15:15:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5226074b-cbdc-42b4-9cbf-d5c848717b41 does not exist
Oct 07 15:15:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 701eb824-98fe-4803-8beb-f42f1a1f50c7 does not exist
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:15:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:15:55 compute-0 sudo[449416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:55 compute-0 sudo[449416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:55 compute-0 sudo[449416]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:55 compute-0 sudo[449441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:15:55 compute-0 sudo[449441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:55 compute-0 sudo[449441]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:55 compute-0 sudo[449466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:55 compute-0 sudo[449466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:55 compute-0 sudo[449466]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:15:55 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:15:55 compute-0 sudo[449491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:15:55 compute-0 sudo[449491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.133444069 +0000 UTC m=+0.040100946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.255222702 +0000 UTC m=+0.161879559 container create 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:15:56 compute-0 systemd[1]: Started libpod-conmon-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope.
Oct 07 15:15:56 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.555402517 +0000 UTC m=+0.462059394 container init 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.56537808 +0000 UTC m=+0.472034977 container start 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 15:15:56 compute-0 epic_poincare[449574]: 167 167
Oct 07 15:15:56 compute-0 systemd[1]: libpod-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope: Deactivated successfully.
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.714749359 +0000 UTC m=+0.621406316 container attach 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 07 15:15:56 compute-0 podman[449557]: 2025-10-07 15:15:56.717045028 +0000 UTC m=+0.623701985 container died 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:15:56 compute-0 ceph-mon[74295]: pgmap v3437: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f4cb06a5ccca84580d644f67e4184fd4924c28d28c22eb2575e6252a567c579-merged.mount: Deactivated successfully.
Oct 07 15:15:57 compute-0 podman[449557]: 2025-10-07 15:15:57.235869435 +0000 UTC m=+1.142526312 container remove 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:15:57 compute-0 systemd[1]: libpod-conmon-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope: Deactivated successfully.
Oct 07 15:15:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:57 compute-0 podman[449598]: 2025-10-07 15:15:57.39572856 +0000 UTC m=+0.036187773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:15:57 compute-0 podman[449598]: 2025-10-07 15:15:57.490023731 +0000 UTC m=+0.130482894 container create b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 15:15:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:15:57 compute-0 systemd[1]: Started libpod-conmon-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope.
Oct 07 15:15:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:15:57 compute-0 podman[449598]: 2025-10-07 15:15:57.839068991 +0000 UTC m=+0.479528224 container init b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:15:57 compute-0 podman[449598]: 2025-10-07 15:15:57.854300422 +0000 UTC m=+0.494759545 container start b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:15:57 compute-0 podman[449598]: 2025-10-07 15:15:57.974456592 +0000 UTC m=+0.614915745 container attach b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 07 15:15:58 compute-0 nova_compute[259550]: 2025-10-07 15:15:58.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:58 compute-0 competent_bohr[449614]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:15:58 compute-0 competent_bohr[449614]: --> relative data size: 1.0
Oct 07 15:15:58 compute-0 competent_bohr[449614]: --> All data devices are unavailable
Oct 07 15:15:58 compute-0 ceph-mon[74295]: pgmap v3438: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:58 compute-0 systemd[1]: libpod-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Deactivated successfully.
Oct 07 15:15:58 compute-0 podman[449598]: 2025-10-07 15:15:58.980250687 +0000 UTC m=+1.620709810 container died b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 07 15:15:58 compute-0 systemd[1]: libpod-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Consumed 1.080s CPU time.
Oct 07 15:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b-merged.mount: Deactivated successfully.
Oct 07 15:15:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:15:59 compute-0 podman[449598]: 2025-10-07 15:15:59.43320942 +0000 UTC m=+2.073668553 container remove b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:15:59 compute-0 systemd[1]: libpod-conmon-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Deactivated successfully.
Oct 07 15:15:59 compute-0 sudo[449491]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:59 compute-0 podman[449644]: 2025-10-07 15:15:59.519433819 +0000 UTC m=+0.505689722 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 07 15:15:59 compute-0 podman[449653]: 2025-10-07 15:15:59.519689595 +0000 UTC m=+0.503412501 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 07 15:15:59 compute-0 sudo[449687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:59 compute-0 sudo[449687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:59 compute-0 sudo[449687]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:59 compute-0 sudo[449716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:15:59 compute-0 sudo[449716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:59 compute-0 sudo[449716]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:59 compute-0 nova_compute[259550]: 2025-10-07 15:15:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:15:59 compute-0 sudo[449741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:15:59 compute-0 sudo[449741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:15:59 compute-0 sudo[449741]: pam_unix(sudo:session): session closed for user root
Oct 07 15:15:59 compute-0 sudo[449766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:15:59 compute-0 sudo[449766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:16:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.092201754 +0000 UTC m=+0.025448190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:16:00 compute-0 ceph-mon[74295]: pgmap v3439: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.253202099 +0000 UTC m=+0.186448485 container create b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:16:00 compute-0 systemd[1]: Started libpod-conmon-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope.
Oct 07 15:16:00 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.721538757 +0000 UTC m=+0.654785213 container init b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.730733559 +0000 UTC m=+0.663979905 container start b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:16:00 compute-0 stupefied_kalam[449848]: 167 167
Oct 07 15:16:00 compute-0 systemd[1]: libpod-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope: Deactivated successfully.
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.794079905 +0000 UTC m=+0.727326281 container attach b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.794616239 +0000 UTC m=+0.727862615 container died b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-40d0d3847fda64ec7383e4040f3fbff0ed3433f1cb4804a97b2f3951f9cc7466-merged.mount: Deactivated successfully.
Oct 07 15:16:00 compute-0 podman[449831]: 2025-10-07 15:16:00.963113851 +0000 UTC m=+0.896360197 container remove b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:16:00 compute-0 systemd[1]: libpod-conmon-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope: Deactivated successfully.
Oct 07 15:16:01 compute-0 podman[449872]: 2025-10-07 15:16:01.183789516 +0000 UTC m=+0.026408886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:16:01 compute-0 podman[449872]: 2025-10-07 15:16:01.314478742 +0000 UTC m=+0.157098122 container create 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:16:01 compute-0 systemd[1]: Started libpod-conmon-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope.
Oct 07 15:16:01 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:16:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:01 compute-0 podman[449872]: 2025-10-07 15:16:01.435579338 +0000 UTC m=+0.278198708 container init 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:16:01 compute-0 podman[449872]: 2025-10-07 15:16:01.446761463 +0000 UTC m=+0.289380813 container start 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:16:01 compute-0 podman[449872]: 2025-10-07 15:16:01.455391449 +0000 UTC m=+0.298010799 container attach 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:16:02 compute-0 great_murdock[449889]: {
Oct 07 15:16:02 compute-0 great_murdock[449889]:     "0": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:         {
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "devices": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "/dev/loop3"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             ],
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_name": "ceph_lv0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_size": "21470642176",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "name": "ceph_lv0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "tags": {
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_name": "ceph",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.crush_device_class": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.encrypted": "0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_id": "0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.vdo": "0"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             },
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "vg_name": "ceph_vg0"
Oct 07 15:16:02 compute-0 great_murdock[449889]:         }
Oct 07 15:16:02 compute-0 great_murdock[449889]:     ],
Oct 07 15:16:02 compute-0 great_murdock[449889]:     "1": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:         {
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "devices": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "/dev/loop4"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             ],
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_name": "ceph_lv1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_size": "21470642176",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "name": "ceph_lv1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "tags": {
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_name": "ceph",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.crush_device_class": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.encrypted": "0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_id": "1",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.vdo": "0"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             },
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "vg_name": "ceph_vg1"
Oct 07 15:16:02 compute-0 great_murdock[449889]:         }
Oct 07 15:16:02 compute-0 great_murdock[449889]:     ],
Oct 07 15:16:02 compute-0 great_murdock[449889]:     "2": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:         {
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "devices": [
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "/dev/loop5"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             ],
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_name": "ceph_lv2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_size": "21470642176",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "name": "ceph_lv2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "tags": {
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.cluster_name": "ceph",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.crush_device_class": "",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.encrypted": "0",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osd_id": "2",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:                 "ceph.vdo": "0"
Oct 07 15:16:02 compute-0 great_murdock[449889]:             },
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "type": "block",
Oct 07 15:16:02 compute-0 great_murdock[449889]:             "vg_name": "ceph_vg2"
Oct 07 15:16:02 compute-0 great_murdock[449889]:         }
Oct 07 15:16:02 compute-0 great_murdock[449889]:     ]
Oct 07 15:16:02 compute-0 great_murdock[449889]: }
Oct 07 15:16:02 compute-0 systemd[1]: libpod-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope: Deactivated successfully.
Oct 07 15:16:02 compute-0 podman[449872]: 2025-10-07 15:16:02.304824781 +0000 UTC m=+1.147444131 container died 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947-merged.mount: Deactivated successfully.
Oct 07 15:16:02 compute-0 podman[449872]: 2025-10-07 15:16:02.375449809 +0000 UTC m=+1.218069159 container remove 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:16:02 compute-0 systemd[1]: libpod-conmon-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope: Deactivated successfully.
Oct 07 15:16:02 compute-0 sudo[449766]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:02 compute-0 ceph-mon[74295]: pgmap v3440: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:02 compute-0 sudo[449909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:16:02 compute-0 sudo[449909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:02 compute-0 sudo[449909]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:02 compute-0 sudo[449934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:16:02 compute-0 sudo[449934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:02 compute-0 sudo[449934]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:02 compute-0 sudo[449959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:16:02 compute-0 sudo[449959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:02 compute-0 sudo[449959]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:02 compute-0 sudo[449984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:16:02 compute-0 sudo[449984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.041602681 +0000 UTC m=+0.038051783 container create cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 15:16:03 compute-0 systemd[1]: Started libpod-conmon-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope.
Oct 07 15:16:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.120251599 +0000 UTC m=+0.116700711 container init cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.023809013 +0000 UTC m=+0.020258115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.127628783 +0000 UTC m=+0.124077875 container start cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:16:03 compute-0 zen_feynman[450067]: 167 167
Oct 07 15:16:03 compute-0 systemd[1]: libpod-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope: Deactivated successfully.
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.133439256 +0000 UTC m=+0.129888358 container attach cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.134093054 +0000 UTC m=+0.130542146 container died cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-28a11f444fcdcf5045486b8752ed82a835886fb001feea17619c80f59beb2f0f-merged.mount: Deactivated successfully.
Oct 07 15:16:03 compute-0 podman[450051]: 2025-10-07 15:16:03.179688182 +0000 UTC m=+0.176137274 container remove cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:16:03 compute-0 systemd[1]: libpod-conmon-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope: Deactivated successfully.
Oct 07 15:16:03 compute-0 podman[450090]: 2025-10-07 15:16:03.340066631 +0000 UTC m=+0.044088911 container create 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 07 15:16:03 compute-0 systemd[1]: Started libpod-conmon-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope.
Oct 07 15:16:03 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:16:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:03 compute-0 podman[450090]: 2025-10-07 15:16:03.405604015 +0000 UTC m=+0.109626315 container init 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:16:03 compute-0 podman[450090]: 2025-10-07 15:16:03.413253196 +0000 UTC m=+0.117275486 container start 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:16:03 compute-0 podman[450090]: 2025-10-07 15:16:03.320001623 +0000 UTC m=+0.024023953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:16:03 compute-0 podman[450090]: 2025-10-07 15:16:03.417912968 +0000 UTC m=+0.121935268 container attach 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 07 15:16:03 compute-0 nova_compute[259550]: 2025-10-07 15:16:03.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]: {
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_id": 2,
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "type": "bluestore"
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     },
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_id": 1,
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "type": "bluestore"
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     },
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_id": 0,
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:         "type": "bluestore"
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]:     }
Oct 07 15:16:04 compute-0 wizardly_albattani[450107]: }
Oct 07 15:16:04 compute-0 systemd[1]: libpod-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Deactivated successfully.
Oct 07 15:16:04 compute-0 podman[450090]: 2025-10-07 15:16:04.468486121 +0000 UTC m=+1.172508401 container died 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:16:04 compute-0 systemd[1]: libpod-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Consumed 1.062s CPU time.
Oct 07 15:16:04 compute-0 ceph-mon[74295]: pgmap v3441: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563-merged.mount: Deactivated successfully.
Oct 07 15:16:04 compute-0 podman[450090]: 2025-10-07 15:16:04.56650212 +0000 UTC m=+1.270524410 container remove 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 07 15:16:04 compute-0 systemd[1]: libpod-conmon-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Deactivated successfully.
Oct 07 15:16:04 compute-0 sudo[449984]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:16:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:16:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:16:04 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:16:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 27b8b4a8-9bbb-424d-a626-a1d237c023e5 does not exist
Oct 07 15:16:04 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 49946aa6-69b7-42c3-b6a5-96ef96126bc4 does not exist
Oct 07 15:16:04 compute-0 nova_compute[259550]: 2025-10-07 15:16:04.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:04 compute-0 sudo[450155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:16:04 compute-0 sudo[450155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:04 compute-0 sudo[450155]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:04 compute-0 sudo[450180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:16:04 compute-0 sudo[450180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:16:04 compute-0 sudo[450180]: pam_unix(sudo:session): session closed for user root
Oct 07 15:16:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:16:05 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:16:06 compute-0 ceph-mon[74295]: pgmap v3442: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 07 15:16:08 compute-0 nova_compute[259550]: 2025-10-07 15:16:08.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 07 15:16:09 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 07 15:16:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:09 compute-0 ceph-mon[74295]: pgmap v3443: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:09 compute-0 nova_compute[259550]: 2025-10-07 15:16:09.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:09 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 07 15:16:10 compute-0 ceph-mon[74295]: osdmap e312: 3 total, 3 up, 3 in
Oct 07 15:16:10 compute-0 ceph-mon[74295]: pgmap v3445: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:12 compute-0 ceph-mon[74295]: pgmap v3446: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:13 compute-0 nova_compute[259550]: 2025-10-07 15:16:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:14 compute-0 nova_compute[259550]: 2025-10-07 15:16:14.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:14 compute-0 ceph-mon[74295]: pgmap v3447: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 07 15:16:15 compute-0 podman[450205]: 2025-10-07 15:16:15.08984574 +0000 UTC m=+0.071986515 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 15:16:15 compute-0 podman[450206]: 2025-10-07 15:16:15.164252037 +0000 UTC m=+0.143918796 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 15:16:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 8.4 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct 07 15:16:16 compute-0 ceph-mon[74295]: pgmap v3448: 305 pgs: 305 active+clean; 8.4 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct 07 15:16:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 07 15:16:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 07 15:16:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 07 15:16:17 compute-0 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 07 15:16:18 compute-0 ceph-mon[74295]: pgmap v3449: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 07 15:16:18 compute-0 ceph-mon[74295]: osdmap e313: 3 total, 3 up, 3 in
Oct 07 15:16:18 compute-0 nova_compute[259550]: 2025-10-07 15:16:18.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:19 compute-0 nova_compute[259550]: 2025-10-07 15:16:19.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:20 compute-0 nova_compute[259550]: 2025-10-07 15:16:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:21 compute-0 ceph-mon[74295]: pgmap v3451: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:22 compute-0 ceph-mon[74295]: pgmap v3452: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:16:22
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct 07 15:16:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:16:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:16:23 compute-0 nova_compute[259550]: 2025-10-07 15:16:23.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:24 compute-0 nova_compute[259550]: 2025-10-07 15:16:24.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:24 compute-0 ceph-mon[74295]: pgmap v3453: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 07 15:16:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct 07 15:16:26 compute-0 ceph-mon[74295]: pgmap v3454: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct 07 15:16:26 compute-0 nova_compute[259550]: 2025-10-07 15:16:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:26 compute-0 nova_compute[259550]: 2025-10-07 15:16:26.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:16:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:27 compute-0 nova_compute[259550]: 2025-10-07 15:16:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:28 compute-0 ceph-mon[74295]: pgmap v3455: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:28 compute-0 nova_compute[259550]: 2025-10-07 15:16:28.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:29 compute-0 nova_compute[259550]: 2025-10-07 15:16:29.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:29 compute-0 nova_compute[259550]: 2025-10-07 15:16:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:30 compute-0 podman[450249]: 2025-10-07 15:16:30.073453887 +0000 UTC m=+0.060874681 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 15:16:30 compute-0 podman[450250]: 2025-10-07 15:16:30.074445124 +0000 UTC m=+0.059401044 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:16:30 compute-0 ceph-mon[74295]: pgmap v3456: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:30 compute-0 nova_compute[259550]: 2025-10-07 15:16:30.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:31 compute-0 nova_compute[259550]: 2025-10-07 15:16:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:16:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:16:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:16:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:16:32 compute-0 ceph-mon[74295]: pgmap v3457: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:16:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:16:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:33 compute-0 nova_compute[259550]: 2025-10-07 15:16:33.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:33 compute-0 nova_compute[259550]: 2025-10-07 15:16:33.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:34 compute-0 ceph-mon[74295]: pgmap v3458: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:34 compute-0 nova_compute[259550]: 2025-10-07 15:16:34.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:36 compute-0 ceph-mon[74295]: pgmap v3459: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:38 compute-0 nova_compute[259550]: 2025-10-07 15:16:38.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:38 compute-0 ceph-mon[74295]: pgmap v3460: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:39 compute-0 nova_compute[259550]: 2025-10-07 15:16:39.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:40 compute-0 ceph-mon[74295]: pgmap v3461: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:42 compute-0 ceph-mon[74295]: pgmap v3462: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:43 compute-0 nova_compute[259550]: 2025-10-07 15:16:43.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:44 compute-0 ceph-mon[74295]: pgmap v3463: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:44 compute-0 nova_compute[259550]: 2025-10-07 15:16:44.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:46 compute-0 podman[450291]: 2025-10-07 15:16:46.094312236 +0000 UTC m=+0.086813744 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 07 15:16:46 compute-0 podman[450292]: 2025-10-07 15:16:46.100852659 +0000 UTC m=+0.091734354 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:16:46 compute-0 ceph-mon[74295]: pgmap v3464: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:47 compute-0 nova_compute[259550]: 2025-10-07 15:16:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:16:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:16:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277041791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.546 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:16:48 compute-0 ceph-mon[74295]: pgmap v3465: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:48 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1277041791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.692 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.751 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.751 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.767 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:16:48 compute-0 nova_compute[259550]: 2025-10-07 15:16:48.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:16:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081836187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.232 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.239 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.258 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.260 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.260 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:16:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3081836187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:16:49 compute-0 nova_compute[259550]: 2025-10-07 15:16:49.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:50 compute-0 ceph-mon[74295]: pgmap v3466: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:52 compute-0 nova_compute[259550]: 2025-10-07 15:16:52.261 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:52 compute-0 nova_compute[259550]: 2025-10-07 15:16:52.262 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:16:52 compute-0 nova_compute[259550]: 2025-10-07 15:16:52.262 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:16:52 compute-0 nova_compute[259550]: 2025-10-07 15:16:52.297 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:16:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:16:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:16:52 compute-0 ceph-mon[74295]: pgmap v3467: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:53 compute-0 nova_compute[259550]: 2025-10-07 15:16:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:53 compute-0 nova_compute[259550]: 2025-10-07 15:16:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:16:54 compute-0 nova_compute[259550]: 2025-10-07 15:16:54.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:54 compute-0 ceph-mon[74295]: pgmap v3468: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:56 compute-0 ceph-mon[74295]: pgmap v3469: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:16:58 compute-0 nova_compute[259550]: 2025-10-07 15:16:58.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:16:58 compute-0 ceph-mon[74295]: pgmap v3470: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:16:59 compute-0 nova_compute[259550]: 2025-10-07 15:16:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:00 compute-0 ceph-mon[74295]: pgmap v3471: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.118 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.119 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:17:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.119 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:17:01 compute-0 podman[450376]: 2025-10-07 15:17:01.06715326 +0000 UTC m=+0.052172443 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 07 15:17:01 compute-0 podman[450377]: 2025-10-07 15:17:01.095776634 +0000 UTC m=+0.081358162 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 07 15:17:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:02 compute-0 ceph-mon[74295]: pgmap v3472: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:03 compute-0 nova_compute[259550]: 2025-10-07 15:17:03.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:04 compute-0 nova_compute[259550]: 2025-10-07 15:17:04.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:04 compute-0 sudo[450416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:04 compute-0 sudo[450416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:04 compute-0 sudo[450416]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:04 compute-0 ceph-mon[74295]: pgmap v3473: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:04 compute-0 sudo[450441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:17:04 compute-0 sudo[450441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:04 compute-0 sudo[450441]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:05 compute-0 sudo[450466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:05 compute-0 sudo[450466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:05 compute-0 sudo[450466]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:05 compute-0 sudo[450491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 07 15:17:05 compute-0 sudo[450491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:05 compute-0 podman[450589]: 2025-10-07 15:17:05.743654875 +0000 UTC m=+0.130755480 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:17:05 compute-0 podman[450589]: 2025-10-07 15:17:05.870617845 +0000 UTC m=+0.257718460 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 07 15:17:06 compute-0 ceph-mon[74295]: pgmap v3474: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:06 compute-0 sudo[450491]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:17:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:17:06 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:07 compute-0 sudo[450748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:07 compute-0 sudo[450748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:07 compute-0 sudo[450748]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:07 compute-0 sudo[450773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:17:07 compute-0 sudo[450773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:07 compute-0 sudo[450773]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:07 compute-0 sudo[450798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:07 compute-0 sudo[450798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:07 compute-0 sudo[450798]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:07 compute-0 sudo[450823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:17:07 compute-0 sudo[450823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:07 compute-0 sudo[450823]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d22b4abc-bf75-420a-ada5-30d408deab50 does not exist
Oct 07 15:17:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 76976241-f4d4-4fd6-b83a-1f932378c2f3 does not exist
Oct 07 15:17:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3c1abc65-b67a-43de-a5ea-54c9c5f42538 does not exist
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:17:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:07 compute-0 sudo[450879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:07 compute-0 sudo[450879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:07 compute-0 sudo[450879]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:17:07 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:08 compute-0 sudo[450904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:17:08 compute-0 sudo[450904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:08 compute-0 sudo[450904]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:08 compute-0 sudo[450929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:08 compute-0 sudo[450929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:08 compute-0 sudo[450929]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:08 compute-0 sudo[450954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:17:08 compute-0 sudo[450954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.57210003 +0000 UTC m=+0.060620795 container create 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:17:08 compute-0 systemd[1]: Started libpod-conmon-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope.
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.539827231 +0000 UTC m=+0.028348046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.670309983 +0000 UTC m=+0.158830858 container init 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.67969627 +0000 UTC m=+0.168217035 container start 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:17:08 compute-0 exciting_chaplygin[451035]: 167 167
Oct 07 15:17:08 compute-0 systemd[1]: libpod-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope: Deactivated successfully.
Oct 07 15:17:08 compute-0 conmon[451035]: conmon 18ba261746bddfc86906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope/container/memory.events
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.696168214 +0000 UTC m=+0.184689079 container attach 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.696901493 +0000 UTC m=+0.185422298 container died 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 07 15:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7019f26ad2b515eafc39b1aa8a45c252083dd84bf7dfcd5b581e1201c331a796-merged.mount: Deactivated successfully.
Oct 07 15:17:08 compute-0 podman[451019]: 2025-10-07 15:17:08.848178041 +0000 UTC m=+0.336698836 container remove 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 15:17:08 compute-0 systemd[1]: libpod-conmon-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope: Deactivated successfully.
Oct 07 15:17:08 compute-0 nova_compute[259550]: 2025-10-07 15:17:08.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:09 compute-0 ceph-mon[74295]: pgmap v3475: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:09 compute-0 podman[451062]: 2025-10-07 15:17:09.082561236 +0000 UTC m=+0.074254483 container create 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:17:09 compute-0 podman[451062]: 2025-10-07 15:17:09.036503205 +0000 UTC m=+0.028196532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:09 compute-0 systemd[1]: Started libpod-conmon-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope.
Oct 07 15:17:09 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:09 compute-0 podman[451062]: 2025-10-07 15:17:09.195341013 +0000 UTC m=+0.187034340 container init 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:17:09 compute-0 podman[451062]: 2025-10-07 15:17:09.209482174 +0000 UTC m=+0.201175411 container start 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 07 15:17:09 compute-0 podman[451062]: 2025-10-07 15:17:09.21767956 +0000 UTC m=+0.209372897 container attach 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 15:17:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:09 compute-0 nova_compute[259550]: 2025-10-07 15:17:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:10 compute-0 ceph-mon[74295]: pgmap v3476: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:10 compute-0 heuristic_greider[451079]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:17:10 compute-0 heuristic_greider[451079]: --> relative data size: 1.0
Oct 07 15:17:10 compute-0 heuristic_greider[451079]: --> All data devices are unavailable
Oct 07 15:17:10 compute-0 systemd[1]: libpod-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Deactivated successfully.
Oct 07 15:17:10 compute-0 systemd[1]: libpod-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Consumed 1.035s CPU time.
Oct 07 15:17:10 compute-0 podman[451062]: 2025-10-07 15:17:10.291374591 +0000 UTC m=+1.283067828 container died 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 15:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe-merged.mount: Deactivated successfully.
Oct 07 15:17:10 compute-0 podman[451062]: 2025-10-07 15:17:10.643296238 +0000 UTC m=+1.634989515 container remove 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 15:17:10 compute-0 sudo[450954]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:10 compute-0 systemd[1]: libpod-conmon-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Deactivated successfully.
Oct 07 15:17:10 compute-0 sudo[451120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:10 compute-0 sudo[451120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:10 compute-0 sudo[451120]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:10 compute-0 sudo[451145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:17:10 compute-0 sudo[451145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:10 compute-0 sudo[451145]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:10 compute-0 sudo[451170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:10 compute-0 sudo[451170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:10 compute-0 sudo[451170]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:10 compute-0 sudo[451195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:17:10 compute-0 sudo[451195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.350405197 +0000 UTC m=+0.083309233 container create a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.292581996 +0000 UTC m=+0.025486042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:11 compute-0 systemd[1]: Started libpod-conmon-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope.
Oct 07 15:17:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:11 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.488683143 +0000 UTC m=+0.221587189 container init a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.497027663 +0000 UTC m=+0.229931659 container start a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.504266273 +0000 UTC m=+0.237170349 container attach a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 07 15:17:11 compute-0 nervous_allen[451276]: 167 167
Oct 07 15:17:11 compute-0 systemd[1]: libpod-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope: Deactivated successfully.
Oct 07 15:17:11 compute-0 conmon[451276]: conmon a9997cf5bd7170a1bd81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope/container/memory.events
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.509315566 +0000 UTC m=+0.242219672 container died a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b97085cca2ebeb335a05cd450b5aa457b2bdd7f0e9ff24a2d4d1dcd55907eb-merged.mount: Deactivated successfully.
Oct 07 15:17:11 compute-0 podman[451259]: 2025-10-07 15:17:11.688860379 +0000 UTC m=+0.421764415 container remove a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:17:11 compute-0 systemd[1]: libpod-conmon-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope: Deactivated successfully.
Oct 07 15:17:11 compute-0 podman[451302]: 2025-10-07 15:17:11.89725952 +0000 UTC m=+0.074670415 container create bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 07 15:17:11 compute-0 podman[451302]: 2025-10-07 15:17:11.852298788 +0000 UTC m=+0.029709723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:11 compute-0 systemd[1]: Started libpod-conmon-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope.
Oct 07 15:17:12 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:12 compute-0 podman[451302]: 2025-10-07 15:17:12.066021179 +0000 UTC m=+0.243432134 container init bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 07 15:17:12 compute-0 podman[451302]: 2025-10-07 15:17:12.077710806 +0000 UTC m=+0.255121721 container start bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:17:12 compute-0 podman[451302]: 2025-10-07 15:17:12.113807276 +0000 UTC m=+0.291218191 container attach bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 07 15:17:12 compute-0 sshd-session[451323]: Accepted publickey for zuul from 192.168.122.10 port 49146 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:17:12 compute-0 systemd-logind[801]: New session 57 of user zuul.
Oct 07 15:17:12 compute-0 ceph-mon[74295]: pgmap v3477: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:12 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 07 15:17:12 compute-0 sshd-session[451323]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:17:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:12 compute-0 sudo[451327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 07 15:17:12 compute-0 sudo[451327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]: {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     "0": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "devices": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "/dev/loop3"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             ],
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_name": "ceph_lv0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_size": "21470642176",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "name": "ceph_lv0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "tags": {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_name": "ceph",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.crush_device_class": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.encrypted": "0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_id": "0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.vdo": "0"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             },
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "vg_name": "ceph_vg0"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         }
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     ],
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     "1": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "devices": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "/dev/loop4"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             ],
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_name": "ceph_lv1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_size": "21470642176",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "name": "ceph_lv1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "tags": {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_name": "ceph",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.crush_device_class": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.encrypted": "0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_id": "1",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.vdo": "0"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             },
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "vg_name": "ceph_vg1"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         }
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     ],
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     "2": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "devices": [
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "/dev/loop5"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             ],
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_name": "ceph_lv2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_size": "21470642176",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "name": "ceph_lv2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "tags": {
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.cluster_name": "ceph",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.crush_device_class": "",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.encrypted": "0",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osd_id": "2",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:                 "ceph.vdo": "0"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             },
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "type": "block",
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:             "vg_name": "ceph_vg2"
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:         }
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]:     ]
Oct 07 15:17:12 compute-0 bold_varahamihira[451318]: }
Oct 07 15:17:13 compute-0 podman[451365]: 2025-10-07 15:17:13.062291443 +0000 UTC m=+0.033729198 container died bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 15:17:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:13 compute-0 systemd[1]: libpod-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope: Deactivated successfully.
Oct 07 15:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae-merged.mount: Deactivated successfully.
Oct 07 15:17:13 compute-0 nova_compute[259550]: 2025-10-07 15:17:13.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:14 compute-0 podman[451365]: 2025-10-07 15:17:14.009167168 +0000 UTC m=+0.980604943 container remove bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:17:14 compute-0 systemd[1]: libpod-conmon-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope: Deactivated successfully.
Oct 07 15:17:14 compute-0 sudo[451195]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:14 compute-0 sudo[451398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:14 compute-0 sudo[451398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:14 compute-0 sudo[451398]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:14 compute-0 sudo[451433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:17:14 compute-0 sudo[451433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:14 compute-0 sudo[451433]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:14 compute-0 sudo[451470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:14 compute-0 sudo[451470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:14 compute-0 sudo[451470]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:14 compute-0 sudo[451498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:17:14 compute-0 sudo[451498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:14 compute-0 ceph-mon[74295]: pgmap v3478: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.67759901 +0000 UTC m=+0.045400966 container create 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:17:14 compute-0 systemd[1]: Started libpod-conmon-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope.
Oct 07 15:17:14 compute-0 nova_compute[259550]: 2025-10-07 15:17:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.656237919 +0000 UTC m=+0.024039885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.766261262 +0000 UTC m=+0.134063238 container init 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.774385016 +0000 UTC m=+0.142186992 container start 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.778139425 +0000 UTC m=+0.145941381 container attach 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:17:14 compute-0 jolly_payne[451599]: 167 167
Oct 07 15:17:14 compute-0 systemd[1]: libpod-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope: Deactivated successfully.
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.780883737 +0000 UTC m=+0.148685713 container died 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c483ce2195fbc27755e079a85699c838055a8a7873078da3df7c8cde3478edcc-merged.mount: Deactivated successfully.
Oct 07 15:17:14 compute-0 podman[451567]: 2025-10-07 15:17:14.832202956 +0000 UTC m=+0.200004912 container remove 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 15:17:14 compute-0 systemd[1]: libpod-conmon-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope: Deactivated successfully.
Oct 07 15:17:14 compute-0 nova_compute[259550]: 2025-10-07 15:17:14.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:14 compute-0 nova_compute[259550]: 2025-10-07 15:17:14.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 15:17:15 compute-0 nova_compute[259550]: 2025-10-07 15:17:15.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 15:17:15 compute-0 podman[451630]: 2025-10-07 15:17:15.068360088 +0000 UTC m=+0.087168013 container create 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:17:15 compute-0 podman[451630]: 2025-10-07 15:17:15.0246885 +0000 UTC m=+0.043496505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:17:15 compute-0 systemd[1]: Started libpod-conmon-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope.
Oct 07 15:17:15 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:17:15 compute-0 podman[451630]: 2025-10-07 15:17:15.188587311 +0000 UTC m=+0.207395246 container init 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:17:15 compute-0 podman[451630]: 2025-10-07 15:17:15.196859568 +0000 UTC m=+0.215667483 container start 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 07 15:17:15 compute-0 podman[451630]: 2025-10-07 15:17:15.205734621 +0000 UTC m=+0.224542546 container attach 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 15:17:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:15 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23073 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:16 compute-0 exciting_galois[451650]: {
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_id": 2,
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "type": "bluestore"
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     },
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_id": 1,
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "type": "bluestore"
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     },
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_id": 0,
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:17:16 compute-0 exciting_galois[451650]:         "type": "bluestore"
Oct 07 15:17:16 compute-0 exciting_galois[451650]:     }
Oct 07 15:17:16 compute-0 exciting_galois[451650]: }
Oct 07 15:17:16 compute-0 systemd[1]: libpod-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Deactivated successfully.
Oct 07 15:17:16 compute-0 systemd[1]: libpod-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Consumed 1.138s CPU time.
Oct 07 15:17:16 compute-0 podman[451630]: 2025-10-07 15:17:16.329755606 +0000 UTC m=+1.348563561 container died 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b-merged.mount: Deactivated successfully.
Oct 07 15:17:16 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:16 compute-0 ceph-mon[74295]: pgmap v3479: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:16 compute-0 ceph-mon[74295]: from='client.23073 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:17 compute-0 podman[451630]: 2025-10-07 15:17:17.114547377 +0000 UTC m=+2.133355292 container remove 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:17:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 07 15:17:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867773775' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:17:17 compute-0 systemd[1]: libpod-conmon-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Deactivated successfully.
Oct 07 15:17:17 compute-0 podman[451776]: 2025-10-07 15:17:17.18797377 +0000 UTC m=+0.822789293 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 15:17:17 compute-0 sudo[451498]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:17:17 compute-0 podman[451783]: 2025-10-07 15:17:17.219402526 +0000 UTC m=+0.851992050 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct 07 15:17:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:17:17 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 5f465ae1-7121-4790-ada9-b10e40466223 does not exist
Oct 07 15:17:17 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f1c1ffcb-eaee-431e-8cb3-2efc4b67a11d does not exist
Oct 07 15:17:17 compute-0 sudo[451856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:17:17 compute-0 sudo[451856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:17 compute-0 sudo[451856]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:17 compute-0 sudo[451886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:17:17 compute-0 sudo[451886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:17:17 compute-0 sudo[451886]: pam_unix(sudo:session): session closed for user root
Oct 07 15:17:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:17 compute-0 ceph-mon[74295]: from='client.23075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1867773775' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:17:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:17 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:17:18 compute-0 ceph-mon[74295]: pgmap v3480: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:18 compute-0 nova_compute[259550]: 2025-10-07 15:17:18.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:19 compute-0 nova_compute[259550]: 2025-10-07 15:17:19.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:20 compute-0 ovs-vsctl[451960]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 07 15:17:20 compute-0 ceph-mon[74295]: pgmap v3481: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:21 compute-0 nova_compute[259550]: 2025-10-07 15:17:21.014 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:21 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 07 15:17:21 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 07 15:17:21 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 07 15:17:22 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: cache status {prefix=cache status} (starting...)
Oct 07 15:17:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:22 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: client ls {prefix=client ls} (starting...)
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:22 compute-0 lvm[452306]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 07 15:17:22 compute-0 lvm[452305]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 07 15:17:22 compute-0 lvm[452305]: VG ceph_vg1 finished
Oct 07 15:17:22 compute-0 lvm[452306]: VG ceph_vg2 finished
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:17:22
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control']
Oct 07 15:17:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:17:22 compute-0 ceph-mon[74295]: pgmap v3482: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:22 compute-0 lvm[452340]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 07 15:17:22 compute-0 lvm[452340]: VG ceph_vg0 finished
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:23 compute-0 kernel: block loop4: the capability attribute has been deprecated.
Oct 07 15:17:23 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: damage ls {prefix=damage ls} (starting...)
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:23 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump loads {prefix=dump loads} (starting...)
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:17:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:17:23 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 07 15:17:23 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 07 15:17:23 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 07 15:17:24 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 07 15:17:24 compute-0 nova_compute[259550]: 2025-10-07 15:17:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 07 15:17:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183215034' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 07 15:17:24 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 07 15:17:24 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23087 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:24 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:24.478+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:17:24 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:17:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:17:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207903652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: ops {prefix=ops} (starting...)
Oct 07 15:17:24 compute-0 nova_compute[259550]: 2025-10-07 15:17:24.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 07 15:17:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192595890' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 07 15:17:24 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957854229' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: pgmap v3483: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.23081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/183215034' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/207903652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2192595890' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 07 15:17:24 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3957854229' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 07 15:17:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 07 15:17:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109361635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:17:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 07 15:17:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688670234' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 07 15:17:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:25 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: session ls {prefix=session ls} (starting...)
Oct 07 15:17:25 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: status {prefix=status} (starting...)
Oct 07 15:17:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 07 15:17:25 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204556699' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:17:25 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23101 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: from='client.23087 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4109361635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2688670234' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4204556699' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23105 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 07 15:17:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976810913' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 07 15:17:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545421245' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:17:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 07 15:17:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901755309' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 07 15:17:26 compute-0 nova_compute[259550]: 2025-10-07 15:17:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:26 compute-0 nova_compute[259550]: 2025-10-07 15:17:26.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:17:26 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 07 15:17:26 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334750667' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 07 15:17:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911627210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: pgmap v3484: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.23101 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.23105 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/976810913' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1545421245' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/901755309' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1334750667' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1911627210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:27 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23117 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 07 15:17:27 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:27.452+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 07 15:17:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 07 15:17:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790706680' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:27 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23121 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 07 15:17:27 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601673510' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: pgmap v3485: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:28 compute-0 ceph-mon[74295]: from='client.23117 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1790706680' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: from='client.23121 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2601673510' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 07 15:17:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818234284' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 07 15:17:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590059880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:17:28 compute-0 nova_compute[259550]: 2025-10-07 15:17:28.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:28 compute-0 nova_compute[259550]: 2025-10-07 15:17:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:29 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:29 compute-0 nova_compute[259550]: 2025-10-07 15:17:29.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:29 compute-0 ceph-mon[74295]: from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2818234284' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 07 15:17:29 compute-0 ceph-mon[74295]: from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:29 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2590059880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:17:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 07 15:17:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873225065' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:13.555622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280879104 unmapped: 47890432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:14.555970+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3267813 data_alloc: 234881024 data_used: 32980992
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:15.556092+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6e000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:16.556237+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fed55860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd065800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:17.556387+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:18.556538+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281911296 unmapped: 46858240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:19.556680+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301733 data_alloc: 234881024 data_used: 37003264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:20.556893+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:21.557043+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.247110367s of 14.595113754s, submitted: 73
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:22.557174+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:23.557324+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:24.557459+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece59000/0x0/0x4ffc00000, data 0x321dda1/0x33a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3297529 data_alloc: 234881024 data_used: 37003264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:25.557594+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:26.557743+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287014912 unmapped: 41754624 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:27.557954+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:28.558360+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:29.558499+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370667 data_alloc: 234881024 data_used: 37146624
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:30.558628+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec4fb000/0x0/0x4ffc00000, data 0x3b7cda1/0x3d03000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:31.558744+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:32.558880+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.609622955s of 10.946825981s, submitted: 59
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:33.560130+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 40304640 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:34.560225+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0f0000/0x0/0x4ffc00000, data 0x3f7fda1/0x4106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3406277 data_alloc: 234881024 data_used: 37318656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 40042496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:35.560346+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288972800 unmapped: 39796736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:36.560515+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:37.560692+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec072000/0x0/0x4ffc00000, data 0x3ffdda1/0x4184000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:38.560833+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:39.560998+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3415403 data_alloc: 234881024 data_used: 37376000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec072000/0x0/0x4ffc00000, data 0x3ffdda1/0x4184000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:40.561171+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:41.561301+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:42.561455+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:43.561578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:44.561787+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3410287 data_alloc: 234881024 data_used: 37376000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:45.561924+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:46.562114+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:47.562276+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:48.562409+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:49.562546+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3410447 data_alloc: 234881024 data_used: 37380096
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:50.562692+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 40435712 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:51.562837+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf400 session 0x55f4fdfa8b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fed665a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 40435712 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.526844025s of 19.174123764s, submitted: 54
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe016d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:52.563013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:53.563154+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:54.563316+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3332683 data_alloc: 234881024 data_used: 36712448
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:55.563440+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eca3a000/0x0/0x4ffc00000, data 0x363dda1/0x37c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:56.563569+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:57.563811+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fed66b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94c00 session 0x55f4fecc2960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:58.564764+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48c00 session 0x55f4fd321860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:59.564887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3129665 data_alloc: 234881024 data_used: 24416256
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:00.565022+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:01.565169+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed6af000/0x0/0x4ffc00000, data 0x25c6d3f/0x274c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:02.565375+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fd065800 session 0x55f4fcf212c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd2e4d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.698882103s of 10.823541641s, submitted: 39
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279519232 unmapped: 49250304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:03.565522+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdfa7860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:04.565663+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee442000/0x0/0x4ffc00000, data 0x1c37d2f/0x1dbc000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020159 data_alloc: 218103808 data_used: 19435520
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:05.565814+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:06.565991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdf8b0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fef4a5a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:07.566140+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48c00 session 0x55f4ff0eaf00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:08.566340+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:09.566538+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:10.566672+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:11.566860+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:12.567367+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:13.567524+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:14.567909+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:15.568084+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:16.568828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:17.569072+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:18.569652+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:19.569796+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:20.570175+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:21.570306+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:22.570531+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:23.570725+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:24.570886+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:25.571069+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:26.571304+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:27.571525+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:28.571775+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:29.571973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:30.572189+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:31.572362+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:32.572552+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:33.572699+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4feefa780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4feefb2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4cd20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd2e4960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94c00 session 0x55f4fcf210e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:34.572836+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:35.573011+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:36.573193+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:37.573358+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:38.573521+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:39.573659+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:40.573799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:41.573934+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272826368 unmapped: 55943168 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.326816559s of 39.382202148s, submitted: 15
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa01680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:42.574050+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:43.574178+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:44.574317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffe29c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe023a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fd29d4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2887858 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6e000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fe016000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6e000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fed66d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdd10d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:45.574436+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e85a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4ff0ea960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fd402960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:46.574570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:47.574714+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:48.574895+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:49.575001+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2928261 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:50.575158+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:51.575274+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:52.575492+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:53.575629+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:54.575750+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.332861900s of 12.470142365s, submitted: 24
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944057 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274358272 unmapped: 54411264 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:55.575874+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee80e000/0x0/0x4ffc00000, data 0x186ad3f/0x19f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4ff9e8960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:56.575993+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7e2000/0x0/0x4ffc00000, data 0x1896d3f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:57.576149+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:58.576288+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdfd5c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: get_auth_request con 0x55f4fdd48c00 auth_method 0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:59.576398+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979497 data_alloc: 218103808 data_used: 17051648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:00.576544+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7e2000/0x0/0x4ffc00000, data 0x1896d3f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:01.576681+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:02.576989+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:03.577115+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:04.577772+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977945 data_alloc: 218103808 data_used: 17051648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:05.577973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:06.578151+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:07.578379+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.412960052s of 13.534274101s, submitted: 32
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:08.578531+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275062784 unmapped: 53706752 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:09.578663+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3032607 data_alloc: 218103808 data_used: 17084416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:10.578797+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd2e43c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe29a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276766720 unmapped: 52002816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:11.578956+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fed54000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:12.579336+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:13.579489+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:14.579623+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014203 data_alloc: 218103808 data_used: 17084416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:15.579764+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276799488 unmapped: 51970048 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:16.579898+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:17.580090+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:18.580234+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:19.580473+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b8000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012987 data_alloc: 218103808 data_used: 17084416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:20.580670+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:21.580887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:22.581013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6e000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4ffdba1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fedade00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b8000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff0ea780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fe024000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.059223175s of 15.161356926s, submitted: 70
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:23.585323+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe023e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf400 session 0x55f4ff98ad20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fcf4be00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed66f00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fe0ea5a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:24.585510+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044288 data_alloc: 218103808 data_used: 17088512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:25.585675+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:26.585809+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:27.585996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:28.586134+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4ff98ad20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65c400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4fe023e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:29.586302+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044288 data_alloc: 218103808 data_used: 17088512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fe024000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff0ea780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:30.586990+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65c400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:31.587185+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50937856 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:32.587315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:33.587456+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:34.587641+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074048 data_alloc: 218103808 data_used: 21282816
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:35.587843+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:36.588008+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:37.588187+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:38.588341+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:39.588487+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074048 data_alloc: 218103808 data_used: 21282816
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:40.588654+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.183795929s of 17.558851242s, submitted: 22
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:41.588803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:42.588921+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:43.589079+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:44.589224+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097804 data_alloc: 218103808 data_used: 21282816
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:45.589427+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:46.589583+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:47.589810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:48.589960+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:49.590112+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097804 data_alloc: 218103808 data_used: 21282816
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:50.590195+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:51.590300+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277995520 unmapped: 50774016 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:52.590430+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.474517822s of 12.056572914s, submitted: 25
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4ffdba1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fcf4ba40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffacf800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279052288 unmapped: 49717248 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:53.591153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fdd5af00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:54.591309+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019490 data_alloc: 218103808 data_used: 17084416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b5000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:55.591794+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:56.592050+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277848064 unmapped: 50921472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:57.592361+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd29d680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277848064 unmapped: 50921472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:58.592500+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fef4a780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:59.592640+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:00.592779+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:01.592951+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:02.593084+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:03.593279+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:04.593433+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:05.593639+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:06.593896+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:07.594157+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:08.594354+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:09.594614+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:10.594790+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:11.595030+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:12.595218+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:13.596081+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:14.596259+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:15.596440+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:16.596649+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:17.596985+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:18.597155+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:19.597364+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:20.597530+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:21.597688+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:22.597828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:23.597994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:24.598131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:25.598326+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:26.598454+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:27.598636+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:28.598792+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:29.598965+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:30.599221+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:31.599426+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:32.599689+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.611000061s of 40.106571198s, submitted: 41
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:33.599974+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 48906240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5b0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65c400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4fdfa8f00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf503c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:34.600143+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4ffa0da40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedada40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2932616 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:35.600291+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffe28b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaac000/0x0/0x4ffc00000, data 0x15cdd2f/0x1752000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff65cc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fecc30e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:36.600452+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:37.600674+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd112c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:38.600810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:39.600991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:40.601161+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:41.601293+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:42.601419+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:43.601539+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:44.601779+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:45.602008+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:46.602150+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:47.602310+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:48.602432+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 53067776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:49.602555+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 53067776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.276357651s of 17.347126007s, submitted: 6
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:50.603777+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275800064 unmapped: 52969472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:51.603905+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:52.604043+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:53.604180+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:54.604324+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018643 data_alloc: 218103808 data_used: 17600512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:55.604464+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:56.604608+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:57.604794+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:58.604946+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:59.605111+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:00.605241+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018643 data_alloc: 218103808 data_used: 17600512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:01.605407+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:02.605527+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:03.605658+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fface000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.575452805s of 13.935386658s, submitted: 29
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:04.605758+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 52109312 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fface000 session 0x55f4fecc3680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9d800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fcf4c780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95000 session 0x55f4fdd110e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95000 session 0x55f4fed66b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa90e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:05.605890+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026528 data_alloc: 218103808 data_used: 17600512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:06.606031+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:07.606258+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:08.606515+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:09.606661+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:10.606807+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026528 data_alloc: 218103808 data_used: 17600512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:11.606975+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:12.607136+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd11c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:13.607283+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fface000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fface000 session 0x55f4fef4b860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:14.607445+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9d800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fdd101e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9d800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281874657s of 10.393685341s, submitted: 26
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:15.607575+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028747 data_alloc: 218103808 data_used: 17600512
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fd2e5e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:16.607705+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:17.607889+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276807680 unmapped: 51961856 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b4000/0x0/0x4ffc00000, data 0x1bc0df7/0x1d4a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:18.608031+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:19.608223+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:20.608363+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029955 data_alloc: 218103808 data_used: 17719296
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:21.608600+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:22.608744+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:23.608895+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:24.609111+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:25.609267+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030263 data_alloc: 218103808 data_used: 17719296
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:26.609468+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:27.609641+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.995096207s of 13.440644264s, submitted: 6
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:28.609769+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282075136 unmapped: 46694400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:29.609901+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:30.610060+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099583 data_alloc: 218103808 data_used: 17780736
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:31.610194+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd3b000/0x0/0x4ffc00000, data 0x232bdf7/0x24b5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:32.610318+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:33.610515+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:34.610660+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:35.610808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092099 data_alloc: 218103808 data_used: 17784832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 46292992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:36.611545+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:37.611747+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd25000/0x0/0x4ffc00000, data 0x234fdf7/0x24d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd25000/0x0/0x4ffc00000, data 0x234fdf7/0x24d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:38.611919+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:39.612088+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.874958992s of 12.207747459s, submitted: 98
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4feca8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fcf4d2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:40.612243+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092027 data_alloc: 218103808 data_used: 17784832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fe0245a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:41.612403+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:42.612561+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:43.612715+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019785a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd4034a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee502000/0x0/0x4ffc00000, data 0x1b73d62/0x1cfa000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:44.612872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd402000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:45.613052+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:46.613304+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:47.613536+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:48.613769+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:49.614083+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:50.614315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:51.614478+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:52.614697+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:53.615008+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:54.615264+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:55.615514+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:56.615736+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:57.616065+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:58.616315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:59.616570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:00.616768+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:01.617001+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:02.617368+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:03.617593+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:04.617793+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:05.618058+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:06.618453+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:07.618664+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:08.618833+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:09.618994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:10.619131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:11.619269+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:12.619403+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:13.619543+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:14.619669+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:15.619810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:16.620028+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:17.620324+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:18.620535+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:19.620714+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:20.620909+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:21.621152+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:22.621387+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:23.621575+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.395324707s of 43.585483551s, submitted: 45
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4ff997680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd321860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9d800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fcf514a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd320000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:24.621797+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:25.621993+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933211 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:26.622443+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:27.623016+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:28.623152+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:29.623342+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff98ab40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:30.623566+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa012c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933211 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fef4af00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff98bc20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:31.623896+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:32.624088+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:33.624233+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:34.624444+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:35.624684+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939369 data_alloc: 218103808 data_used: 13934592
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:36.624993+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:37.625262+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:38.625437+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:39.625699+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:40.626011+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939369 data_alloc: 218103808 data_used: 13934592
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:41.626159+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:42.626338+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.298503876s of 19.327520370s, submitted: 4
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:43.626498+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282157056 unmapped: 46612480 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:44.626650+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee5ba000/0x0/0x4ffc00000, data 0x1ab6d3f/0x1c3c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:45.626779+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015491 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:46.626982+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:47.627153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:48.627390+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee58e000/0x0/0x4ffc00000, data 0x1ae2d3f/0x1c68000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:49.627551+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee58e000/0x0/0x4ffc00000, data 0x1ae2d3f/0x1c68000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:50.627690+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:51.627872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:52.628019+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:53.628163+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:54.628340+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:55.628520+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:56.628687+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:57.628905+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:58.629040+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:59.629181+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:00.629324+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:01.629494+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:02.629648+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:03.629810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:04.630006+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:05.630178+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:06.630366+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:07.630549+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:08.630693+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:09.630853+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:10.630994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:11.631159+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:12.631342+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:13.631720+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:14.631853+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:15.631996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.915740967s of 32.219837189s, submitted: 68
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010407 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:16.632197+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:17.632834+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee591000/0x0/0x4ffc00000, data 0x1ae6d3f/0x1c6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd11c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:18.632987+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fed66b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fcf4c780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fecc3680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdd112c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:19.633211+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:20.633364+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024651 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:21.633514+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee099000/0x0/0x4ffc00000, data 0x1bcfd3f/0x1d55000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:22.633651+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:23.633914+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:24.634135+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:25.634365+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024651 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:26.634519+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee099000/0x0/0x4ffc00000, data 0x1bcfd3f/0x1d55000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.549773216s of 11.732599258s, submitted: 10
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:27.634692+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281591808 unmapped: 47177728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:28.634823+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281600000 unmapped: 47169536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee074000/0x0/0x4ffc00000, data 0x1bf3d62/0x1d7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:29.634969+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:30.635093+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033316 data_alloc: 218103808 data_used: 14778368
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:31.635229+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:32.635433+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:33.635578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:34.635743+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee074000/0x0/0x4ffc00000, data 0x1bf3d62/0x1d7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:35.635916+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033316 data_alloc: 218103808 data_used: 14778368
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:36.636091+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:37.636243+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:38.636404+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:39.636660+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.436887741s of 12.460129738s, submitted: 5
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 46022656 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:40.636877+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3085982 data_alloc: 218103808 data_used: 14929920
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 46022656 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edab2000/0x0/0x4ffc00000, data 0x21add62/0x2334000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:41.637059+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282517504 unmapped: 46252032 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:42.637253+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:43.637453+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:44.637656+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:45.637803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda35000/0x0/0x4ffc00000, data 0x2232d62/0x23b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094090 data_alloc: 218103808 data_used: 14921728
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:46.637959+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda35000/0x0/0x4ffc00000, data 0x2232d62/0x23b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:47.638130+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:48.638277+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:49.638383+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:50.638523+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092590 data_alloc: 218103808 data_used: 14921728
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:51.638726+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 33K writes, 134K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.82 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2785 writes, 10K keys, 2785 commit groups, 1.0 writes per commit group, ingest: 11.72 MB, 0.02 MB/s
                                           Interval WAL: 2785 writes, 1097 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:52.638887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282533888 unmapped: 46235648 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:53.639086+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282533888 unmapped: 46235648 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:54.639280+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:55.639525+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092590 data_alloc: 218103808 data_used: 14921728
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:56.639778+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:57.640037+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:58.640225+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fedada40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fcf4b680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.073291779s of 19.583322525s, submitted: 55
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:59.640369+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:00.640531+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026900 data_alloc: 218103808 data_used: 14237696
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282566656 unmapped: 46202880 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbb2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:01.640756+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:02.640904+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:03.641078+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:04.641269+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:05.641503+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:06.641700+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:07.641892+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:08.642046+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:09.642241+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:10.642439+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:11.642581+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:12.642733+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:13.642884+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:14.643042+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:15.643207+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:16.643348+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:17.643547+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4feefa1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd3210e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:18.643709+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:19.643870+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:20.644003+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.432577133s of 21.670606613s, submitted: 19
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988114 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 46153728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:21.644161+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 46153728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:22.644439+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffe28d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:23.644597+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:24.644772+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:25.644935+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:26.645092+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:27.645283+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:28.645435+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:29.645559+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:30.645735+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:31.645996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:32.646151+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:33.646801+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:34.647001+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:35.647440+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:36.649028+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:37.649986+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:38.650465+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:39.650920+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:40.651460+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:41.651700+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:42.652034+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:43.652912+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:44.653601+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:45.654117+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:46.654364+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf4b4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed55c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdd5ba40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ffdbab40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd403860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:47.654590+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 46112768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:48.654893+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:49.655129+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:50.655391+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:51.655563+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:52.655806+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff98be00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fedade00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:53.656122+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6caf00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.682163239s of 33.369308472s, submitted: 9
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fe0ea5a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:54.656283+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:55.656469+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:56.656617+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:57.656781+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:58.657075+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:59.657213+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:00.657333+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 45916160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:01.657529+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 45899776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:02.657675+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:03.657826+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.930875778s of 10.032606125s, submitted: 31
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:04.657994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:05.658126+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:06.658257+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:07.658448+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:08.658635+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283410432 unmapped: 45359104 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.658766+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283951104 unmapped: 44818432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee52b000/0x0/0x4ffc00000, data 0x173ed2f/0x18c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.658958+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 44769280 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee50e000/0x0/0x4ffc00000, data 0x175bd2f/0x18e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991023 data_alloc: 218103808 data_used: 13512704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.659196+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 43827200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.659351+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 43819008 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.659490+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285999104 unmapped: 42770432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.659666+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.659857+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee0e8000/0x0/0x4ffc00000, data 0x1761d2f/0x18e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1022f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992473 data_alloc: 218103808 data_used: 13512704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.660104+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.660323+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.205342293s of 13.499714851s, submitted: 106
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.660462+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.660696+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.660872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993209 data_alloc: 218103808 data_used: 13512704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.660998+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.661146+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 45596672 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fd2e45a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffdbb4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fd29dc20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.661340+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fdf8a000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffe29860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.661505+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.661680+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.661883+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.662153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.662313+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdf8bc20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.662475+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0c960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.662617+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fdd5b2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.148612022s of 13.547485352s, submitted: 7
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.662746+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbba40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.662884+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.663018+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.663128+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.663241+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.663379+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.663543+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.663728+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.663967+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.665148+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.665372+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.665534+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.794086456s of 11.875811577s, submitted: 5
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.665700+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.665884+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 48177152 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.666021+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289021952 unmapped: 47095808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39e000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187768 data_alloc: 218103808 data_used: 22167552
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.666144+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 47038464 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.666304+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.666455+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.666661+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.666866+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188904 data_alloc: 218103808 data_used: 22589440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.667024+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.667179+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.667435+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ff9972c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.667576+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.230875969s of 11.849612236s, submitted: 48
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.667713+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fed86f00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.667895+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990971 data_alloc: 218103808 data_used: 13225984
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.668188+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.668333+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.668501+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.668665+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.668825+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991147 data_alloc: 218103808 data_used: 13225984
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd5b0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f501979c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.671067+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.671226+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efab8000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.671379+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf214a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.671565+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.671761+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.672049+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.672274+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.672496+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.672726+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.672872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.673074+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.673191+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.673327+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.673536+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.673675+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.673872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.674088+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.674252+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.674447+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.674611+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.674769+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.674980+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.675203+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.675343+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.675538+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.675759+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.676013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.676191+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.676354+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283795456 unmapped: 52322304 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.676541+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.676695+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.676872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.062763214s of 38.424762726s, submitted: 20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 49577984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd29c960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca9860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f501978780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.677002+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd2e4d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.677140+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.677349+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987362 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0d0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.677599+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0dc20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.677795+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.678007+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4ff9e8780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa9e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.678221+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.678338+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992039 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.678463+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 51789824 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.678610+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.679176+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.679386+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.680558+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.680769+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.680922+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.681708+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.682355+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.682833+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.683000+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.940246582s of 19.882999420s, submitted: 27
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.683162+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 49446912 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.683496+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 48250880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.683812+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.684022+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.684447+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.684706+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.685038+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.685281+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.685722+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.686096+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.686337+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.686533+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.686709+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.686904+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.687157+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.687417+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.687616+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.687853+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.688000+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.688121+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.688260+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.688421+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.688555+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.688715+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.689095+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.689220+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.689366+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.689503+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.689678+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111081 data_alloc: 218103808 data_used: 20406272
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.689926+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.229282379s of 29.198295593s, submitted: 69
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed67e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286769152 unmapped: 49348608 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.690119+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.690292+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.690545+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.690740+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.691063+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.691192+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.691413+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.691570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.691786+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.692020+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.692189+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.692454+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.692619+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.692815+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.692984+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.693131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.693273+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.693413+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.693619+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.693845+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.694007+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.694176+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.694348+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.694526+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.694709+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9f800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9f800 session 0x55f4ff9e9680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fe026780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca81e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff9974a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.315746307s of 25.564805984s, submitted: 37
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.695079+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.695217+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca85a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdb38c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdb38c00 session 0x55f4ffe29e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fed55860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca9680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedac1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.695335+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.695477+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.695634+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.695789+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.695958+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fe026780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.696298+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.696449+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.696621+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.696811+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.696973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.697123+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.697284+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.697410+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.697551+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.697716+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.697922+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.698078+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.698343+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 49307648 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.698490+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.177154541s of 20.592260361s, submitted: 12
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.698621+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.698803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee22c000/0x0/0x4ffc00000, data 0x18dcd3f/0x1a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.699016+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028855 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.699190+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.700132+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.700315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee227000/0x0/0x4ffc00000, data 0x18e0d3f/0x1a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.700481+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.700698+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.700985+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.701164+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.701359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.701550+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.701732+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.701902+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.702067+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.702213+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6f000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6f000 session 0x55f4ff98b680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa85a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed663c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.534084320s of 16.210994720s, submitted: 47
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 49561600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.702325+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0cf00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe29860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdd5b0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff996d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf4ba40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.702497+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.702673+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.702799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.702973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.703096+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.703224+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.703359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.703485+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd112c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.703640+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.703774+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.703949+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.704128+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.704756+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.704982+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.705175+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.705329+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.705504+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.705640+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.705772+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.705915+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.706097+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285548544 unmapped: 50569216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.206413269s of 21.587003708s, submitted: 31
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130179 data_alloc: 218103808 data_used: 17604608
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.706317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 47251456 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec7db000/0x0/0x4ffc00000, data 0x218dd3f/0x2313000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.706525+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.706666+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.706868+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.707028+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3148909 data_alloc: 218103808 data_used: 18034688
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.707200+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.707336+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289267712 unmapped: 46850048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd5be00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe017c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.707571+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289275904 unmapped: 46841856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.707824+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.707996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.451373100s of 10.043084145s, submitted: 88
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038639 data_alloc: 218103808 data_used: 14909440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.708189+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6ca3c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed05e000/0x0/0x4ffc00000, data 0x190ad3f/0x1a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.708741+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.708999+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.709138+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9e9680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fecd3e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.709252+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.709386+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf51e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.710059+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.710186+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.710498+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.710867+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.712236+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.712375+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.712594+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.712831+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.713155+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.713788+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.714112+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.714297+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.714601+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.714910+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.715238+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.715463+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.715622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.716024+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.716209+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.716365+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.716562+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.716834+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.716990+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.709793091s of 29.150884628s, submitted: 12
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.717163+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 38690816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028468 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.717389+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289628160 unmapped: 46489600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd5a3c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fef4b2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.717545+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe0161e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff997a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.717689+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.717844+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.718023+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028484 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.718180+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd403860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.718332+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019781e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa0c3c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.718490+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.718639+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.679104328s of 10.049762726s, submitted: 34
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecd34a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.718743+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031495 data_alloc: 218103808 data_used: 13090816
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.718919+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.719070+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290816000 unmapped: 45301760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.719182+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.719368+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.719525+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.722796+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.722901+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.723058+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.723155+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.723307+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.723522+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.723669+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.181274414s of 13.181275368s, submitted: 0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.723802+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 41181184 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.723993+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.724131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176615 data_alloc: 218103808 data_used: 21319680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.724293+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.724441+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3bf000/0x0/0x4ffc00000, data 0x25a7dc4/0x272f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.724613+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.724991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.725170+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b9000/0x0/0x4ffc00000, data 0x25addc4/0x2735000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.725449+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.726185+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.726802+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.727247+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.727387+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.727641+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.727826+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.727975+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877c00 session 0x55f4fdf8a1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd3210e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe024960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa01860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.366380692s of 16.474279404s, submitted: 92
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.728503+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 40181760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fcf201e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9e8d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9970e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.729139+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a5a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.729696+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.729865+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.730227+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.730810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.731034+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.731167+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd11e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.732165+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.732286+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.732399+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.732577+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.732818+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.733002+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.733263+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.733409+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.733552+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.733786+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.733899+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.734005+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.734162+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.447574615s of 20.827314377s, submitted: 10
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.734276+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.734451+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 218103808 data_used: 22757376
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.734661+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.734849+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.734977+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.735121+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.735327+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291167 data_alloc: 218103808 data_used: 22753280
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.735689+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff98b2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffdba000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.735806+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecc2960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.735961+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.736051+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.736249+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191033 data_alloc: 218103808 data_used: 21991424
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.736578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200098038s of 12.547958374s, submitted: 61
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffa0d4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fecdb4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.736764+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa0c780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.736973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.737133+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.737282+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.738023+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.738164+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.738363+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.738568+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.738808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.739007+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.739159+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.739320+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.739513+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.739657+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.739808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.739963+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.740115+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.740253+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.740403+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.740547+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.740681+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.741096+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.741261+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.741418+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.741603+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.741793+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.742017+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.742161+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.742393+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.742593+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.742799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.743002+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.743211+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.743431+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.743599+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.743751+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.669296265s of 35.925739288s, submitted: 28
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,1,2])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297992192 unmapped: 42328064 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fcf4a000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9965a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed66f00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffe28960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe0241e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.743888+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.744131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.744358+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053060 data_alloc: 218103808 data_used: 13086720
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.744578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe28780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.744732+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe294a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fe016b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.744857+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed87a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.744981+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.745161+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058801 data_alloc: 218103808 data_used: 13697024
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.745292+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.745402+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.745517+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.745665+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.745858+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.746064+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.746279+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.746434+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.746578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.746712+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.079471588s of 18.551073074s, submitted: 14
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.746855+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 45481984 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.746985+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 45449216 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.748241+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.748787+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec27d000/0x0/0x4ffc00000, data 0x26ecd2f/0x2871000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.749096+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3220233 data_alloc: 218103808 data_used: 22417408
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.749254+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.749783+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.750306+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.750784+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.751167+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227157 data_alloc: 218103808 data_used: 22556672
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.751321+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.751660+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.110246658s of 11.604579926s, submitted: 73
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.751803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.752084+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.752513+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.752663+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.753122+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.753368+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.753607+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.753973+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.754088+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.754327+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.754603+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec218000/0x0/0x4ffc00000, data 0x2751d2f/0x28d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.754739+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.151399612s of 12.149919510s, submitted: 4
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.761818+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229065 data_alloc: 218103808 data_used: 22589440
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.762028+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4fcf51c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.762315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec207000/0x0/0x4ffc00000, data 0x2762d2f/0x28e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.762542+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5ab40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdfa74a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe285a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fcf4be00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.762710+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eba29000/0x0/0x4ffc00000, data 0x2f3d90e/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 286 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ffe28b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.762843+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 286 heartbeat osd_stat(store_statfs(0x4eb1c3000/0x0/0x4ffc00000, data 0x37a490e/0x392b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421390 data_alloc: 234881024 data_used: 35491840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 39534592 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.762996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4fef4af00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.763173+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.763324+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.763545+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 39510016 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.763885+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425380 data_alloc: 234881024 data_used: 35491840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.764063+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.764372+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.764576+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.261311531s of 13.822600365s, submitted: 64
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbb680
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbbe00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd10d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffdbb0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ff0eb2c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305127424 unmapped: 42328064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.764771+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b9000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.764951+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.765134+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adadb/0x3938000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.765364+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.765573+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.765755+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdfa90e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbaf00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.766007+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd5b4a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.766132+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feefa1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.766300+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b4000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305176576 unmapped: 42278912 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.766397+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.766604+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.766756+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.766959+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.767100+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.767254+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.767435+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.767648+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.767787+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.767900+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.768836+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.982900620s of 20.121829987s, submitted: 17
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 37863424 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.768990+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.769174+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3489675 data_alloc: 251658240 data_used: 43794432
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.769259+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.769359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.769525+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0fa000/0x0/0x4ffc00000, data 0x3868aeb/0x39f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.769751+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.769970+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.770120+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.770336+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.770479+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.770597+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.770728+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.770887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.771115+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.771274+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.771523+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310722560 unmapped: 36732928 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.973714828s of 16.078048706s, submitted: 14
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.771722+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497747 data_alloc: 251658240 data_used: 44195840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.771887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.772066+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.772213+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.772489+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.772628+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3498067 data_alloc: 251658240 data_used: 44204032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.772775+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.772920+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.773113+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.773310+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.773571+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3505747 data_alloc: 251658240 data_used: 45961216
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.773720+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.774014+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdfa85a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843559265s of 12.874654770s, submitted: 3
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.774180+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311279616 unmapped: 36175872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fedac1e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9a000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ff9e9c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe026f00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fec93000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.774374+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 326180864 unmapped: 21274624 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4fec93000 session 0x55f4ffa012c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9a000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 290 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ffa0c780
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.774530+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 31670272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4feefa5a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x57f0239/0x597a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768224 data_alloc: 251658240 data_used: 52424704
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.774763+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 31563776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbae000 session 0x55f4feca94a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x57ecdd2/0x597c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4feca9a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4feca8960
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.775020+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320143360 unmapped: 31514624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.775137+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fd320d20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.775327+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ff9961e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbab40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbba400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbba400 session 0x55f4fcf4b860
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.775570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3534457 data_alloc: 251658240 data_used: 51228672
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.775735+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.775870+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff358c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.776025+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.539574623s of 10.580464363s, submitted: 128
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ff358c00 session 0x55f501979e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.776288+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.776428+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ff997e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fe0161e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328352 data_alloc: 234881024 data_used: 34619392
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.776536+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 294 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffe29a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.776753+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.776919+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.777132+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.777376+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060980 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.777543+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.777760+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.778032+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.778230+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.101095200s of 11.366854668s, submitted: 87
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.778427+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.778575+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.778712+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.778880+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.779097+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.779346+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.779521+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.779717+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.779899+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.780133+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.780345+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.780548+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.780769+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.780957+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.781137+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.781951+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.782086+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.782313+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.782446+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.782665+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.782998+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.783215+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.783411+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.783667+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.783921+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.784288+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.784469+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.784644+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.784847+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.785020+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s
                                           Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.785235+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.785466+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _finish_auth 0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.786025+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.785698+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.785860+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.786103+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.786385+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdd48c00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: get_auth_request con 0x55f4ff358c00 auth_method 0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.786565+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.786702+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.798728943s of 37.813915253s, submitted: 13
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4af00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feca81e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fdfa65a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffa3a400 session 0x55f501979c20
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.786843+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.787003+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed055000/0x0/0x4ffc00000, data 0x1906a54/0x1a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fed870e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.787178+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbba400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125660 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.787399+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 54722560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.787576+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.787708+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbba400 session 0x55f501979a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f5049ac800 session 0x55f4ffa0d0e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.788069+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ff997e00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.788283+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.788467+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.788624+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.788808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.789090+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.789271+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.789500+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.789695+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.789893+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.790316+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.790729+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.791037+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.791272+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.791474+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.791852+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.792045+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.792265+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.792575+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.792826+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.793095+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.793394+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.793570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.793790+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.794017+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.794178+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.794315+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.794506+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.794695+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.794918+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.795163+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.795426+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.795687+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.795871+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.796030+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.796241+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.796371+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.796521+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.796707+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.796836+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.796989+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.797148+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.797295+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.797433+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.797580+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.797761+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.797915+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.798085+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.798221+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.798441+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.798664+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.798848+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.799015+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.547794342s of 59.170566559s, submitted: 55
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.799247+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.799394+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 54435840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.799529+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.799698+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed36b000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.799827+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074277 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 296 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0de00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.799977+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.800236+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.800519+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.800705+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.800873+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072989 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.801013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.801188+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.801374+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.538488388s of 13.065173149s, submitted: 139
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.801559+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.801760+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.801869+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.802042+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.802230+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.802471+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.802680+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.802869+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.803059+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.803223+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.803439+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.803581+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.803797+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.803998+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.804203+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.804401+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.804587+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.804763+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.804911+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.805249+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.805518+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.805697+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.805901+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.806242+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.806419+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.806593+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.806812+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.807014+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.807175+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.807318+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.807454+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.807611+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.807849+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.808016+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.808222+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.808553+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.808819+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.809024+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.809195+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.809412+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.809591+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.809739+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.809918+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.810141+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.810279+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.810456+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.810625+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.810733+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.810882+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.811039+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.811153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.811284+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.811424+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.811575+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.811755+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.811958+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.812110+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.812262+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.812434+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.812608+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.812763+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.812910+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.813086+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.813287+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.813479+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.813690+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.813857+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.814047+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.814210+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.814389+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.814578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.814733+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.814867+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.815160+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.815473+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.815878+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.816054+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.816394+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.816630+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.816815+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.817052+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.817227+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.817377+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.817685+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.817911+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.818209+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.818371+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.818587+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.818827+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.819071+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.819206+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.819422+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.819612+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.819793+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.820015+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.820290+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.820485+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.820655+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.820853+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.821066+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.821194+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.821381+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.821527+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.821734+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.821999+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.822259+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa83c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 55164928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.822480+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.822649+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.822813+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.823030+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.823193+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.823409+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.823584+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.823758+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.802856445s of 113.915954590s, submitted: 17
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.824099+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 298 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fe0254a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.824278+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.824449+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004649 data_alloc: 218103808 data_used: 5570560
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.824607+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edb62000/0x0/0x4ffc00000, data 0x9e5c13/0xb7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.824720+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 299 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffa01a40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.824849+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.824979+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.825106+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.825234+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.825439+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.825630+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.825828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.825979+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.826196+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.826331+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.826506+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.293596268s of 16.493015289s, submitted: 58
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.826652+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee35a000/0x0/0x4ffc00000, data 0x1eacc6/0x383000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdf8ba40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.826830+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.826984+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.827199+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.827343+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.827522+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.827683+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.828098+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.829112+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.829701+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.830040+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.830747+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.831839+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.832360+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.832693+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.832866+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.833099+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.833299+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.833669+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.834015+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.834205+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.834455+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.834826+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.835058+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.835414+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.835609+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.836096+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.836232+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.836615+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.836808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.837038+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.837282+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.837457+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.837660+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.837830+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.838004+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.838163+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.838397+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.838708+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.839102+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.839330+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.839563+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.840047+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.840261+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.840719+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.840996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.841308+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.841597+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.841797+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.841997+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.842210+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.842366+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.842602+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.842762+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.843014+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.843165+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.843292+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.843435+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.843588+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.843757+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.843877+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.844041+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288022528 unmapped: 63635456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.844226+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.844461+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.844628+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.844837+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.845089+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.845329+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.845501+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.845734+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.846016+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.846222+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.846414+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.846793+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.847025+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.847334+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.847722+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.848049+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.848288+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.848502+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.848683+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.848991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.849213+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.849491+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.849687+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.850072+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.850317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.850535+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.850725+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.851014+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.851130+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.851402+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.851580+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.851765+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.852025+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.852234+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.852402+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.852623+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.852912+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.853286+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.853542+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.853704+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.854011+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.854210+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.854371+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.854640+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 105.418647766s of 105.483337402s, submitted: 31
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.854852+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 303 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa8b40
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288104448 unmapped: 63553536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.855025+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005561 data_alloc: 218103808 data_used: 147456
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.855188+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 ms_handle_reset con 0x55f5049ac800 session 0x55f4ff996000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.855380+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.855643+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.855808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.855990+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.856142+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.856317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.856587+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.856795+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.856995+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.857186+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.857349+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.857560+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.858551+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.858814+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.858977+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.859226+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.859439+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.859592+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.859771+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.859946+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.860097+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.860260+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.860426+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.860610+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.860820+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.861013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.861251+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5039db400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.854793549s of 29.929613113s, submitted: 9
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.861476+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 305 ms_handle_reset con 0x55f5039db400 session 0x55f4fcf4a3c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b41/0xb91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.861662+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.861828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.862019+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.862234+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.862397+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.862536+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.862679+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.862898+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.863162+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.863313+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.863483+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.863653+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288210944 unmapped: 63447040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.864467+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.864676+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.864851+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.864975+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.865141+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.865329+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.865528+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.865696+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.865955+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.866201+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.866371+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.866569+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.866773+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.867005+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.867194+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.867428+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.867615+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.867813+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.868044+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.868203+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.868387+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.868562+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.868778+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.868991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288260096 unmapped: 63397888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.869142+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.869295+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.869535+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.869741+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.869894+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.870038+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.870180+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.870354+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.870543+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.870815+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.871032+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.871298+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.871636+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.871889+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.872119+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.872332+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.872493+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 63356928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.872702+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.872848+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.873006+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.873141+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.873275+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.873497+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.873637+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 63340544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.873839+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.873991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.874231+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.874408+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.875012+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.875207+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.875565+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.875842+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.876104+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.876287+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.876574+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.876805+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.877097+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.877285+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.877672+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.878075+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288350208 unmapped: 63307776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.878257+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.878492+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.878755+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.878994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.879263+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.879585+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.879821+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.879969+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.880153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.880330+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.880485+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.880656+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.880914+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.881222+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.881413+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.881649+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.881862+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.882116+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288407552 unmapped: 63250432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.882317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.882510+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.882693+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.882910+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.883201+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.883383+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.883534+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288423936 unmapped: 63234048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.883729+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.884019+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.884216+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.884379+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.884589+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.884831+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.885134+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.885375+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.885509+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.885659+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.885818+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.885974+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.886130+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.886335+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.886486+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.886678+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288456704 unmapped: 63201280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.886898+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.889703+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.890607+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.890857+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.891061+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.891191+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.891359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.891876+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.892080+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.892284+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.892496+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.892681+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.892846+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.893035+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.893170+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.893322+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288497664 unmapped: 63160320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.893468+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.893659+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.893882+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.894027+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.894173+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.894377+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.894519+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.894686+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.894868+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.895018+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.895187+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.895349+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.895524+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.895685+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.895902+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.896212+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.896389+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288530432 unmapped: 63127552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.896577+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.896739+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.896972+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.897207+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.897387+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.897622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.897862+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.898074+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.898344+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.898560+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.898735+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.899262+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.900226+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.900669+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.900880+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.901335+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.902077+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.902310+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.902667+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.902847+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.903020+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.903317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.903622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.903774+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.904024+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.904198+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.904450+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.904661+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.904917+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.905223+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.905516+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.905713+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.905951+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.906184+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.906469+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.906742+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.906946+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.907176+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.907480+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.907669+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.907888+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.908061+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.908288+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.908401+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.908578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.908714+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.908893+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.909018+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.909184+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.909318+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.909442+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.909645+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.909871+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.909991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.910123+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.910291+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288661504 unmapped: 62996480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.910446+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288669696 unmapped: 62988288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.910605+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.910803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.911004+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.911184+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.911351+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.911516+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288686080 unmapped: 62971904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.911706+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.911872+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.912041+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.912226+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.912403+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.912589+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.912724+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.913041+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288702464 unmapped: 62955520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.913206+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.913384+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.913564+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.913725+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.913880+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.914013+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.914228+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.914524+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.914828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.915051+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.915348+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.915556+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.915738+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.915967+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.916232+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.916457+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.916685+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.916974+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.917238+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.917434+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.917622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.917808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.918026+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.918184+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.918402+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.918578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.918808+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.919030+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.919194+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.919374+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.919561+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.919695+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.919860+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.920084+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.920259+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.920406+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.920584+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.920774+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.920904+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.921000+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.921149+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.921286+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.921399+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.921555+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.921691+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.921792+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.922055+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.922247+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.922381+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.922592+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.922744+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.922922+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.923160+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.923363+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.923523+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.923852+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.924031+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.924206+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.924351+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.924501+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.924680+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.924829+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.925007+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288833536 unmapped: 62824448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.925153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.925326+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.925502+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.925640+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.925799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.926032+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.926204+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.926344+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.926497+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 62791680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.926613+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.926768+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.926994+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.927153+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.927336+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.927713+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.928017+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.928385+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.928517+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.928709+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.928868+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.929100+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.929257+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.929434+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.929597+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.929797+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.930041+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.930236+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.930392+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.930584+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.930761+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.930919+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.931136+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288915456 unmapped: 62742528 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.931291+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.931441+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.931613+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.931734+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 320.509613037s of 321.359313965s, submitted: 63
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.931901+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288980992 unmapped: 62676992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.932123+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,1])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 65208320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.932431+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.932597+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.933120+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.933394+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.933579+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.933836+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.934070+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.934242+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.934419+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.934532+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.934684+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.934805+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.934949+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.935089+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.935228+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.935423+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.935643+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.935828+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.936031+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.936196+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.936365+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.936628+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.936787+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.936974+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.937121+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.937258+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.937448+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.937760+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.937887+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.938021+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.938195+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.938419+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.938572+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.938730+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.938905+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.939165+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.939412+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.939565+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.939706+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.939857+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.940012+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.940167+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.940321+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.940518+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.940721+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.940957+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.941182+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.941363+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.941546+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.941699+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.941838+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.942010+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.942151+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.942299+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.942443+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.942623+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.942817+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.943015+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.943214+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.943373+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.943622+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.943842+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.944080+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.944249+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.944434+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.944570+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.944739+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.944863+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.944996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.945146+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.945251+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.945359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.945473+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.945626+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.945772+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 65093632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.945890+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.946072+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.946205+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.946340+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.946491+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.946626+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.946810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.946984+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.947135+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.947279+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.947416+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.947634+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.947773+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.947910+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.948091+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.948227+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.948379+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.948517+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.948651+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.948760+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.948897+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 97.341293335s of 97.832099915s, submitted: 90
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 307 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fcf210e0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.949365+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.949558+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.949730+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.949895+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 308 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed552c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965964 data_alloc: 218103808 data_used: 172032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.950131+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.950374+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee346000/0x0/0x4ffc00000, data 0x1f6cf0/0x397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.950555+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.950713+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.951019+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 66502656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3055603 data_alloc: 218103808 data_used: 172032
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.951172+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.332621574s of 10.034673691s, submitted: 57
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d4000/0x0/0x4ffc00000, data 0xe6876f/0x100a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,2])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 310 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fed545a0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.951383+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.951653+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.951799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.952069+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059713 data_alloc: 218103808 data_used: 184320
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.952333+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d0000/0x0/0x4ffc00000, data 0xe6a308/0x100d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.952521+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.952708+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.952899+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.953075+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.953259+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.953460+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.953651+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.953910+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.954136+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.954282+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.954406+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.954523+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.954692+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.954860+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285212672 unmapped: 66445312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.954989+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.955168+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.955310+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.955561+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.956478+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.956729+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.957558+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.958261+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.959902+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.961995+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.962152+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.962588+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.963022+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.963243+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.964586+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.964751+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.964917+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.965121+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.965268+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.965413+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.965546+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.965880+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.966177+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.966317+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.966458+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.966613+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.966751+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.966971+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.967261+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.967418+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.967573+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.967733+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.395416260s of 50.752197266s, submitted: 15
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.967875+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.968038+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed6ca000/0x0/0x4ffc00000, data 0xe6d93c/0x1013000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.968170+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 312 ms_handle_reset con 0x55f5049ac800 session 0x55f4fd3212c0
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.968359+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.968487+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.968639+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.968832+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.969067+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.969211+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.969407+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.969553+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898818970s of 11.465106010s, submitted: 39
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.969642+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.969747+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.969898+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.970108+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.970251+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.970394+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.970531+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.970688+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.970882+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.971082+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 66330624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.971248+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.971414+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.971578+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.971755+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.971895+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.972057+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.972307+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.972492+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.972674+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.972819+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.972983+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.973142+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.973338+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.973564+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.973834+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.974037+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.974262+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285368320 unmapped: 66289664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.974481+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.974738+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.974991+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.975165+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.975303+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.975498+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.975677+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.975861+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.976020+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.976155+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.976287+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.976488+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.976652+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.976776+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.976893+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.977100+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.977230+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.977386+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.977583+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.977716+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.977865+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.978506+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.978656+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.978799+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.978965+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.979424+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.979845+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.979996+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.980156+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.980282+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.980462+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285425664 unmapped: 66232320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.980650+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.980810+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.981022+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.981174+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.981307+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.981423+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.981571+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.981690+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.981803+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285515776 unmapped: 66142208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.981947+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:29 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:29 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284499968 unmapped: 67158016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.982099+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:17:29 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:17:29 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.982224+0000)
Oct 07 15:17:29 compute-0 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:17:29 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 07 15:17:29 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547585651' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:17:29 compute-0 nova_compute[259550]: 2025-10-07 15:17:29.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:29 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 07 15:17:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355788894' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23143 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2873225065' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: pgmap v3486: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1547585651' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1355788894' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 07 15:17:30 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002352828' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:17:30 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23149 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 07 15:17:31 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910642032' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 07 15:17:31 compute-0 nova_compute[259550]: 2025-10-07 15:17:31.108 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:31 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23157 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:17:31 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:31.704+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:17:31 compute-0 ceph-mon[74295]: from='client.23143 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mon[74295]: from='client.23147 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4002352828' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mon[74295]: from='client.23149 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:31 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2910642032' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 07 15:17:31 compute-0 nova_compute[259550]: 2025-10-07 15:17:31.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:32 compute-0 podman[453703]: 2025-10-07 15:17:32.102232204 +0000 UTC m=+0.078820645 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 07 15:17:32 compute-0 podman[453702]: 2025-10-07 15:17:32.130199239 +0000 UTC m=+0.107000866 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466961050' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724404335' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059885865' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800186474' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:17:32 compute-0 nova_compute[259550]: 2025-10-07 15:17:32.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 07 15:17:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378386877' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: pgmap v3487: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.23157 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/466961050' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1724404335' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2059885865' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2800186474' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:17:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 07 15:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467493459' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:17:33 compute-0 crontab[453903]: (root) LIST (root)
Oct 07 15:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 07 15:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236161573' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 07 15:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621770134' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 07 15:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600964625' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970781 data_alloc: 251658240 data_used: 42672128
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:26.257978+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 47415296 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:27.258141+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6c5d000/0x0/0x4ffc00000, data 0x4d90846/0x4f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [1])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 45531136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:28.258312+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:29.258667+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:30.258765+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4032595 data_alloc: 251658240 data_used: 43528192
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6c48000/0x0/0x4ffc00000, data 0x4da5846/0x4f36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:31.258901+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:32.259036+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074107170s of 12.390025139s, submitted: 73
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:33.259236+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 43597824 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:34.259400+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 43474944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:35.259516+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4155015 data_alloc: 251658240 data_used: 45051904
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:36.259635+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:37.259786+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:38.259956+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:39.260091+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:40.260311+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355844096 unmapped: 43638784 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:41.260506+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355844096 unmapped: 43638784 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:42.260692+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:43.260899+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:44.261059+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:45.261223+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:46.261362+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:47.261522+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:48.261657+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:49.261882+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:50.261998+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:51.262112+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933455680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.250329971s of 18.907407761s, submitted: 97
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932ebb000 session 0x556932877e00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:52.262249+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c21e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:53.262447+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5c4c000/0x0/0x4ffc00000, data 0x47f2813/0x4981000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:54.262647+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:55.262807+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919666 data_alloc: 234881024 data_used: 31633408
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:56.262971+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:57.263122+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:58.263247+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556934899a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x5569341e4780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x55693345ef00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:59.263378+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6693000/0x0/0x4ffc00000, data 0x3dac813/0x3f3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:00.263523+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3803339 data_alloc: 234881024 data_used: 27447296
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:01.263643+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:02.263846+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.559225082s of 10.806392670s, submitted: 68
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x5569344d1860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569361ac960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e66b2000/0x0/0x4ffc00000, data 0x3d8e803/0x3f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:03.264029+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932877860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:04.264253+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:05.264389+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488754 data_alloc: 218103808 data_used: 11800576
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:06.264597+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x55693442e5a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933564000 session 0x556934899c20
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:07.264760+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932eba800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333586432 unmapped: 65896448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556932eff4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:08.264913+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:09.265181+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:10.266021+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:11.266707+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:12.266895+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:13.267338+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:14.267609+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:15.267747+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:16.268046+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:17.268504+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:18.268896+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:19.269275+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:20.269455+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:21.269606+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:22.269758+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:23.270018+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:24.270325+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:25.270496+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:26.270671+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:27.270899+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:28.271134+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:29.271372+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:30.271541+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:31.271808+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:32.271995+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:33.272207+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556935562780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569344cc3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x55693501a3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932ebb000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932ebb000 session 0x55693442e3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932eba800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.720289230s of 31.468746185s, submitted: 79
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556932effa40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:34.272373+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569361ac000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569344ccf00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569361adc20
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344cc780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:35.272565+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8306000/0x0/0x4ffc00000, data 0x213c77e/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424970 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:36.272733+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:37.272910+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:38.273054+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8306000/0x0/0x4ffc00000, data 0x213c77e/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:39.273289+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932eba800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556934124780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932876f00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:40.273444+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424970 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:41.273618+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x55693616b860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:42.273759+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x55693616a780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333717504 unmapped: 65765376 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:43.273901+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333717504 unmapped: 65765376 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:44.274053+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8305000/0x0/0x4ffc00000, data 0x213c78e/0x22c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 65560576 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:45.274217+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.126608849s of 11.262543678s, submitted: 19
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556932efc3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556934125a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634183 data_alloc: 234881024 data_used: 19550208
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:46.274372+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:47.274508+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7282000/0x0/0x4ffc00000, data 0x2d9e7f0/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15a4f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:48.274652+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:49.274786+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7282000/0x0/0x4ffc00000, data 0x2d9e7f0/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15a4f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:50.274919+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634183 data_alloc: 234881024 data_used: 19550208
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:51.275082+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:52.275207+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693560b400 session 0x556933dcb0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdbc00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:53.275380+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x55693560d400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:33 compute-0 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: get_auth_request con 0x556935c56c00 auth_method 0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f000 session 0x5569362ec3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c59000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360b400 session 0x5569334554a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932ea8960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:54.275510+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932eff0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 60276736 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:55.275631+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8014000/0x0/0x4ffc00000, data 0x34667f0/0x35f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.866021156s of 10.196639061s, submitted: 89
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 60243968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569328772c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701641 data_alloc: 234881024 data_used: 20480000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4d800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:56.275745+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c4d800 session 0x5569341e5a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 60243968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:57.275886+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 60178432 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:58.275978+0000)
Oct 07 15:17:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:59.276102+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:00.276238+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784232819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3795319 data_alloc: 251658240 data_used: 32694272
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:01.276424+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:02.276554+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:03.276710+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:04.276825+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:05.276984+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3795639 data_alloc: 251658240 data_used: 32702464
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:06.277089+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:07.277254+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:08.277404+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.916577339s of 12.971271515s, submitted: 22
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 49848320 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:09.277533+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 48119808 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:10.277664+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7300000/0x0/0x4ffc00000, data 0x4179800/0x4308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 48062464 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3908103 data_alloc: 251658240 data_used: 33550336
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x556935cb25a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569341e4000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:11.277827+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569362ed0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:12.278036+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:13.278248+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:14.278448+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:15.278647+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635134 data_alloc: 234881024 data_used: 18665472
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:16.278781+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:17.278965+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:18.279118+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:19.279319+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:20.279456+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635150 data_alloc: 234881024 data_used: 18665472
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:21.279610+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:22.279799+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:23.279962+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.648083687s of 15.073346138s, submitted: 187
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556932efe5a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:24.280090+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad0800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad0800 session 0x556935562780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569353d3a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556933585860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:25.280259+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:26.280421+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690379 data_alloc: 234881024 data_used: 18669568
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:27.280569+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:28.280704+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569333570e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:29.280924+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693f800800 session 0x5569335832c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:30.281130+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569355641e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x556933461e00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:31.281317+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692378 data_alloc: 234881024 data_used: 18673664
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:32.281460+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:33.281639+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:34.281836+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:35.282006+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:36.282124+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3729498 data_alloc: 234881024 data_used: 23805952
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:37.282274+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:38.282507+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:39.282662+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:40.282807+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.954650879s of 17.537221909s, submitted: 42
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:41.283009+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3729982 data_alloc: 234881024 data_used: 23805952
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8269000/0x0/0x4ffc00000, data 0x32167f0/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:42.283172+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 50585600 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:43.283411+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 47259648 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:44.283557+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:45.283705+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:46.283839+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818156 data_alloc: 234881024 data_used: 24391680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:47.283989+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e774c000/0x0/0x4ffc00000, data 0x3d347f0/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:48.284124+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:49.284271+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:50.284430+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:51.284569+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817408 data_alloc: 234881024 data_used: 24391680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:52.284680+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e772b000/0x0/0x4ffc00000, data 0x3d557f0/0x3ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.053106308s of 12.065863609s, submitted: 123
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x55693442f4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x55693442fa40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:53.284893+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693f800800 session 0x556933357a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:54.285009+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:55.285190+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:56.285356+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638949 data_alloc: 234881024 data_used: 18124800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 44851200 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:57.285528+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88c3000/0x0/0x4ffc00000, data 0x2bbe7e0/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 44851200 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:58.285686+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933c4fa40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556933582000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:59.285891+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:00.286070+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:01.286225+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:02.286397+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:03.286568+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:04.286709+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:05.286877+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:06.287057+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:07.287236+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:08.287443+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:09.287617+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:10.287774+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:11.287912+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:12.288119+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:13.288298+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:14.288470+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:15.288690+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:16.288922+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:17.289206+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:18.289408+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:19.289683+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:20.289903+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:21.290076+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:22.290207+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:23.290400+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:24.290565+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:25.290772+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:26.291012+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:27.291198+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:28.291423+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:29.291641+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:30.292009+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:31.292200+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569354fda40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556938d23000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x5569354fda40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x55693616b680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933582000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.474052429s of 39.106876373s, submitted: 103
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:32.292358+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 47161344 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:33.292561+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:34.292710+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556933357a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569355641e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bad3400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569335832c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569333570e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569353d3a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:35.292891+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:36.293016+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420480 data_alloc: 218103808 data_used: 5025792
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343834624 unmapped: 55648256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:37.293203+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 55640064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:38.293324+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935afc400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 55640064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:39.293464+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:40.293575+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:41.293715+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475021 data_alloc: 218103808 data_used: 12517376
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:42.293855+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:43.294035+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:44.294186+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:45.294412+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:46.294569+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475021 data_alloc: 218103808 data_used: 12517376
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:47.294712+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:48.294871+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:49.295078+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:50.295263+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824821472s of 18.289262772s, submitted: 18
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:51.295431+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 51896320 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563925 data_alloc: 218103808 data_used: 13352960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:52.295580+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:53.295803+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:54.295980+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:55.296129+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:56.296320+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563777 data_alloc: 218103808 data_used: 13570048
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:57.296534+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:58.296673+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:59.296876+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:00.297050+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:01.297180+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564113 data_alloc: 218103808 data_used: 13578240
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:02.297344+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:03.297547+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569341e5e00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935604000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935604000 session 0x5569353d2f00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:04.297694+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932b5ed20
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.420858383s of 13.965919495s, submitted: 73
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334552c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556935563a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933d8a800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x5569344ce3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933d8a800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x556932ea94a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932876960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:05.297854+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:06.298001+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717224 data_alloc: 218103808 data_used: 13578240
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:07.298173+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:08.298338+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:09.298510+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:10.298841+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:11.299204+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717224 data_alloc: 218103808 data_used: 13578240
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:12.299354+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932ea92c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:13.299533+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341250e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:14.299739+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556932876f00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:15.299894+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.793764114s of 11.047670364s, submitted: 33
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556933dcb0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:16.300076+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 63643648 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719536 data_alloc: 218103808 data_used: 13586432
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:17.300213+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 63643648 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:18.300327+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 60153856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:19.300491+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:20.300676+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:21.300822+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859056 data_alloc: 251658240 data_used: 33144832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:22.301033+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:23.301259+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:24.301447+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:25.301646+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:26.301793+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859056 data_alloc: 251658240 data_used: 33144832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:27.301991+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:28.302153+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.438287735s of 12.799725533s, submitted: 2
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 56451072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e62b7000/0x0/0x4ffc00000, data 0x402a78e/0x41b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:29.302281+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357195776 unmapped: 53837824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:30.302443+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:31.302639+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972574 data_alloc: 251658240 data_used: 35344384
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:32.302815+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:33.303031+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5f05000/0x0/0x4ffc00000, data 0x43db78e/0x4568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:34.303186+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:35.303361+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:36.303514+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972574 data_alloc: 251658240 data_used: 35344384
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:37.303646+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:38.303801+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:39.303962+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5f05000/0x0/0x4ffc00000, data 0x43db78e/0x4568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:40.304140+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357335040 unmapped: 53698560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915142059s of 12.218699455s, submitted: 96
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932876b40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x55693488d4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:41.304293+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578692 data_alloc: 218103808 data_used: 13639680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:42.304438+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:43.304626+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7e45000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569341e4000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556934124960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:44.304761+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 64192512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501b2c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:45.304984+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:46.305194+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:47.305393+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:48.305549+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:49.305733+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:50.305915+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:51.306107+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:52.306284+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:53.306490+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:54.306651+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:55.306840+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:56.306982+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:57.307153+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:58.317751+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:59.318019+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:00.318215+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:01.318421+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:02.318656+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:03.318873+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:04.319076+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:05.319262+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:06.319381+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:07.319539+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:08.319708+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:09.319872+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:10.320015+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:11.320181+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:12.320386+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:13.320567+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:14.320738+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:15.320968+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:16.321132+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:17.321347+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:18.321494+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:19.321660+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:20.321852+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:21.322070+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:22.322275+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 64151552 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:23.322504+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 64151552 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.306056976s of 43.576793671s, submitted: 74
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:24.322640+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569341e54a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:25.322800+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:26.323010+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483670 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:27.323225+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e82b2000/0x0/0x4ffc00000, data 0x203176e/0x21bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:28.323418+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:29.323614+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361ac5a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:30.323780+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569341c3a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501ab40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:31.323909+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932eded20
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487555 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af8400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:32.324027+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 63807488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:33.324164+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:34.324300+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:35.324461+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:36.324657+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583367 data_alloc: 234881024 data_used: 18440192
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:37.324818+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:38.324987+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:39.325126+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:40.325248+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:41.325371+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583367 data_alloc: 234881024 data_used: 18440192
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:42.325498+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:43.325637+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.140216827s of 19.328117371s, submitted: 24
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 59318272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e75b3000/0x0/0x4ffc00000, data 0x2d3076e/0x2ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:44.325771+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 59056128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:45.325899+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:46.326031+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711805 data_alloc: 234881024 data_used: 20373504
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:47.326231+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:48.326419+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:49.326577+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:50.326815+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:51.326988+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:52.327116+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:53.327355+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:54.327550+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:55.327728+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:56.327882+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:57.328033+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:58.328183+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:59.328329+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:00.328493+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:01.328624+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:02.328780+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:03.328982+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:04.329152+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:05.329338+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:06.329527+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:07.329693+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:08.329838+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:09.330019+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:10.330196+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:11.330322+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:12.330586+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:13.330813+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:14.331003+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:15.331340+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.933893204s of 32.221965790s, submitted: 98
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 59219968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:16.331502+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 59211776 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703549 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:17.331657+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7529000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 59211776 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569353d3e00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933d8a800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x55693416d0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935ac6c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935ac6c00 session 0x5569344cd0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933356000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:18.332433+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569344cd860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933d8a800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x55693416c3c0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354605a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937126000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556937126000 session 0x556934894780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937126000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556937126000 session 0x5569344cef00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:19.332566+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:20.332767+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:21.332978+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755983 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:22.333113+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:23.333322+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8b000/0x0/0x4ffc00000, data 0x33567e0/0x34e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:24.333634+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8b000/0x0/0x4ffc00000, data 0x33567e0/0x34e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501be00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693360f800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556934898780
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:25.333828+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:26.334010+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755983 data_alloc: 234881024 data_used: 20381696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933d8a800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x556933c4f0e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:27.334133+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.253780365s of 11.719895363s, submitted: 43
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x55693345e1e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 59195392 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:28.334284+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 59187200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:29.334401+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:30.334522+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:31.334666+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792094 data_alloc: 234881024 data_used: 25141248
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:32.334812+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:33.334996+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:34.335132+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:35.335280+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:36.335422+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792094 data_alloc: 234881024 data_used: 25141248
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:37.335618+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:38.335762+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:39.335900+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.376653671s of 12.407247543s, submitted: 8
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 57409536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:40.336052+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 57597952 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:41.336217+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67ba000/0x0/0x4ffc00000, data 0x3b1f7f0/0x3cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 57597952 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3856984 data_alloc: 234881024 data_used: 25473024
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:42.336366+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:43.336537+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:44.336676+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:45.336821+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:46.336991+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864480 data_alloc: 234881024 data_used: 25468928
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:47.337146+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4220 writes, 19K keys, 4220 commit groups, 1.0 writes per commit group, ingest: 23.74 MB, 0.04 MB/s
                                           Interval WAL: 4220 writes, 1544 syncs, 2.73 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:48.337338+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:49.337526+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:50.337680+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:51.337827+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864480 data_alloc: 234881024 data_used: 25468928
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:52.338017+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:53.338208+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:54.338395+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:55.338602+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:56.338794+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865120 data_alloc: 234881024 data_used: 25530368
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:57.339038+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:58.339211+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 56279040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:59.339410+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.104679108s of 19.538419724s, submitted: 86
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569353d2f00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 56262656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:00.339665+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 56246272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:01.339819+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569335854a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:02.340082+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:03.340252+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:04.340439+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:05.340594+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:06.340787+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:07.340961+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:08.341114+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:09.341272+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:10.341476+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:11.341689+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:12.341855+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:13.342063+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:14.342415+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:15.342556+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:16.342684+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:17.342846+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.052337646s of 18.575210571s, submitted: 62
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556935cb34a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:18.343027+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:19.343238+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:20.343402+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 55173120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:21.343590+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 63569920 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410491 data_alloc: 218103808 data_used: 5132288
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:22.343757+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9081000/0x0/0x4ffc00000, data 0x126276e/0x13ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569334545a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:23.343960+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:24.344102+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:25.344282+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:26.344428+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:27.344644+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:28.344832+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:29.344973+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:30.345114+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:31.345294+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:32.345498+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:33.345721+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:34.345912+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:35.346428+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:36.346713+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:37.346910+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:38.347354+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:39.347552+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:40.348004+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:41.348257+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:42.348505+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:43.349380+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:44.350008+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:45.350572+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ede00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569354f50e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556934125860
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935af6000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af6000 session 0x5569354f4b40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:46.350692+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.082035065s of 28.449712753s, submitted: 38
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934898960
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933357a40
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556932edef00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x55693616b680
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693ff4a000
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693ff4a000 session 0x55693442f4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:47.350868+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544909 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:48.351183+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:49.351466+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:50.351718+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:51.351881+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:52.352201+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544909 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933454f00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:53.352547+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344cd4a0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:54.352689+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569333e1c20
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556932ea90e0
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:55.352824+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:56.353017+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:57.353242+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664740 data_alloc: 234881024 data_used: 21495808
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:58.353393+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941271782s of 12.659450531s, submitted: 59
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [0,1])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:59.353581+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:00.353770+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:01.353917+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:02.360392+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664516 data_alloc: 234881024 data_used: 21499904
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:03.360591+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:04.360753+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:05.360886+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:06.361026+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:07.361154+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:33 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666280 data_alloc: 234881024 data_used: 21540864
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:08.361303+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 56147968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.728628635s of 10.006806374s, submitted: 121
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.361423+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 56336384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:33 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.361655+0000)
Oct 07 15:17:33 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 55255040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:33 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7493000/0x0/0x4ffc00000, data 0x2e4b813/0x2fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.361792+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355958784 unmapped: 55074816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.362037+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768784 data_alloc: 234881024 data_used: 23756800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.362316+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.362452+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.362623+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.362812+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.363001+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769104 data_alloc: 234881024 data_used: 23764992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.363173+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.363271+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.363422+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.363640+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462882042s of 13.117553711s, submitted: 114
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.363787+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833585 data_alloc: 234881024 data_used: 23764992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569354fdc20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x55693489f4a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569344cdc20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.363973+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569354fc780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556933585680
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.364145+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.364286+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.364522+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.364718+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821841 data_alloc: 234881024 data_used: 23764992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.364874+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354f52c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.365019+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569334545a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.365210+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.365484+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933583a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334543c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.365638+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.283301353s of 10.441823006s, submitted: 21
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826034 data_alloc: 234881024 data_used: 23764992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.365838+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.365978+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 51830784 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.366242+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.366376+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.366744+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875606 data_alloc: 234881024 data_used: 29491200
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.366962+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.367105+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.367248+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.367391+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.367549+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3876746 data_alloc: 234881024 data_used: 29483008
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.367752+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b0000/0x0/0x4ffc00000, data 0x351e813/0x36ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.367906+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.968862534s of 12.149744034s, submitted: 5
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.368083+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 51118080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.368265+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 50880512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.368413+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958876 data_alloc: 234881024 data_used: 30003200
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.368565+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.368758+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.369023+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.369217+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.369380+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954340 data_alloc: 234881024 data_used: 30003200
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.369576+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.369763+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693489e960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569355652c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361299968 unmapped: 49733632 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.944513321s of 10.002432823s, submitted: 77
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60f6000/0x0/0x4ffc00000, data 0x3dd9813/0x3f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.369893+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361308160 unmapped: 49725440 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556935565c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.370038+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.370173+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783798 data_alloc: 234881024 data_used: 22560768
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.370317+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.370464+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.370629+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706b000/0x0/0x4ffc00000, data 0x2e64813/0x2ff3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.370825+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.371034+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x55693345f860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781038 data_alloc: 234881024 data_used: 22560768
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.371213+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 48627712 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.371378+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 55648256 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576909542s of 10.082017899s, submitted: 108
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934894780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.371543+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.371726+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.371913+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.372154+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.372410+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.372603+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.372791+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.373004+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.373173+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.373336+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.373548+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.373684+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.373851+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.373995+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.374117+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.374282+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.374455+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.374621+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.374820+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.375020+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.375154+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.375277+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.375402+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.375676+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.375842+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.376078+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.376259+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.376398+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.376658+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693416c3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488da40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569333e12c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933461c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.773714066s of 28.863956451s, submitted: 9
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,0,1,7,6])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.376892+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933454f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569341243c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932b5ed20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934898780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569333e1c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.377105+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.377288+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.377497+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932ede960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464974 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bacec00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bacec00 session 0x55693345e5a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.377727+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a74000/0x0/0x4ffc00000, data 0x145d7e0/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.377856+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.378033+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934894d20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c2960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.378193+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.378331+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.378501+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.378699+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.378924+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.379251+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.379642+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.379857+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 55566336 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.380044+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.380428+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.380775+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.381054+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.381340+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.313346863s of 19.983018875s, submitted: 34
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.381532+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 53174272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.381750+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e805b000/0x0/0x4ffc00000, data 0x1e6f7f0/0x1ffd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 53084160 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.381959+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.382081+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565560 data_alloc: 218103808 data_used: 6414336
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.382222+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.382360+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.382719+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.383023+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.383183+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.383332+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.383608+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.383759+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.383969+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.384129+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445440292s of 14.635492325s, submitted: 114
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.384284+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.384409+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.384568+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.384715+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.384881+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.385164+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.385303+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.385515+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.385668+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.385820+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.385989+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.386149+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.386416+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.386687+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.386839+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.461366653s of 14.488227844s, submitted: 1
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932eff2c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348985a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558336 data_alloc: 218103808 data_used: 6418432
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c53c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.387024+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c53c00 session 0x556935cb34a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.387175+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.387327+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.387508+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.387773+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.388005+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.388177+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.388314+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.388518+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.388658+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.389067+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.389282+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.389424+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.389608+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.389823+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.390089+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.390308+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.390519+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.390718+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.390863+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.391090+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.391223+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.391393+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.391525+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.391636+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.391771+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.447141647s of 25.612014771s, submitted: 32
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 40411136 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.391986+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934125a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344d05a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933455c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933c4f0e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935e52000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935e52000 session 0x5569333e0d20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.392418+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.392628+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932ea92c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.392804+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556935cb34a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533416 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.392997+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569348985a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.393263+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932eff2c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.393443+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b762800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.393598+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.393741+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616138 data_alloc: 234881024 data_used: 16084992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.393903+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.394052+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.394302+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.394454+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.394671+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.395017+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.395172+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.395356+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.395496+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.395652+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.395904+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.948329926s of 20.243396759s, submitted: 34
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 46022656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.396158+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 42541056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.396359+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.396581+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7583000/0x0/0x4ffc00000, data 0x294f791/0x2adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.396739+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721858 data_alloc: 234881024 data_used: 18522112
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.396994+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.397138+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e757d000/0x0/0x4ffc00000, data 0x2955791/0x2ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.397303+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.397431+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.397642+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.397838+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.397994+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.398145+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.398321+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.398417+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.398582+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.398738+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 41885696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.398887+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.108821869s of 16.882175446s, submitted: 108
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933c4e000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.399033+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.399171+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.399325+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.399454+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.399569+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.399738+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569341c30e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693616a1e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.399875+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932b5fa40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.400101+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341e4000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.400262+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.400403+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369942528 unmapped: 41091072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.400529+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.400660+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.401107+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.401428+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.401621+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.401761+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.401960+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.402125+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.402273+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.362392426s of 19.756223679s, submitted: 9
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.402412+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.402606+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 40165376 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.402737+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372105216 unmapped: 38928384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.402857+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.403022+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.403188+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.403336+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.403497+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.403699+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.403885+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693345ef00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68be000/0x0/0x4ffc00000, data 0x3614791/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371220480 unmapped: 39813120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.404041+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.185196877s of 10.101745605s, submitted: 92
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.404373+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.404756+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693616be00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.404899+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737914 data_alloc: 234881024 data_used: 18817024
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.405023+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.405148+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569333e1c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556934808960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.405401+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.405542+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.406000+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.406215+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.406491+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.406696+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.406856+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.407154+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.407307+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.407528+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.407773+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.407992+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.408197+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.408338+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.408618+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.408788+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.409005+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.409209+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.409383+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.409532+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.409800+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.459395+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.459575+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.459724+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569334550e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efd4a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569348990e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693488d0e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.459863+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.007173538s of 30.416368484s, submitted: 66
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.459985+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.460146+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f5a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b762800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556933454f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d3860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.460274+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510209 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb25a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.461018+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.461146+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.461278+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.461434+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693488cb40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.461650+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509809 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361adc20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.461801+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488c3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.461956+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.333779335s of 10.194737434s, submitted: 17
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569356eb4a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.462147+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.462282+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.462465+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511570 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.462613+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.462713+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.462797+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.462949+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.463109+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.463267+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.463401+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.463528+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.463642+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.463777+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.512318611s of 13.137044907s, submitted: 4
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.463911+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.464085+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.464241+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.464416+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.464623+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572334 data_alloc: 218103808 data_used: 10981376
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.464738+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.464981+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8525000/0x0/0x4ffc00000, data 0x19ae76e/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.465102+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.465265+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.465403+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.465591+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.466013+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.466278+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.466826+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.467122+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.467399+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693345eb40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569335854a0
Oct 07 15:17:34 compute-0 nova_compute[259550]: 2025-10-07 15:17:34.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x556933583680
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.467563+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569344cc3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.535245895s of 16.586713791s, submitted: 25
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1,0,6])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.467795+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb32c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932ea9860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 58228736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x556934895860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f4780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.467918+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58220544 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.468158+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685997 data_alloc: 218103808 data_used: 11112448
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.468331+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.468458+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.468593+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efe000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.469092+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933583e00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.469293+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569353d23c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341c25a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686790 data_alloc: 218103808 data_used: 11112448
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.469446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.469576+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 58204160 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.469747+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.469878+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.470035+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.470416+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.470617+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.470836+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.471029+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.471209+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.471424+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.471591+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.612783432s of 20.685228348s, submitted: 31
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.471724+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 52690944 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.471863+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 53075968 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,2,0,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.472016+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3846534 data_alloc: 234881024 data_used: 25468928
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.472172+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.472319+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70d5000/0x0/0x4ffc00000, data 0x2df676e/0x2f81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.472519+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.472665+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.472881+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841770 data_alloc: 234881024 data_used: 25468928
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.473035+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556934125860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489f680
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x55693442ed20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.473219+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.473457+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.473617+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.473737+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591774 data_alloc: 218103808 data_used: 11173888
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.757778168s of 12.639021873s, submitted: 116
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932876f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556934898f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.473870+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 59170816 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489fa40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.474025+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.474186+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.474369+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.474553+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.474745+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.474949+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.475171+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.475399+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.475618+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.475778+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.476010+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.476135+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.476316+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.476524+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.476690+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.476845+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.476982+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.477202+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.477471+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.477728+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.478190+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.478446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.478607+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.478818+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.479044+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.479272+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.479479+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.479693+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.480046+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.480241+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.480408+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.480701+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.481043+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.481376+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.481596+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.288772583s of 35.765110016s, submitted: 48
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569354fc1e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.481716+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934125a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556935cb34a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569348990e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.482003+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.482190+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.482342+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521521 data_alloc: 218103808 data_used: 5021696
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.482460+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.483048+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.483227+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.484099+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.484228+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.484366+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.484513+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.484651+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.484826+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.484999+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.485161+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.485311+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.485744+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.485886+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.486107+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.138957977s of 18.666332245s, submitted: 15
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557359 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.486253+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.486545+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16be76e/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.487221+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.487443+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.487608+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.488029+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.488249+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.488764+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.489070+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.489284+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.330938339s of 10.883395195s, submitted: 16
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.489468+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.489706+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.489886+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.490105+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.490310+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.490668+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.491025+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.491198+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.491401+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.491593+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.491787+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.491915+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.492080+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.153526306s of 12.898469925s, submitted: 1
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.492210+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361062400 unmapped: 57843712 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.492342+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16c576e/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559113 data_alloc: 218103808 data_used: 8982528
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.492507+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.492686+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x5569343fc800 session 0x556932efeb40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c56000 session 0x5569361acd20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569335852c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 57819136 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.492826+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569344ce3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 65880064 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.493030+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 286 ms_handle_reset con 0x556933526400 session 0x5569362ecf00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 65855488 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.493249+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569341c3a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.493594+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x55693489ef00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c56000 session 0x556932edf680
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x55693345ef00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.493754+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.494023+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.494186+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.494330+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.494419+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.494603+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569333574a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556934894d20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x556932edf0e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x55693f800800 session 0x55693489e5a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815621376s of 13.853458405s, submitted: 48
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x556932b5f860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x556933583860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556935564d20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x5569344d0960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71400 session 0x5569341c3c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.494756+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.494914+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710d000/0x0/0x4ffc00000, data 0x2dbdb28/0x2f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.495413+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755695 data_alloc: 218103808 data_used: 9789440
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.495626+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf58b/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.496199+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.496375+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.496515+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.496665+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556933526400 session 0x5569354f5a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 69656576 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758141 data_alloc: 218103808 data_used: 9789440
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.496811+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 69640192 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.497041+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 67837952 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.497168+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.497306+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.497461+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.497703+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.497839+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.498012+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.498172+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.498300+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.241428375s of 17.537368774s, submitted: 23
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.498443+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.498592+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.498789+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.498919+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.499130+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866312 data_alloc: 234881024 data_used: 26378240
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.499378+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.499584+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.499762+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.500002+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.500188+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.500376+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.500621+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.500859+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.501026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.501239+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.501404+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.501596+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.501795+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.501956+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.502119+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.502385+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.502593+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.502761+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.503000+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.503245+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.504070+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.504203+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.504326+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.504493+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.504639+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.504784+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872264 data_alloc: 234881024 data_used: 27168768
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556935c4f000 session 0x5569353d25a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.504988+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.446823120s of 32.526866913s, submitted: 8
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x5569348945a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367566848 unmapped: 62881792 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556937127400 session 0x556932876780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x55693f800000 session 0x55693416c3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.505158+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556933526400 session 0x5569353d3a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x556933c4fa40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374775808 unmapped: 59875328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.505288+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556935c4f000 session 0x556932efd2c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374390784 unmapped: 60260352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.505446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e45c9000/0x0/0x4ffc00000, data 0x58fb8f7/0x5a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556937127400 session 0x556935cb34a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374398976 unmapped: 60252160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526800 session 0x556934125860
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.505573+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223401 data_alloc: 251658240 data_used: 33873920
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526400 session 0x55693489f0e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.505697+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556934891400 session 0x5569354f4780
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e70fe000/0x0/0x4ffc00000, data 0x2dc6482/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.505909+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556933b71c00 session 0x556934899c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x5569343fc800 session 0x556933583c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374448128 unmapped: 60203008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.506053+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556935c4f000 session 0x55693501bc20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.506210+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.506386+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3922026 data_alloc: 251658240 data_used: 33869824
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.506537+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127110481s of 10.177739143s, submitted: 163
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.506686+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e70ff000/0x0/0x4ffc00000, data 0x2dc639b/0x2f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 293 ms_handle_reset con 0x556933526400 session 0x5569362ec5a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 65748992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.506854+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b6f400 session 0x5569354f4960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 65732608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.506990+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b71c00 session 0x5569333e0d20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.507178+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.507325+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.507476+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.507614+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.507771+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.507906+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.508001+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.508159+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429761887s of 10.748415947s, submitted: 82
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.508298+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.508463+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.508595+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.508768+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.508924+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.509211+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.509387+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.509547+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.509700+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.509872+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.510024+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.510239+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.510418+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.510550+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.510684+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.510860+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.511003+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.511215+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.511358+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.511523+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.511652+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.511864+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.512035+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.512166+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.512317+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.512446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s
                                           Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.512595+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.512891+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _finish_auth 0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.514759+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.513023+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.513148+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.513352+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933bdbc00 session 0x5569361ad2c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932eba800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x556935c56c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:34 compute-0 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: get_auth_request con 0x556933526800 auth_method 0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.513522+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935c59000 session 0x5569341e52c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693560b400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x55693360f000 session 0x55693442e3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c59000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.513655+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.513891+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.514141+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.514388+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.514985+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.515224+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.841426849s of 37.856082916s, submitted: 15
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.515401+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e12c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556937127400 session 0x5569362ed2c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514bc/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.515538+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514f5/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b6f400 session 0x556933585e00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.515653+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b71c00 session 0x556933460f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e1c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.515796+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362356736 unmapped: 72294400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935605800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.515917+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629019 data_alloc: 218103808 data_used: 9269248
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.516059+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935605800 session 0x5569334550e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x5569335b2000 session 0x556932ea85a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.516198+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e884e000/0x0/0x4ffc00000, data 0x1675504/0x1810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569335830e0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.516344+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.516492+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.516625+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.516807+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.517019+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.517175+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.517364+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.517527+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.517660+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.517814+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.518325+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.518769+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.519037+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.519381+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.519614+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.519983+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.520263+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.520532+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.520737+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.521024+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.521294+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.521472+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.521664+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.522007+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.522264+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.522511+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.522756+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.523078+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.523320+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.523559+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.523697+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.523909+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.524166+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.524365+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.524588+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.524800+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.525026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.525191+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.525321+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.525448+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.525619+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.525767+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.525915+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.526070+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.526223+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.526382+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.526627+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.526764+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.527026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.527196+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.527338+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.527533+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362414080 unmapped: 72237056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.524005890s of 59.169479370s, submitted: 66
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.527696+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 72228864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567816 data_alloc: 218103808 data_used: 5070848
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.527826+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 72220672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.527954+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.528107+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.528270+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362463232 unmapped: 72187904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 296 ms_handle_reset con 0x556933b6f400 session 0x5569354fcb40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.528430+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.528588+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.528754+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.529008+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.529158+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.529304+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.529440+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.529586+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.529772+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 72171520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.529993+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.192964554s of 14.603412628s, submitted: 105
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.530164+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.530282+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.530466+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.530592+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.530719+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.530987+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.531165+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.531305+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.531472+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.531736+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.531962+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.532173+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.532368+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.532511+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.532690+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.532867+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.533125+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.533321+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.533611+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.534248+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.534507+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.534760+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.535027+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.535205+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.536074+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.536284+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.536446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.536591+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.536767+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.536993+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.537258+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.537467+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.537657+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.537805+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.538076+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.538237+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.538362+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.538468+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.538679+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.538911+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.539080+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.539221+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.539386+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.539587+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.539744+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.539946+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.540064+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.540254+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.540450+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.540655+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.540857+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.541028+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.541228+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362561536 unmapped: 72089600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.541385+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362569728 unmapped: 72081408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.541641+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.541843+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.542043+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.542221+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.542419+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.542631+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.542964+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.543159+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.543426+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.543590+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.543784+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.544039+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.544234+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.544455+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.544622+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.545121+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.545281+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.545439+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.545996+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.546222+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.546785+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.547156+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.547555+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.547735+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.547914+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.548343+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.548602+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.548795+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.549136+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.549342+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.549557+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.549765+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.549981+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.550148+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.550276+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.550592+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.550755+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.551030+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.551330+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.551598+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 72015872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.551803+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.552022+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.552196+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.552381+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.552577+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.552795+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.552991+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.553129+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.553334+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 71991296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.553575+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 104.046302795s of 104.136131287s, submitted: 16
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556933b71c00 session 0x556933c4f4a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556934891400 session 0x55693416cb40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.553784+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.554036+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.554243+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.554414+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.554542+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.554792+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.554978+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 67788800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.555226+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 67772416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.555404+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,1])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 298 ms_handle_reset con 0x556933526400 session 0x556932b5e000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.555556+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.555713+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.555861+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.556733131s of 11.714488029s, submitted: 44
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508745 data_alloc: 218103808 data_used: 4096000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.556007+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 299 ms_handle_reset con 0x5569335b2000 session 0x556932efc960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e905b000/0x0/0x4ffc00000, data 0xa58102/0xbf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.556132+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.556303+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.556485+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.556642+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.556875+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.557011+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.557255+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.557391+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.557608+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.557790+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.557990+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.558138+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.843618393s of 12.988986969s, submitted: 53
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 ms_handle_reset con 0x556933b6f400 session 0x556935cb34a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.558387+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.558524+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.558760+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.559025+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.559201+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.559450+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.559674+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.561167+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.561427+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.561708+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.562361+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.562704+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.563107+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.563899+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.564272+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.564553+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.564771+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.564994+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.565269+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.565512+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.565737+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.565882+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.565997+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.566226+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.566438+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.566588+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.566778+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.567004+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.567198+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.567342+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.567520+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.567672+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.567878+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.568086+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.568239+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.568388+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.568651+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.568806+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.568991+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.569127+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.569356+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.569590+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.569733+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.570176+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.570611+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.570812+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.571084+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.571381+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.571521+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.571661+0000)
Oct 07 15:17:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344058974' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.571872+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.572190+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.572339+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.572483+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.572713+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.572852+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.573145+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.573316+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.573501+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.573666+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.573878+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.574021+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.574288+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.574504+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.574670+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.574859+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.575026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.575220+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.575452+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.575629+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.575838+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.576190+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.576541+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.576903+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.577166+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.577648+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.577981+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.578186+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.578555+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.578871+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.579189+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.579464+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.579638+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.579881+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.580165+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.580431+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.580741+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.580917+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.581111+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.581271+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.581492+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.581665+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.581852+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.581994+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.582154+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.582405+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.582676+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.583026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.583213+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.583458+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.583713+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.583919+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.584133+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.584396+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.584587+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.469673157s of 105.490242004s, submitted: 20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.584746+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 303 ms_handle_reset con 0x556933b71c00 session 0x556935cb32c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935605800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.584915+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.585106+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 ms_handle_reset con 0x556935605800 session 0x5569344ce5a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.585269+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.585538+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.585774+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.586057+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.586243+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.586398+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.586542+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.586691+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.587067+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.587225+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.587511+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.587725+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.588018+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.588196+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.588393+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.588601+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.588735+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.588869+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.589113+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.589264+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.589466+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.589611+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.589786+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.590010+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.590201+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.590384+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.590549+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.815532684s of 29.996992111s, submitted: 26
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.590746+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 ms_handle_reset con 0x556933526400 session 0x5569355654a0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.590982+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.591251+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.591513+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.591685+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.592011+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.592194+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.592357+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.592597+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.592809+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.593025+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.439725876s of 10.520721436s, submitted: 15
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.593230+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.593382+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.593542+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.593742+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.594020+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.594170+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.594328+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.594504+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.594655+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.594847+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.595018+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.595245+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.595401+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.595567+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.595763+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.595956+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.596141+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.596267+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.596425+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 76906496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.596628+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.596811+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.597001+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.597171+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.597303+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.597487+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.597643+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.597833+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.598007+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.598273+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.598475+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 76881920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.598659+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.598853+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.599017+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.599212+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.599455+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.599699+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.599897+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.600075+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.600265+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.600448+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.600658+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.600911+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.601207+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.601385+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.601602+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.601762+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.601970+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.602115+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.602274+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.602441+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.602670+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 76824576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.602829+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.603059+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.603335+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.603602+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.603804+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.604070+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.604446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.605075+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.605425+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.605694+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.605999+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.606166+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.606365+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.606615+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.606808+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.606988+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.607231+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 76791808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.607469+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.607656+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.607839+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.608092+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.608348+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.608623+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.609021+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 76767232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.609256+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.609518+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.610149+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.610375+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.610603+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.610761+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.610970+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.611116+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.611331+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.611591+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.611768+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.612029+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.612115+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.612249+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.612415+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.612616+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.612802+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.612973+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.613071+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.613217+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.613399+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.613560+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.613718+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.613870+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.614049+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.614219+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.614344+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.614612+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.614912+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.615266+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.615439+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.615618+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.615811+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.616006+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.616185+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.616348+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.616510+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.616660+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.616804+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.617017+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.617220+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.617391+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.617777+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.617991+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.618193+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.618387+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.618589+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.618746+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.619020+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.619220+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.619395+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.619595+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.619787+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.619993+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.620142+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.620344+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.620542+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.620716+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.620845+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.621070+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.621281+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.621420+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.621627+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.621815+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.621972+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.622232+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.622408+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.622572+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.622776+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.623082+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.623260+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.623367+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.623541+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 76570624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.623809+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.624000+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.624164+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.627095+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.632187+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.634315+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.634560+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.635513+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.637320+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2378386877' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/467493459' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4236161573' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: pgmap v3488: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3621770134' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3600964625' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2784232819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.638290+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.640091+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.640294+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.640580+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.640886+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.642072+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.643005+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.643309+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.643832+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.644122+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.644414+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.644623+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.644786+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.645002+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.645222+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.645414+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.645606+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.645834+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.646052+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.646277+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.646513+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.646656+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.646790+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.647029+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.647198+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.647380+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.647595+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.648786+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.648953+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.649100+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.649233+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.649427+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.649567+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.649723+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.649846+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.650002+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.650167+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.650350+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 76464128 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.650502+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.650680+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.650914+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.651179+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.651357+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.651549+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.651700+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.651876+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.652029+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.652226+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.652399+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.652574+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.652755+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.652969+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.653155+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.653300+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.653482+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.653693+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.653915+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.654174+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.654521+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.654854+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.655144+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.655452+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.655640+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.656011+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.656148+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.656299+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.656488+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.656709+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.657010+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.657257+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 76374016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.657584+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.657802+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.657977+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.658218+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.658472+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.658662+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.658832+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.658996+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.659174+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.659351+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.659594+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.659819+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 645 writes, 1636 keys, 645 commit groups, 1.0 writes per commit group, ingest: 0.83 MB, 0.00 MB/s
                                           Interval WAL: 645 writes, 286 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.660027+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.663699+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.664043+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.664318+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.664557+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.664739+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.665014+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.665290+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.665484+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.665654+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.665821+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.666004+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.666151+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.666310+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.666516+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.666744+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.667009+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.667241+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.667406+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.667573+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.667777+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.667904+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.668073+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.668241+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.668406+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.668608+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.668781+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.668983+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 76267520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.669176+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.669353+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.669574+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.669805+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.670035+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.670208+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.670388+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.670590+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.670825+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.671026+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.671249+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.671435+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.671633+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.671819+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.672032+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.672249+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 76226560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.672440+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.672669+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.672829+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.673035+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.673373+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.673550+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.673729+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.673905+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.674234+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.674465+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.674685+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.674903+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.675105+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.675339+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.675501+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.675723+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.675873+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.676087+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.676356+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.676570+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.676747+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.676989+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.677225+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.677426+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.677659+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.677882+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.678052+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.758331299s of 310.772949219s, submitted: 14
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.678193+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 76152832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.678342+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 75071488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.678468+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.679155+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.679626+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.679896+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.681111+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.682040+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.682186+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.682324+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.682460+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.682610+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.682910+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.683075+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.683240+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.683381+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.683525+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.683667+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.684031+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.684229+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.684525+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.684819+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.685105+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.685313+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.685531+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.685817+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.686008+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.686194+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.686374+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.686585+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.686753+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.686916+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.687147+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.687388+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.687613+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.687816+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.688038+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.688248+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.688419+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.688575+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.688726+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.688990+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.689175+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.689339+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.689506+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.689718+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.689985+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.690133+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.690336+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.690504+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.690639+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.690792+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.691082+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.691283+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.691477+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.691645+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.691791+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.691950+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.692100+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.692302+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.692478+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.692670+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.692852+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.693014+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.693201+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.693382+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.693526+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.693686+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.693882+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.694038+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.694222+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.694408+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.694587+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.694739+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.694967+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.695145+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.695291+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.695424+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.695556+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.695683+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.695829+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.695984+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.696129+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.696275+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.696468+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.696827+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.697006+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.697165+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.697266+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.697446+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.697605+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.697773+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.698039+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.698230+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.698441+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.698601+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.698801+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.699005+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.285827637s of 97.792411804s, submitted: 90
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.699331+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558328 data_alloc: 218103808 data_used: 1105920
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 307 ms_handle_reset con 0x5569335b2000 session 0x556933454f00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.699482+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.699653+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.699797+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 308 ms_handle_reset con 0x556933b6f400 session 0x55693489e000
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.700017+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.700155+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476702 data_alloc: 218103808 data_used: 1114112
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.700367+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 73744384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.700559+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.700821+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.700998+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e983c000/0x0/0x4ffc00000, data 0x2690e3/0x412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.069248199s of 10.059258461s, submitted: 86
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 310 ms_handle_reset con 0x556933b71c00 session 0x55693489e960
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.701142+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 73719808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.701297+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.701461+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.701631+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.701804+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.701961+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360947712 unmapped: 73703424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.702120+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360955904 unmapped: 73695232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.702285+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.702482+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.702679+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.702883+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.703088+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.703318+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.703571+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.703770+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.704031+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.704253+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.704392+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.704570+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.704776+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.704975+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.705235+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.705434+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.705651+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.706261+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.706520+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.707434+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.708057+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.708712+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.709424+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.709795+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.710405+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.710567+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.710756+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.710895+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.711279+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.711582+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.711741+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.711872+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.712211+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.712428+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.712674+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.712800+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.712986+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.713188+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.713391+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.713567+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.713760+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.714056+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.714330+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.714540+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693288bc00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491163 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.736141205s of 51.071052551s, submitted: 14
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.714783+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.715031+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26e2b0/0x41b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.715211+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.715392+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 ms_handle_reset con 0x55693288bc00 session 0x556933455680
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.715775+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.715984+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.716201+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.716429+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.716599+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.716778+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.716903+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.388833046s of 11.278204918s, submitted: 27
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.717083+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.717259+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.717398+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.717526+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.717748+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.717951+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.718108+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.718270+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.718443+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.718626+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.718766+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.718984+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.719161+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.719347+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.719539+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.719728+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.719895+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.720046+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.720196+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.720372+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.720487+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.720710+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.720995+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.721171+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.721354+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361111552 unmapped: 73539584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.721513+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.721624+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.721747+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.722017+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.722141+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.722285+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.722394+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.722527+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.722709+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.722893+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.722988+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.723180+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.723387+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.723541+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.723687+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.723845+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.724054+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361144320 unmapped: 73506816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.724222+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361152512 unmapped: 73498624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.724358+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.724556+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.724732+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.724897+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.725000+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.725286+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.725498+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.725624+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.725760+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.725972+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.726247+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.727268+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.727466+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.727626+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.727798+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.727974+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.728104+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.728295+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.728426+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.728595+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556932eba800 session 0x55693501a3c0
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.728730+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361209856 unmapped: 73441280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.728875+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x55693560b400 session 0x556935cb3c20
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdbc00
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556935c59000 session 0x556933583a40
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693560b400
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.728998+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.729116+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.729255+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.729398+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.729530+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:59.729669+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:00.729789+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}'
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361250816 unmapped: 73400320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:01.729920+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 74186752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:02.730087+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:34 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:34 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:34 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:17:34 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:03.730239+0000)
Oct 07 15:17:34 compute-0 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:17:34 compute-0 rsyslogd[1004]: imjournal from <np0005473739:ceph-osd>: begin to drop messages due to rate-limiting
Oct 07 15:17:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 07 15:17:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945094872' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 07 15:17:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306230147' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 07 15:17:34 compute-0 nova_compute[259550]: 2025-10-07 15:17:34.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 07 15:17:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581072664' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 07 15:17:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 07 15:17:34 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327574828' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 07 15:17:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517765410' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2344058974' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1945094872' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2306230147' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3581072664' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1327574828' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:35 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23197 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:35 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23199 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23203 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mon[74295]: from='client.23193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3517765410' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mon[74295]: pgmap v3489: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:36 compute-0 ceph-mon[74295]: from='client.23197 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mon[74295]: from='client.23199 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23205 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:36 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23209 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 07 15:17:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060124454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23213 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:37 compute-0 ceph-mon[74295]: from='client.23201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: from='client.23203 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: from='client.23205 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: from='client.23209 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4060124454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:37 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 07 15:17:37 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971043547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 07 15:17:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3380019794' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 07 15:17:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675653749' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:16.911029+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafd972800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319422464 unmapped: 49299456 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:17.911174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318922752 unmapped: 49799168 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:18.911401+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e915e000/0x0/0x4ffc00000, data 0x3e3c2d3/0x3fd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319119360 unmapped: 49602560 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:19.911696+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.620614052s of 13.051449776s, submitted: 115
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319119360 unmapped: 49602560 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:20.911829+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762364 data_alloc: 234881024 data_used: 24309760
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319119360 unmapped: 49602560 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:21.911995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e915d000/0x0/0x4ffc00000, data 0x3e3d2d3/0x3fd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 49594368 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:22.912217+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 49594368 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:23.912376+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 49594368 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:24.912567+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 49594368 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:25.912842+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788122 data_alloc: 234881024 data_used: 24367104
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323158016 unmapped: 45563904 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:26.912985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8968000/0x0/0x4ffc00000, data 0x46242d3/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 46096384 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:27.913104+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 43024384 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:28.913263+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 43024384 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:29.913398+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e773b000/0x0/0x4ffc00000, data 0x46b62d3/0x484a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:30.913566+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 43024384 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3849112 data_alloc: 234881024 data_used: 25341952
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:31.913771+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 43024384 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:32.913901+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325722112 unmapped: 42999808 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.806874275s of 12.448079109s, submitted: 120
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e734a000/0x0/0x4ffc00000, data 0x4ab02d3/0x4c44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:33.914051+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 39198720 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:34.914197+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 39190528 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:35.914357+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331005952 unmapped: 37715968 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e702e000/0x0/0x4ffc00000, data 0x4dc32d3/0x4f57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3914308 data_alloc: 234881024 data_used: 26738688
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:36.914501+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331022336 unmapped: 37699584 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:37.914666+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:38.914847+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:39.914996+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7032000/0x0/0x4ffc00000, data 0x4dc72d3/0x4f5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:40.915124+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3905940 data_alloc: 234881024 data_used: 26726400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:41.915402+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:42.915629+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:43.915777+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331030528 unmapped: 37691392 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7030000/0x0/0x4ffc00000, data 0x4dca2d3/0x4f5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:44.915981+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331038720 unmapped: 37683200 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:45.916256+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331038720 unmapped: 37683200 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3905940 data_alloc: 234881024 data_used: 26726400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:46.916396+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 37675008 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7030000/0x0/0x4ffc00000, data 0x4dca2d3/0x4f5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:47.916529+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 37675008 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:48.916703+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 37675008 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.402234077s of 16.121103287s, submitted: 125
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:49.916852+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 37675008 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:50.917032+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 37675008 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7030000/0x0/0x4ffc00000, data 0x4dca2d3/0x4f5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3907380 data_alloc: 234881024 data_used: 26853376
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c20000 session 0x55daf7c825a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafa80cb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:51.917165+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331055104 unmapped: 37666816 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d24400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d24400 session 0x55daf993c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:52.917338+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e818c000/0x0/0x4ffc00000, data 0x3c6f261/0x3e01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:53.917481+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8185000/0x0/0x4ffc00000, data 0x3c76261/0x3e08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:54.917635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:55.917846+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706104 data_alloc: 234881024 data_used: 18141184
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:56.918034+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:57.918179+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 43671552 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa831800 session 0x55daf7bef860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55daf9670b40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d24c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:58.918333+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 47136768 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.667469978s of 10.012344360s, submitted: 87
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d24c00 session 0x55dafa80c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:45:59.918470+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 47136768 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9548000/0x0/0x4ffc00000, data 0x28b522e/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:00.918636+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 47136768 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501452 data_alloc: 234881024 data_used: 14000128
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:01.918801+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 47136768 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafd972800 session 0x55dafa80d0e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf86a25a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:02.918942+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319799296 unmapped: 48922624 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98f92c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:03.919125+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f2c000/0x0/0x4ffc00000, data 0x1ed222e/0x2062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320856064 unmapped: 47865856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:04.919336+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320856064 unmapped: 47865856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:05.919611+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320856064 unmapped: 47865856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398646 data_alloc: 218103808 data_used: 10735616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:06.919836+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320856064 unmapped: 47865856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf7caef00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafd972400 session 0x55daf98c4960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:07.919974+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafb59cb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:08.920214+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:09.920389+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:10.920813+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192031 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:11.921001+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:12.921645+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:13.922244+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:14.922589+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:15.923196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192031 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:16.923554+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:17.923775+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:18.924144+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:19.924364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:20.924670+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192031 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:21.924819+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:22.925062+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:23.925512+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:24.925774+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:25.926040+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192031 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:26.926240+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:27.926442+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb352000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:28.926718+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:29.927054+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:30.927330+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192031 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:31.927586+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317890560 unmapped: 50831360 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:32.927813+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 317898752 unmapped: 50823168 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:33.928016+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.472484589s of 34.984943390s, submitted: 105
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 49913856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa80da40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf7caf0e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafd972800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafd972800 session 0x55daf96701e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d24400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead13000/0x0/0x4ffc00000, data 0x10ee1b2/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d24400 session 0x55daf8adef00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8b17e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:34.928211+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:35.928383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253075 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:36.928530+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead13000/0x0/0x4ffc00000, data 0x10ee1eb/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:37.928713+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead13000/0x0/0x4ffc00000, data 0x10ee1eb/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:38.928882+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead13000/0x0/0x4ffc00000, data 0x10ee1eb/0x127b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:39.929062+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318234624 unmapped: 50487296 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:40.929203+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318242816 unmapped: 50479104 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253075 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:41.929327+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318242816 unmapped: 50479104 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf96dde00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafd972800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:42.929494+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead12000/0x0/0x4ffc00000, data 0x10ee20e/0x127c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318398464 unmapped: 50323456 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:43.929658+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318398464 unmapped: 50323456 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:44.929795+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa831800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa831800 session 0x55daf8b17860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55dafad4d680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d25800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d25800 session 0x55daf993d860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8adf4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318406656 unmapped: 50315264 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.029737473s of 11.252568245s, submitted: 54
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf98ef680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:45.930014+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335030 data_alloc: 218103808 data_used: 9510912
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:46.930174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:47.930324+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea8a6000/0x0/0x4ffc00000, data 0x155a20e/0x16e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:48.930520+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:49.930671+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc ms_handle_reset ms_handle_reset con 0x55dafa841000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: get_auth_request con 0x55daf9d24400 auth_method 0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:50.930788+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335030 data_alloc: 218103808 data_used: 9510912
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:51.930898+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:52.931101+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea8a6000/0x0/0x4ffc00000, data 0x155a20e/0x16e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea8a6000/0x0/0x4ffc00000, data 0x155a20e/0x16e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9cccc00 session 0x55daf98f8f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa831800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:53.931228+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55dafa7bcd20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 318611456 unmapped: 50110464 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa832c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa832c00 session 0x55daf7beeb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:54.931355+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 43917312 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.825714111s of 10.193340302s, submitted: 114
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:55.931493+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d22c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d22c00 session 0x55daf8a49c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324411392 unmapped: 44310528 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafb3c4b40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:56.931637+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476450 data_alloc: 218103808 data_used: 10915840
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324411392 unmapped: 44310528 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa832c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:57.931800+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9880000/0x0/0x4ffc00000, data 0x257f231/0x270e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:58.931987+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:46:59.932118+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:00.932277+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:01.932407+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504838 data_alloc: 234881024 data_used: 15216640
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e985f000/0x0/0x4ffc00000, data 0x25a0231/0x272f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:02.932545+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:03.932653+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:04.932791+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:05.932963+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:06.933146+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504838 data_alloc: 234881024 data_used: 15216640
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:07.933274+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e985f000/0x0/0x4ffc00000, data 0x25a0231/0x272f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324419584 unmapped: 44302336 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.835115433s of 12.987444878s, submitted: 47
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:08.933437+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327000064 unmapped: 41721856 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:09.933622+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 43327488 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:10.933752+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55dafa7bc5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafd972800 session 0x55dafad4d4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325328896 unmapped: 43393024 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:11.933859+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333836 data_alloc: 218103808 data_used: 8323072
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55dafa76ba40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:12.934006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:13.934193+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea710000/0x0/0x4ffc00000, data 0x13b51ac/0x1542000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:14.934356+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:15.934491+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:16.934663+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325548 data_alloc: 218103808 data_used: 8208384
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:17.934839+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa2b000/0x0/0x4ffc00000, data 0x13d61ac/0x1563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:18.935037+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:19.936050+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:20.936193+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:21.936426+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325548 data_alloc: 218103808 data_used: 8208384
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa2b000/0x0/0x4ffc00000, data 0x13d61ac/0x1563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:22.936582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 48709632 heap: 368721920 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.604933739s of 14.948109627s, submitted: 104
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83e000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:23.936736+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83e000 session 0x55daf7c82780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa25000/0x0/0x4ffc00000, data 0x13dc1ac/0x1569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:24.936878+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:25.937017+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:26.937190+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407946 data_alloc: 218103808 data_used: 8208384
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:27.937329+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f3f000/0x0/0x4ffc00000, data 0x1ec21ac/0x204f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:28.937534+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 59195392 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f3f000/0x0/0x4ffc00000, data 0x1ec21ac/0x204f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:29.937725+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f3f000/0x0/0x4ffc00000, data 0x1ec21ac/0x204f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 59187200 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf96dd4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:30.937876+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320061440 unmapped: 59162624 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:31.938028+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412147 data_alloc: 218103808 data_used: 8212480
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320167936 unmapped: 59056128 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f1a000/0x0/0x4ffc00000, data 0x1ee61cf/0x2074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:32.938221+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:33.938349+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:34.938570+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:35.938721+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f1a000/0x0/0x4ffc00000, data 0x1ee61cf/0x2074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:36.938889+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493427 data_alloc: 234881024 data_used: 19542016
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:37.939051+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:38.939310+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:39.939583+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f1a000/0x0/0x4ffc00000, data 0x1ee61cf/0x2074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:40.939748+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.315305710s of 17.665071487s, submitted: 17
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:41.939909+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493731 data_alloc: 234881024 data_used: 19546112
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 56885248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:42.940060+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327254016 unmapped: 51970048 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:43.940205+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9610000/0x0/0x4ffc00000, data 0x27f01cf/0x297e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327000064 unmapped: 52224000 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:44.940401+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:45.940536+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:46.940687+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609073 data_alloc: 234881024 data_used: 20774912
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:47.940801+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:48.941009+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e937c000/0x0/0x4ffc00000, data 0x2a841cf/0x2c12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:49.941128+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa776000 session 0x55daf96712c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafd972800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:50.941257+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327073792 unmapped: 52150272 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:51.941369+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606545 data_alloc: 234881024 data_used: 20844544
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 52142080 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:52.941504+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55daf7c7fe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf7cae3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 52142080 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.679251671s of 12.448971748s, submitted: 98
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf967f860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:53.941664+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0x14001cf/0x158e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323461120 unmapped: 55762944 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:54.941821+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323461120 unmapped: 55762944 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:55.942000+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323461120 unmapped: 55762944 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:56.942088+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340157 data_alloc: 218103808 data_used: 8110080
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323461120 unmapped: 55762944 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:57.942233+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa832c00 session 0x55daf8958d20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa7bc960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323461120 unmapped: 55762944 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:58.942440+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7e04b40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa1e000/0x0/0x4ffc00000, data 0x13e31ac/0x1570000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:47:59.942591+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:00.942759+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:01.943006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:02.943193+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:03.943318+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:04.943483+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:05.943656+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:06.943789+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319381504 unmapped: 59842560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:07.943996+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:08.944222+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:09.944360+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:10.944550+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:11.944787+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:12.945023+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:13.945190+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:14.945370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319389696 unmapped: 59834368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:15.945541+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:16.945727+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:17.945904+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:18.946132+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:19.946368+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:20.946574+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:21.946712+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:22.946846+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 59826176 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:23.947020+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 59817984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:24.947175+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 59817984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:25.947332+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:26.947468+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:27.947620+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:28.947812+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:29.947956+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb32f000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:30.948110+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:31.948337+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.948318481s of 38.540275574s, submitted: 58
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230781 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 59809792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:32.948496+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329596928 unmapped: 49627136 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:33.948678+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf9697860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf98f92c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55daf7cae960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321003520 unmapped: 58220544 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:34.948813+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf779fc20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf7c832c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa63000/0x0/0x4ffc00000, data 0x139e1eb/0x152b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 58277888 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf967f680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:35.948995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf8a01e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 58277888 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83b400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa83b400 session 0x55daf86a3680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:36.949155+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3310351 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 58277888 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:37.949369+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8b17860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 58277888 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:38.949580+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 58269696 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:39.949729+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:40.949850+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:41.949992+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378503 data_alloc: 218103808 data_used: 12410880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:42.950126+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:43.950326+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:44.950467+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:45.950617+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:46.950804+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378503 data_alloc: 218103808 data_used: 12410880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:47.950976+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x13c21eb/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:48.951156+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 57630720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:49.951288+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.275275230s of 18.390993118s, submitted: 55
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 55582720 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:50.951454+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea52b000/0x0/0x4ffc00000, data 0x18d61eb/0x1a63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325238784 unmapped: 53985280 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:51.951638+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473347 data_alloc: 234881024 data_used: 13914112
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326574080 unmapped: 52649984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:52.951834+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326574080 unmapped: 52649984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:53.952015+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea0d3000/0x0/0x4ffc00000, data 0x1d201eb/0x1ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326574080 unmapped: 52649984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:54.952179+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326606848 unmapped: 52617216 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:55.952371+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326606848 unmapped: 52617216 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:56.952523+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473507 data_alloc: 234881024 data_used: 13918208
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:57.952669+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea0c0000/0x0/0x4ffc00000, data 0x1d411eb/0x1ece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:58.952836+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea0c0000/0x0/0x4ffc00000, data 0x1d411eb/0x1ece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:48:59.953097+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:00.953233+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:01.953374+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea0c0000/0x0/0x4ffc00000, data 0x1d411eb/0x1ece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462555 data_alloc: 234881024 data_used: 13922304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:02.953505+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.338816643s of 12.979660034s, submitted: 129
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 52494336 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:03.953652+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55dafa80cf00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa714000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa714000 session 0x55daf7cba000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafa80d2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf77b2400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf77b2400 session 0x55dafad4c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327376896 unmapped: 51847168 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7cbbc20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:04.953780+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326852608 unmapped: 52371456 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:05.954137+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326852608 unmapped: 52371456 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:06.954303+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478195 data_alloc: 234881024 data_used: 13922304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326852608 unmapped: 52371456 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:07.954498+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:08.954696+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:09.954894+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:10.955143+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:11.955310+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478195 data_alloc: 234881024 data_used: 13922304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:12.955475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:13.955618+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.277704239s of 11.397736549s, submitted: 9
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:14.955770+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:15.956023+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55dafa76a1e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa714000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:16.956196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478944 data_alloc: 234881024 data_used: 13922304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:17.956427+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:18.956613+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326860800 unmapped: 52363264 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:19.957008+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:20.957148+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:21.957291+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486464 data_alloc: 234881024 data_used: 15003648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:22.957435+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:23.957611+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f92000/0x0/0x4ffc00000, data 0x1e6f1eb/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:24.957729+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:25.957842+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326868992 unmapped: 52355072 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:26.957978+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487344 data_alloc: 234881024 data_used: 15003648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 52346880 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:27.958117+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.942745209s of 13.596246719s, submitted: 12
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329236480 unmapped: 49987584 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:28.958282+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9936000/0x0/0x4ffc00000, data 0x24c31eb/0x2650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329236480 unmapped: 49987584 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:29.958405+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:30.958557+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:31.958700+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541990 data_alloc: 234881024 data_used: 15081472
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:32.958887+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9932000/0x0/0x4ffc00000, data 0x24cf1eb/0x265c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:33.959029+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:34.959189+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328564736 unmapped: 50659328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:35.959364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328572928 unmapped: 50651136 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:36.959515+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542006 data_alloc: 234881024 data_used: 15081472
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9932000/0x0/0x4ffc00000, data 0x24cf1eb/0x265c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328572928 unmapped: 50651136 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:37.959687+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 49602560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:38.959895+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.576335907s of 10.851296425s, submitted: 60
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 49602560 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:39.960039+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafad4c3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa714000 session 0x55dafa80cf00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329629696 unmapped: 49594368 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42c400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:40.960164+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42c400 session 0x55daf7cae960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 49569792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea0b3000/0x0/0x4ffc00000, data 0x1d4e1eb/0x1edb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:41.960334+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470744 data_alloc: 234881024 data_used: 13914112
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 49569792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:42.960491+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 49569792 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:43.960611+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf86a32c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf843fe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafa7bc5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:44.960806+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:45.960958+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:46.961148+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:47.961361+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:48.961588+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:49.961771+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:50.962012+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323313664 unmapped: 55910400 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:51.962197+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 55902208 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:52.962370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 55902208 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:53.962509+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:54.962733+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:55.962897+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:56.963085+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:57.963216+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:58.963416+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:49:59.963594+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:00.963788+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323330048 unmapped: 55894016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:01.964225+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:02.964461+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:03.964603+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:04.964749+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:05.964883+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:06.965007+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:07.965174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323338240 unmapped: 55885824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:08.965343+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323346432 unmapped: 55877632 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:09.965482+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323346432 unmapped: 55877632 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:10.965667+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323354624 unmapped: 55869440 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:11.965812+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323354624 unmapped: 55869440 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:12.965965+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323354624 unmapped: 55869440 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:13.966157+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323354624 unmapped: 55869440 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:14.966402+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323362816 unmapped: 55861248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:15.966597+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323362816 unmapped: 55861248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:16.966726+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323362816 unmapped: 55861248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:17.966911+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323362816 unmapped: 55861248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:18.967159+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323362816 unmapped: 55861248 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:19.967352+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 55853056 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:20.967525+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 55853056 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb30d000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:21.967723+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 55853056 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252700 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:22.967862+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 55853056 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf98bab40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa714000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa714000 session 0x55dafa80cb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8a01680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:23.968011+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8b16960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 55853056 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.537647247s of 44.809226990s, submitted: 66
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf96dde00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf778e3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf7e05c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98c4000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8ade960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:24.968184+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca7000/0x0/0x4ffc00000, data 0x11591fb/0x12e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:25.968381+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:26.968572+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314564 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:27.968834+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:28.969152+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca7000/0x0/0x4ffc00000, data 0x11591fb/0x12e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:29.969399+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca7000/0x0/0x4ffc00000, data 0x11591fb/0x12e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:30.969553+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324526080 unmapped: 54697984 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55dafa76b680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:31.969673+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324468736 unmapped: 54755328 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca6000/0x0/0x4ffc00000, data 0x115921e/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314461 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:32.969853+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324354048 unmapped: 54870016 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca6000/0x0/0x4ffc00000, data 0x115921e/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:33.970044+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:34.970176+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:35.970356+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:36.972738+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362781 data_alloc: 218103808 data_used: 10117120
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:37.972915+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:38.973135+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca6000/0x0/0x4ffc00000, data 0x115921e/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:39.973294+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca6000/0x0/0x4ffc00000, data 0x115921e/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:40.973460+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaca6000/0x0/0x4ffc00000, data 0x115921e/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:41.973597+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362781 data_alloc: 218103808 data_used: 10117120
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:42.973758+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324362240 unmapped: 54861824 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.081775665s of 19.351840973s, submitted: 51
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:43.973891+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324886528 unmapped: 54337536 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:44.974016+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:45.974194+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea964000/0x0/0x4ffc00000, data 0x149321e/0x1622000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:46.974349+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea964000/0x0/0x4ffc00000, data 0x149321e/0x1622000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397341 data_alloc: 218103808 data_used: 10317824
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:47.974534+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea964000/0x0/0x4ffc00000, data 0x149321e/0x1622000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea964000/0x0/0x4ffc00000, data 0x149321e/0x1622000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:48.974714+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:49.974903+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324919296 unmapped: 54304768 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:50.975066+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea94a000/0x0/0x4ffc00000, data 0x14b521e/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:51.975200+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392517 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:52.975403+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea94a000/0x0/0x4ffc00000, data 0x14b521e/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:53.975613+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:54.975759+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:55.975907+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea94a000/0x0/0x4ffc00000, data 0x14b521e/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:56.976040+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392517 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:57.976207+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:58.976441+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:50:59.976662+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea94a000/0x0/0x4ffc00000, data 0x14b521e/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:00.976857+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:01.977003+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.649921417s of 18.941385269s, submitted: 72
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392737 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:02.977196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:03.977339+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 54370304 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea947000/0x0/0x4ffc00000, data 0x14b821e/0x1647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea947000/0x0/0x4ffc00000, data 0x14b821e/0x1647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:04.977498+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:05.977651+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:06.977789+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea947000/0x0/0x4ffc00000, data 0x14b821e/0x1647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392737 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:07.978047+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:08.978321+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:09.978522+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea947000/0x0/0x4ffc00000, data 0x14b821e/0x1647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:10.978733+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:11.978897+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 54362112 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392737 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:12.979061+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324870144 unmapped: 54353920 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:13.979281+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324870144 unmapped: 54353920 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:14.979417+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324870144 unmapped: 54353920 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea947000/0x0/0x4ffc00000, data 0x14b821e/0x1647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.306808472s of 13.312695503s, submitted: 2
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:15.979565+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 54345728 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:16.979711+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 54345728 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392737 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:17.979839+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 54345728 heap: 379224064 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf8adeb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:18.980044+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 59424768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:19.980232+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 59424768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:20.980418+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 59424768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:21.980564+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504179 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:22.980704+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:23.980854+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d03c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d03c00 session 0x55daf8958b40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:24.981073+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf9696000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:25.981249+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:26.981395+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafb3c43c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.454171181s of 11.715562820s, submitted: 18
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf993d2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504179 data_alloc: 218103808 data_used: 10321920
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:27.981534+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:28.981714+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 325107712 unmapped: 59416576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:29.981851+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327729152 unmapped: 56795136 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:30.982018+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:31.982168+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615427 data_alloc: 234881024 data_used: 23015424
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:32.982306+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:33.982474+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:34.982643+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:35.982830+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:36.982975+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 56786944 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615427 data_alloc: 234881024 data_used: 23015424
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:37.983111+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 56778752 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:38.983337+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 56778752 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a8e000/0x0/0x4ffc00000, data 0x237121e/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.310801506s of 12.447616577s, submitted: 15
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:39.983453+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 50053120 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:40.983601+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 50053120 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:41.983724+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712291 data_alloc: 234881024 data_used: 24866816
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:42.983870+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f96000/0x0/0x4ffc00000, data 0x2e6921e/0x2ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 41K writes, 167K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 14K syncs, 2.80 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5090 writes, 20K keys, 5090 commit groups, 1.0 writes per commit group, ingest: 24.09 MB, 0.04 MB/s
                                           Interval WAL: 5090 writes, 1904 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:43.984040+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:44.984188+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:45.984367+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:46.984548+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 49922048 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:47.984719+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3709011 data_alloc: 234881024 data_used: 24866816
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 49913856 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f93000/0x0/0x4ffc00000, data 0x2e6c21e/0x2ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:48.984994+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 49913856 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:49.985236+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 49913856 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:50.985398+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 49913856 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:51.985538+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 49913856 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f93000/0x0/0x4ffc00000, data 0x2e6c21e/0x2ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:52.986265+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3709011 data_alloc: 234881024 data_used: 24866816
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:53.986516+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.112421989s of 14.535729408s, submitted: 102
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:54.986709+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:55.986899+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:56.987127+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:57.987285+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3709971 data_alloc: 234881024 data_used: 24936448
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f93000/0x0/0x4ffc00000, data 0x2e6c21e/0x2ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:58.987497+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f93000/0x0/0x4ffc00000, data 0x2e6c21e/0x2ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf8b16f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c20400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f93000/0x0/0x4ffc00000, data 0x2e6c21e/0x2ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:51:59.987646+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 49905664 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:00.987836+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c20400 session 0x55daf96b52c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:01.988004+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea941000/0x0/0x4ffc00000, data 0x14be21e/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:02.988160+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403081 data_alloc: 218103808 data_used: 7700480
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:03.988289+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:04.988444+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:05.988619+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332242944 unmapped: 52281344 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea941000/0x0/0x4ffc00000, data 0x14be21e/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:06.988807+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332251136 unmapped: 52273152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:07.989029+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403081 data_alloc: 218103808 data_used: 7700480
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332251136 unmapped: 52273152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:08.989294+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea941000/0x0/0x4ffc00000, data 0x14be21e/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332251136 unmapped: 52273152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:09.989457+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332251136 unmapped: 52273152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:10.989618+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 52264960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:11.989813+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 52264960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:12.990006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403081 data_alloc: 218103808 data_used: 7700480
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 52264960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:13.990143+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 52264960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:14.990306+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea941000/0x0/0x4ffc00000, data 0x14be21e/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 52264960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:15.990465+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332267520 unmapped: 52256768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea941000/0x0/0x4ffc00000, data 0x14be21e/0x164d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:16.990623+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.840253830s of 23.029989243s, submitted: 15
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332267520 unmapped: 52256768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf8a48f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:17.990789+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402433 data_alloc: 218103808 data_used: 7700480
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332267520 unmapped: 52256768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:18.990995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332275712 unmapped: 52248576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:19.991204+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332275712 unmapped: 52248576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:20.991411+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328138752 unmapped: 56385536 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:21.991561+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb287000/0x0/0x4ffc00000, data 0xaae1ac/0xc3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf990ad20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:22.993024+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273465 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:23.993304+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:24.993482+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:25.993658+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:26.993816+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:27.994012+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273465 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:28.994215+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:29.994362+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:30.994527+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328155136 unmapped: 56369152 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:31.994736+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:32.994897+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273465 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:33.995062+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:34.995659+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:35.996310+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:36.996706+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:37.997019+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273465 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328163328 unmapped: 56360960 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:38.997285+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:39.997450+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:40.998391+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:41.998843+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:42.999200+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273465 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:43.999450+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328171520 unmapped: 56352768 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:44.999672+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328179712 unmapped: 56344576 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:45.999894+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.722011566s of 28.929365158s, submitted: 45
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [0,0,0,0,0,6])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331333632 unmapped: 53190656 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf7e2b860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:47.000014+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328187904 unmapped: 56336384 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:48.000175+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311903 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328187904 unmapped: 56336384 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:49.000447+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328187904 unmapped: 56336384 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:50.000664+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328187904 unmapped: 56336384 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:51.000923+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328196096 unmapped: 56328192 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf09000/0x0/0x4ffc00000, data 0xef9189/0x1085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:52.001257+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328196096 unmapped: 56328192 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:53.001441+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311903 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328196096 unmapped: 56328192 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:54.001617+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf7cbba40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 56451072 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:55.067430+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 56451072 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:56.067584+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafea70000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 56442880 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:57.067715+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf09000/0x0/0x4ffc00000, data 0xef9189/0x1085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 56442880 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:58.068030+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3343228 data_alloc: 218103808 data_used: 7622656
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf09000/0x0/0x4ffc00000, data 0xef9189/0x1085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 56442880 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:59.068250+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.418829918s of 12.768917084s, submitted: 19
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 56442880 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:00.068385+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 56442880 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:01.068603+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 56434688 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:02.068769+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf09000/0x0/0x4ffc00000, data 0xef9189/0x1085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 56434688 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:03.068990+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3343228 data_alloc: 218103808 data_used: 7622656
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 56434688 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:04.069169+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 56434688 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:05.069306+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 56426496 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:06.069459+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 56426496 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:07.069608+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaaf9000/0x0/0x4ffc00000, data 0xef9189/0x1085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 56418304 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:08.069756+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371810 data_alloc: 218103808 data_used: 7634944
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330268672 unmapped: 54255616 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.070018+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.049137115s of 10.020059586s, submitted: 95
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 54714368 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.070176+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 54706176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.070303+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329859072 unmapped: 54665216 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea2ca000/0x0/0x4ffc00000, data 0x1728189/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,5,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.070496+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329867264 unmapped: 54657024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.070646+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413802 data_alloc: 218103808 data_used: 7663616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.070821+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.070991+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.071207+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea287000/0x0/0x4ffc00000, data 0x176b189/0x18f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.071326+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.071485+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412530 data_alloc: 218103808 data_used: 7663616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 54509568 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.071702+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 54509568 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.071907+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 54501376 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.072166+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea265000/0x0/0x4ffc00000, data 0x178d189/0x1919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 54501376 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf7e043c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7e050e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a49a40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf779f4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.072314+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.480791092s of 12.996686935s, submitted: 74
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 52592640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.072480+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55dafa7bc000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf7e043c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7cbba40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476697 data_alloc: 218103808 data_used: 7663616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990ad20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf96b52c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.072668+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.072825+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.073026+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0f000/0x0/0x4ffc00000, data 0x1de11fb/0x1f6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.073211+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.073385+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476785 data_alloc: 218103808 data_used: 7667712
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.073590+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.073875+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.074055+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf8b16f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.074226+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c7d000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.074370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476813 data_alloc: 218103808 data_used: 7671808
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.074550+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.074686+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.074821+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.074979+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.075117+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513933 data_alloc: 234881024 data_used: 12787712
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.075305+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.075492+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.075648+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.024848938s of 19.916954041s, submitted: 48
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.075776+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.075977+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513933 data_alloc: 234881024 data_used: 12787712
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.076224+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332677120 unmapped: 51847168 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.076383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335388672 unmapped: 49135616 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9762000/0x0/0x4ffc00000, data 0x228e1fb/0x241c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.076533+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958a000/0x0/0x4ffc00000, data 0x245e1fb/0x25ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 49070080 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.076673+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 49520640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.076825+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3586847 data_alloc: 234881024 data_used: 14303232
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x24f51fb/0x2683000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.077010+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.077148+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.077333+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.077498+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.396517754s of 10.509934425s, submitted: 128
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.077906+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584023 data_alloc: 234881024 data_used: 14307328
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.078077+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c7d000 session 0x55dafb3c43c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e94d9000/0x0/0x4ffc00000, data 0x25171fb/0x26a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 49496064 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.078267+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.078433+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafa76a3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.078593+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.078727+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427669 data_alloc: 218103808 data_used: 7663616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.078980+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.079148+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f19000/0x0/0x4ffc00000, data 0x179c189/0x1928000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.079360+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.079552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafea70000 session 0x55daf96b4780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.475329399s of 10.017469406s, submitted: 31
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafa80c960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.079758+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427009 data_alloc: 218103808 data_used: 7663616
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330850304 unmapped: 53673984 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.079908+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330866688 unmapped: 53657600 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.080096+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990a5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.080245+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.080416+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.080697+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.081017+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.081192+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.081377+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.081551+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.081717+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.081883+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.082049+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.082236+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 53641216 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.082421+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.082585+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.082752+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.082864+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.083041+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.083171+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.083325+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.083492+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.083635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.083794+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.084003+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.084174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.084362+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.084528+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.084701+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.084879+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.085046+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330907648 unmapped: 53616640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf96b5680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98ef2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf778e3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8adeb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.468921661s of 31.110679626s, submitted: 27
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.085203+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344358912 unmapped: 48037888 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea245000/0x0/0x4ffc00000, data 0x17ad189/0x1939000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf7c823c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.085374+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.085552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.085803+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.085993+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403217 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.086206+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea184000/0x0/0x4ffc00000, data 0x186e189/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.086385+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafea70000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafea70000 session 0x55daf7c7f2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.086525+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 62865408 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.086671+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 62865408 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.086810+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329555968 unmapped: 62840832 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504982 data_alloc: 234881024 data_used: 17461248
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.087578+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.087833+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.087999+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.088145+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.088411+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504982 data_alloc: 234881024 data_used: 17461248
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.088628+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.089086+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.089401+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.090147+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 61718528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.090542+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 61718528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505142 data_alloc: 234881024 data_used: 17465344
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.601863861s of 20.106889725s, submitted: 18
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.090694+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333529088 unmapped: 58867712 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9d3e000/0x0/0x4ffc00000, data 0x1cb31ac/0x1e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,2])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.090850+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332652544 unmapped: 59744256 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.091014+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.091311+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.091647+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567376 data_alloc: 234881024 data_used: 17616896
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.091831+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ae4000/0x0/0x4ffc00000, data 0x1f051ac/0x2092000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.092198+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.092526+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.092831+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.093042+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562076 data_alloc: 234881024 data_used: 17616896
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.093190+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9acd000/0x0/0x4ffc00000, data 0x1f241ac/0x20b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.093420+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.093552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.093682+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.093850+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9acd000/0x0/0x4ffc00000, data 0x1f241ac/0x20b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.403264999s of 14.527244568s, submitted: 69
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562148 data_alloc: 234881024 data_used: 17616896
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.094087+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.094241+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.094413+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac3000/0x0/0x4ffc00000, data 0x1f2e1ac/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.094575+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.094756+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562148 data_alloc: 234881024 data_used: 17616896
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.095076+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.095256+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac0000/0x0/0x4ffc00000, data 0x1f311ac/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.095413+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.095579+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.095717+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562224 data_alloc: 234881024 data_used: 17616896
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.096015+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.096168+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.096356+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac0000/0x0/0x4ffc00000, data 0x1f311ac/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.096566+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333561856 unmapped: 58834944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c7e960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.096753+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.645236015s of 14.925461769s, submitted: 4
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 65200128 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafb59da40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae1ac/0xc3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.096990+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.097112+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.097278+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.097461+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.097671+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.097881+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.098054+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.098246+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.098467+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.098626+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.098973+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.099185+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.099354+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.099545+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.099703+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.099909+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.100080+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.100299+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.100528+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.100694+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.100888+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.101084+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.101319+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.101502+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.101633+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.980880737s of 25.078647614s, submitted: 23
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323682304 unmapped: 68714496 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391535 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.101791+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55dafad4d0e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.101923+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf96b5e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.102079+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.102241+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf96dcd20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c832c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.102383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990af00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3372917 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.102593+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7d7000/0x0/0x4ffc00000, data 0x121a1eb/0x13a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 67444736 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.102784+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf9696f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.103047+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.103369+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323911680 unmapped: 68485120 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.103508+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429431 data_alloc: 218103808 data_used: 10829824
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.103743+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.103909+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.104045+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.104208+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.104369+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429431 data_alloc: 218103808 data_used: 10829824
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.104585+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.104796+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.105130+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.105273+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.105419+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.919275284s of 20.437402725s, submitted: 33
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433179 data_alloc: 218103808 data_used: 10850304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.105599+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327008256 unmapped: 65388544 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.105756+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328482816 unmapped: 63913984 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.105887+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x1a3c1fa/0x1bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328491008 unmapped: 63905792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.106031+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 64561152 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.106914+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 64561152 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500709 data_alloc: 218103808 data_used: 11161600
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.107196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328916992 unmapped: 63479808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f47000/0x0/0x4ffc00000, data 0x1aa71fa/0x1c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,10])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.107322+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.107516+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.107697+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.107866+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.108078+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507283 data_alloc: 218103808 data_used: 10985472
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.108254+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f00000/0x0/0x4ffc00000, data 0x1aea1fa/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.108475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.159471512s of 12.475979805s, submitted: 113
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.108631+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ee1000/0x0/0x4ffc00000, data 0x1b0f1fa/0x1c9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.108721+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.108854+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505995 data_alloc: 218103808 data_used: 10989568
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42dc00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42dc00 session 0x55dafad4d4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafa76a1e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf990be00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa80c780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.109019+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 55296000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8aded20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55daf86a32c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.109177+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42dc00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42dc00 session 0x55daf8a01680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf86a32c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa80c780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.109309+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ef000/0x0/0x4ffc00000, data 0x26ff26c/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.109513+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.109717+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3603778 data_alloc: 218103808 data_used: 10989568
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.109858+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ed000/0x0/0x4ffc00000, data 0x270126c/0x2891000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.109999+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ed000/0x0/0x4ffc00000, data 0x270126c/0x2891000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf990be00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.110141+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafad4d4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ec000/0x0/0x4ffc00000, data 0x270226c/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf9696f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.512236595s of 11.812539101s, submitted: 40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.110314+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c832c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.110480+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605969 data_alloc: 218103808 data_used: 10989568
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.110626+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.110763+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.110955+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ea000/0x0/0x4ffc00000, data 0x270229f/0x2894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.111114+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.111320+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694929 data_alloc: 234881024 data_used: 23420928
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ea000/0x0/0x4ffc00000, data 0x270229f/0x2894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.111475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.111605+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.111741+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.111892+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.112045+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695357 data_alloc: 234881024 data_used: 23425024
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.112194+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.835003853s of 12.251962662s, submitted: 9
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 57614336 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92e1000/0x0/0x4ffc00000, data 0x270a29f/0x289c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.112343+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 57614336 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.112476+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 53452800 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.112629+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748f000/0x0/0x4ffc00000, data 0x33b529f/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.112848+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797179 data_alloc: 234881024 data_used: 23822336
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.113009+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748c000/0x0/0x4ffc00000, data 0x33b829f/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.113212+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.113370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.113514+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.113796+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797515 data_alloc: 234881024 data_used: 23822336
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf96b5e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf990ab40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.114079+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748b000/0x0/0x4ffc00000, data 0x33b829f/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.133233070s of 10.094684601s, submitted: 138
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.114343+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8980000/0x0/0x4ffc00000, data 0x1b2021d/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8980000/0x0/0x4ffc00000, data 0x1b2021d/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.114543+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55dafa7bc3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.114735+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.114969+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520924 data_alloc: 218103808 data_used: 10989568
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.115188+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d2f000/0x0/0x4ffc00000, data 0x1b201fa/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7bef4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf993de00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.115334+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326754304 unmapped: 65642496 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.115466+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9d7e000/0x0/0x4ffc00000, data 0xad2198/0xc5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7e2b2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.115598+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.115752+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.115926+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.116233+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.116606+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.116786+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.117053+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.117301+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.117588+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.117757+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.118008+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.118246+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.118403+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.118556+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.118751+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.119033+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.119288+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.119476+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.119639+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.119868+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.120113+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.120286+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.120520+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7caed20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf9670f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8959860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c7eb40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.895999908s of 30.124755859s, submitted: 36
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.120674+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae199/0xc3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 57589760 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.120834+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.121002+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8b16960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7bef680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf779e780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf891ba40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.121211+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf967f2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415822 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.121358+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.121450+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.121613+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.121804+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.122000+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415822 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.122153+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.895669937s of 10.373094559s, submitted: 24
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326762496 unmapped: 65634304 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.122280+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf98ba1e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.122442+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.122639+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.122798+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467698 data_alloc: 218103808 data_used: 10387456
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.122985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.123383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.123526+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.123725+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.123883+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467698 data_alloc: 218103808 data_used: 10387456
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.124083+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.124251+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.124381+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.124511+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.124670+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.315408707s of 13.315409660s, submitted: 0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513150 data_alloc: 218103808 data_used: 10457088
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333021184 unmapped: 59375616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.124814+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57262080 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.125007+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 57933824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.125151+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1d17199/0x1ea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 57925632 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.125337+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 57917440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.125510+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554614 data_alloc: 218103808 data_used: 11251712
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 57917440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.125714+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b19000/0x0/0x4ffc00000, data 0x1d38199/0x1ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 57786368 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.126032+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a98000/0x0/0x4ffc00000, data 0x1db7199/0x1f44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.126173+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.126841+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a85000/0x0/0x4ffc00000, data 0x1dc4199/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.127592+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568152 data_alloc: 218103808 data_used: 11124736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.128672+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a85000/0x0/0x4ffc00000, data 0x1dc4199/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.129689+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.314251900s of 12.733714104s, submitted: 129
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.129827+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a68000/0x0/0x4ffc00000, data 0x1de9199/0x1f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.130331+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.130634+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563040 data_alloc: 218103808 data_used: 11124736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.130818+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.131100+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 340967424 unmapped: 51429376 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafa80c5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.131227+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 57180160 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8480000/0x0/0x4ffc00000, data 0x23d01c2/0x255e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,1])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafa80cd20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.131707+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.132082+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615673 data_alloc: 218103808 data_used: 11124736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.132333+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8480000/0x0/0x4ffc00000, data 0x23d01fb/0x255e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.132693+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.132839+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccbc00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccbc00 session 0x55daf96b4960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafad4c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.133003+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf891b0e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.596834183s of 11.591576576s, submitted: 39
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf7c825a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.133141+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 55009280 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3618680 data_alloc: 218103808 data_used: 11124736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.133300+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 55009280 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa838c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.133420+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 54050816 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.133555+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.133734+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.133987+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662680 data_alloc: 234881024 data_used: 17227776
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.134174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.134311+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.134418+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.134553+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72d4000/0x0/0x4ffc00000, data 0x23da22e/0x256a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.134731+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663124 data_alloc: 234881024 data_used: 17231872
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.135022+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.135222+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.848647118s of 12.892425537s, submitted: 9
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.135348+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 47800320 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.135479+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69ff000/0x0/0x4ffc00000, data 0x2ca822e/0x2e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 47718400 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.135686+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755878 data_alloc: 234881024 data_used: 19189760
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.135825+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.135994+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.136120+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69bf000/0x0/0x4ffc00000, data 0x2ce722e/0x2e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69c6000/0x0/0x4ffc00000, data 0x2ce722e/0x2e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.136270+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 47677440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.136477+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 47677440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa838c00 session 0x55dafa76b4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55daf778f2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749690 data_alloc: 234881024 data_used: 19193856
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.136595+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7caf4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.136731+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7684000/0x0/0x4ffc00000, data 0x1dfa199/0x1f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.136855+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.137055+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55dafb59c780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.137208+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf8a00d20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.477106094s of 13.116346359s, submitted: 170
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370065 data_alloc: 218103808 data_used: 3444736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.137366+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990ad20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.137500+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.137633+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.137777+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.137911+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.138099+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.138314+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.138440+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.138587+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.138781+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.139035+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-mon[74295]: from='client.23213 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: pgmap v3490: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:38 compute-0 ceph-mon[74295]: from='client.23217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1971043547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3380019794' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3675653749' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.139167+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.139280+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.139437+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.139648+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.139834+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.139984+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.140167+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.140361+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.140532+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.140705+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.140875+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.141016+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.141172+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.141326+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.141486+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.141712+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.141906+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.142097+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.142299+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.142475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.142630+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.142788+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.142966+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 55246848 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.144841+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 55246848 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf98d65a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778fa40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a48780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf98bbe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.142990112s of 35.221427917s, submitted: 16
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365573 data_alloc: 218103808 data_used: 3334144
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.145499+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 50593792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55dafad4de00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa838c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa838c00 session 0x55dafb59d680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98f92c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf7cbbe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf96710e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.145709+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.145869+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.146046+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.146276+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf8adf4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444444 data_alloc: 218103808 data_used: 3338240
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7139c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7139c00 session 0x55daf8a48960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.146420+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7e2b680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa7bd2c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.146556+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 54247424 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.146681+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 54247424 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.146999+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.147231+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509059 data_alloc: 218103808 data_used: 11505664
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.147424+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82e8000/0x0/0x4ffc00000, data 0x13c622e/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.147554+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.147680+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.147798+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.147993+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509059 data_alloc: 218103808 data_used: 11505664
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.148181+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.148348+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82e8000/0x0/0x4ffc00000, data 0x13c622e/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.148504+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.148669+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.093605042s of 18.734363556s, submitted: 50
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.148798+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 49872896 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610453 data_alloc: 234881024 data_used: 13078528
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.149052+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 49856512 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.149181+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 49717248 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7783000/0x0/0x4ffc00000, data 0x1f2422e/0x20b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.149330+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 49627136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.149514+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 49627136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.149674+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615591 data_alloc: 234881024 data_used: 12984320
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.149977+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.150281+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.150669+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.150974+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.151285+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615591 data_alloc: 234881024 data_used: 12984320
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.009923935s of 11.624964714s, submitted: 145
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.151472+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.151625+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.151884+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.152041+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.152265+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609847 data_alloc: 234881024 data_used: 12988416
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.152638+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.152871+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.153073+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.153299+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.153511+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609847 data_alloc: 234881024 data_used: 12988416
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.153754+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.154237+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.154420+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.786358833s of 12.150555611s, submitted: 2
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.154566+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.154725+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa841400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa841400 session 0x55dafa76a1e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa724400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613387 data_alloc: 234881024 data_used: 12976128
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.154912+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa724400 session 0x55daf7e050e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa83c000 session 0x55daf778e1e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55daf9cff400 session 0x55dafa7bd4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.155072+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa83c000 session 0x55dafb3c43c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 38264832 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e7722000/0x0/0x4ffc00000, data 0x1f89dab/0x211b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.155218+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 38264832 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778ef00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.155382+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 38223872 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.155559+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55daf86a2000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 44589056 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa724400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa724400 session 0x55daf98efe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713958 data_alloc: 234881024 data_used: 24891392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.155705+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55dafad4c960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55dafb59da40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 44572672 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.155983+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 44572672 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.156213+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e7009000/0x0/0x4ffc00000, data 0x26a0515/0x2834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 44564480 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.156409+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.156616+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e7009000/0x0/0x4ffc00000, data 0x26a0515/0x2834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713958 data_alloc: 234881024 data_used: 24891392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.156765+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9cff400 session 0x55daf779ef00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa83c000 session 0x55daf96d52c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa841400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa841400 session 0x55dafa7bdc20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8ade5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.859335899s of 13.802700996s, submitted: 67
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.156901+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9cff400 session 0x55dafa76a960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55dafb3c5c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa83c000 session 0x55dafa7bc000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c9a000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c9a000 session 0x55daf8b17c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55daf96b5860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.157147+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.157366+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.157608+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 44523520 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3719076 data_alloc: 234881024 data_used: 24899584
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.157778+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 44523520 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.158241+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9c78400 session 0x55daf9671a40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.158370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4cd20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.158515+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55dafa83c000 session 0x55daf779e5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c7cc00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9c7cc00 session 0x55daf89585a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.158830+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721416 data_alloc: 234881024 data_used: 24911872
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.159005+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.159136+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 43425792 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.159304+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 43417600 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.159441+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 43417600 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.159587+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750536 data_alloc: 234881024 data_used: 28971008
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.159721+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.159860+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.160016+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.160157+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.528598785s of 17.588058472s, submitted: 18
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.160347+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3751224 data_alloc: 234881024 data_used: 28958720
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.160544+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.160834+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 40017920 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.160985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.161108+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.161990+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.162168+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.162319+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.162504+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.162675+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.163030+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.163152+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.163282+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.163438+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.163635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.163791+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.164021+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.164167+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.164310+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.164465+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.164598+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.164785+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.164956+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.165106+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.165248+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.165374+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.165682+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.165826+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355876864 unmapped: 39952384 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.165989+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.166132+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.166269+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9cff400 session 0x55daf98d6d20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.166396+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811982 data_alloc: 234881024 data_used: 29810688
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.166565+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55dafa83c000 session 0x55daf86a3860
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.498996735s of 32.592540741s, submitted: 9
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55dafa839c00 session 0x55dafb59da40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55daf9d05800 session 0x55daf7c82960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa831c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 368164864 unmapped: 31866880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55dafa831c00 session 0x55dafa7bc3c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.166713+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 290 ms_handle_reset con 0x55daf9cff400 session 0x55daf779ef00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362274816 unmapped: 37756928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.167109+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 290 ms_handle_reset con 0x55daf9d05800 session 0x55daf96b5e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362291200 unmapped: 37740544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.167270+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55dafa839c00 session 0x55daf96b4780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55dafa83c000 session 0x55daf96b5a40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55db01f0f000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55db01f0f000 session 0x55dafa76be00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362201088 unmapped: 37830656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.167437+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3986275 data_alloc: 251658240 data_used: 38064128
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e5890000/0x0/0x4ffc00000, data 0x3e0f27e/0x3faa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4da40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 40017920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.167573+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 40017920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7cba000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9c78400 session 0x55dafb3c5c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.167690+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9d05800 session 0x55dafa76a960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6891000/0x0/0x4ffc00000, data 0x2e10e5b/0x2fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.167831+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffb000/0x0/0x4ffc00000, data 0x26a8e4c/0x2843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.167996+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffb000/0x0/0x4ffc00000, data 0x26a8e4c/0x2843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.168126+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782951 data_alloc: 251658240 data_used: 32612352
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.168316+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.342339516s of 10.014770508s, submitted: 57
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55dafa839c00 session 0x55daf96b43c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 39944192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.168439+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 39944192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8adfa40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55dafa712000 session 0x55daf891bc20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.168649+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e7750000/0x0/0x4ffc00000, data 0x1f51a39/0x20ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 294 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778ed20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.168852+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.169087+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429469 data_alloc: 218103808 data_used: 3379200
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.169245+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.169412+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8ac4000/0x0/0x4ffc00000, data 0xabf44b/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.169585+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8ac4000/0x0/0x4ffc00000, data 0xabf44b/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.169779+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.170053+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429789 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.170244+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431297302s of 10.757673264s, submitted: 93
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.170403+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.170545+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.170744+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.171009+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.171159+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.171318+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.171469+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.171704+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.171878+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.172024+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.172156+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.172338+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.172496+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.173044+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.173197+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.173376+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.173502+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.173769+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.173893+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.173995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.174157+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.174438+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2832 writes, 12K keys, 2832 commit groups, 1.0 writes per commit group, ingest: 13.48 MB, 0.02 MB/s
                                           Interval WAL: 2832 writes, 1026 syncs, 2.76 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.174608+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.174877+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _finish_auth 0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.175697+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.175090+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.175314+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.175502+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.175683+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.175858+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc ms_handle_reset ms_handle_reset con 0x55daf9d24400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: get_auth_request con 0x55dafa839c00 auth_method 0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.176129+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.176891+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.177033+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa831800 session 0x55daf98d61e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.177249+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.177418+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.177585+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.177785+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.178092+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.178388+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf8adf4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf7cbbe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf98f92c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55dafb59d680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.825901031s of 37.835342407s, submitted: 11
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.178537+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533537 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf98bbe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8a00d20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa712000 session 0x55daf98bbe00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa712000 session 0x55dafb59d680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98f92c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.178676+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf8adf4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf98d61e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.179034+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8b172c0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf778ed20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.179280+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 50839552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.179419+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.179628+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582763 data_alloc: 218103808 data_used: 12410880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e81ca000/0x0/0x4ffc00000, data 0x14d8ebe/0x1674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.179792+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55daf891bc20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf7cbab40
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.179992+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.180137+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.180491+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.180667+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.180833+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.181121+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.181383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.181735+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.182161+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.184296+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.184504+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.185415+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.185653+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.186161+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.186641+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.187048+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.187407+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.187552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.187702+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.187878+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.188053+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.188418+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.188709+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.189042+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.189266+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.189436+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.189581+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.189823+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.190084+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.190320+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.190568+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.190850+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.191093+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.191302+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.191486+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.191679+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.191874+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.192009+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.192260+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.192410+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.192647+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.192784+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.193296+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343302144 unmapped: 56729600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafd972800 session 0x55daf8adf0e0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.193457+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343302144 unmapped: 56729600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.193654+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.193851+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.194048+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.194212+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.194399+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.194572+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.194768+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.195358+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.195606+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.796035767s of 59.319786072s, submitted: 35
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.195809+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.196016+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343367680 unmapped: 56664064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.196240+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.196382+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 296 ms_handle_reset con 0x55dafa712000 session 0x55daf96dd680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.196539+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.196673+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441832 data_alloc: 218103808 data_used: 3395584
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.196881+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.197059+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.197246+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.197451+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.197582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441832 data_alloc: 218103808 data_used: 3395584
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.197758+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.198072+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.198382+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.198510+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.198680+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.962801933s of 16.419515610s, submitted: 120
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444806 data_alloc: 218103808 data_used: 3395584
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.198822+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.198989+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.199129+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.199391+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.199538+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.199731+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.199887+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.200094+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.200254+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.200460+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.200663+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.200882+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.201047+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.201244+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.201409+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.201606+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.201740+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.201983+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.202357+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.202536+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.202734+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.203544+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.203701+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.204096+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.204288+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.204479+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.204686+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.204896+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.205183+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.205444+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.205682+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.205921+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.206166+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.206415+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.206548+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.206747+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.206999+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.207178+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.207352+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.207507+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.207668+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.207818+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.207966+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.208134+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.208273+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.208402+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.208506+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.208637+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.208801+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.208941+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.209078+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.209201+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 56549376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.209350+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.209558+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.209711+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.209835+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.210008+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.210159+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.210389+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.210650+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.210858+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.211045+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.211222+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.211475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.211664+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.212063+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.212231+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.212441+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.212573+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.212793+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.212986+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.213251+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.213538+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.213723+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.213995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.214196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.214343+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.214641+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.214801+0000)
Oct 07 15:17:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.215214+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.215694+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.216038+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.216272+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.216552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.216923+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.217294+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.217464+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.217764+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.218002+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.218219+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.218391+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.218621+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.218815+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.219087+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.219214+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.219359+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.219550+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.219774+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.220038+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.220234+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.220400+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.220622+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 ms_handle_reset con 0x55dafa83c000 session 0x55daf8b16960
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.220778+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.220985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.221161+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.221390+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456326 data_alloc: 218103808 data_used: 8052736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.221572+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.221765+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.222049+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 55222272 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.222237+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 55222272 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.222398+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456326 data_alloc: 218103808 data_used: 8052736
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.946731567s of 110.961837769s, submitted: 13
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 298 ms_handle_reset con 0x55dafa83c000 session 0x55daf8a01680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.222661+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.222885+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c3a000/0x0/0x4ffc00000, data 0x6560a3/0x7f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.223108+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.223265+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344850432 unmapped: 55181312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 299 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98c4f00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.223398+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377681 data_alloc: 218103808 data_used: 1114112
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344850432 unmapped: 55181312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.223587+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.223720+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.223856+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.224020+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.224168+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381855 data_alloc: 218103808 data_used: 1122304
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.224401+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.224582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.224779+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.225007+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 55156736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.225177+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382175 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 55156736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.401482582s of 15.571696281s, submitted: 51
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.225370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 55140352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.225513+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 301 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4d680
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 55140352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.225712+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e90a2000/0x0/0x4ffc00000, data 0x1eb234/0x38b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 301 handle_osd_map epochs [302,302], i have 302, src has [1,302]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.225884+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.226057+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.226294+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.226483+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.226723+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.227207+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.228032+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.228794+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.229257+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.229808+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.230004+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.230435+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.230717+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.231024+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.231534+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.231708+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.232163+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.232338+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.232562+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.232790+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 55083008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.233011+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 55083008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.233204+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 55074816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.233545+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 55074816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.233734+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.234021+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.234219+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.234471+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.234737+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.235072+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.235336+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.235506+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.235781+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.236028+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.236206+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.236475+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.236677+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344997888 unmapped: 55033856 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.236908+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.237154+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.237314+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.237516+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.237782+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.238015+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.238156+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.238402+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.239245+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.239387+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.239536+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.239796+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.240025+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.240167+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.240375+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.240501+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.240639+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.240761+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.240984+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.241114+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.241255+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.241423+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.241557+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.241676+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.241836+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.242044+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.242249+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.242428+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.242660+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.242821+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.243022+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.243191+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.243349+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345071616 unmapped: 54960128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.243589+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.243811+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.244192+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.244427+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.244661+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.244885+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.245110+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.245307+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345088000 unmapped: 54943744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.245553+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 54935552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.245804+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 54935552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.246020+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.246184+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.246351+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.246544+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.246835+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.247172+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.247491+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.247712+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.247905+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.248188+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.248413+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.248633+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.248857+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.249026+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.249185+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.249367+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.249570+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.249713+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.249917+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.250244+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 54902784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.250435+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345137152 unmapped: 54894592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.250579+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345137152 unmapped: 54894592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.250804+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 54886400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.251028+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 54878208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 105.908264160s of 105.934684753s, submitted: 15
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.251178+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 46489600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 303 ms_handle_reset con 0x55daf9d27c00 session 0x55dafa76b4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.251359+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 54886400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.251546+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 54870016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 ms_handle_reset con 0x55dafa712000 session 0x55daf7cae000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c2a000/0x0/0x4ffc00000, data 0x165e961/0x1803000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.251735+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345169920 unmapped: 54861824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.251914+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.252383+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.252599+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.252746+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.253017+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.253173+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.253331+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.253547+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.253751+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.253989+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.254191+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.254368+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.254598+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.254767+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.254923+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.255167+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.255350+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.255551+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.255722+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.255894+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.256057+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.256252+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.256430+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.256619+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.256802+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 54820864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.257057+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 54812672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.257211+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 54812672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.637914658s of 30.856452942s, submitted: 28
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 ms_handle_reset con 0x55dafa712000 session 0x55daf8ade780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.257410+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.257621+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.257795+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545753 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.258005+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.258152+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.258328+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.258592+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.258785+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545753 data_alloc: 218103808 data_used: 1142784
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.258979+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.259201+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 305 handle_osd_map epochs [306,306], i have 306, src has [1,306]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.259428+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.259624+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.259783+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.260006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.260179+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.260344+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.260534+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.260772+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.260991+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.261177+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.261436+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 54714368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.261642+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.261798+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.262056+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.262174+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.262352+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.262513+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.262648+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.262810+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.262995+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 54697984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.263216+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 54697984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.263397+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 54689792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.263600+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 54689792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.263833+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.264039+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.264290+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.264507+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.264759+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.265033+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.265279+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.265625+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.265843+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.266613+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.266875+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.267099+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.267332+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.267569+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.267750+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.268047+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.268352+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.268582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.268808+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.269032+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.269188+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.269360+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.269483+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.269636+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.269805+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.269975+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.270097+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.270292+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.270446+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.270646+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.270883+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.271173+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.271662+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.271981+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.272596+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.272805+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 54616064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.273037+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.273286+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.273483+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.273794+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.274010+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.274170+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.274490+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.274787+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.275090+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.275297+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.275556+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.276145+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.276296+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.276520+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.276724+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.277035+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.277325+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.277617+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.277817+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.278078+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.278264+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.278549+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.278750+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.279058+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 54575104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.279274+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 54566912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.279518+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345473024 unmapped: 54558720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.279782+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.280048+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.280268+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.280429+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.280596+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.280825+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.281027+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.281199+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.281371+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.281570+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.281721+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.281859+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.282038+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.282229+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.282378+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.282592+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.282770+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.282887+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.283002+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.283101+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.283263+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.283393+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.283541+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 54517760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.283737+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.283962+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.284195+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.284362+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.284517+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.284728+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.285087+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.285312+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 54484992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.285626+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.285866+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.286041+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.286173+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.286380+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.286571+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.286766+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.287006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.287169+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.287345+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.287559+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.287776+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.288033+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.288209+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.288412+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.288634+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.288874+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.289010+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.289217+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.289364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.289520+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.289690+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.289842+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.289996+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 54444032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.290171+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 54444032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.290369+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.290532+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.290681+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.290818+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.291026+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.291180+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.291362+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.291516+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.291639+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.291831+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.292810+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.294241+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.294461+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.295375+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.296097+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.296755+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.297437+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.297851+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.298570+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.298816+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.299056+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.299462+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.299709+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.300024+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.300215+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.300393+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.300725+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.300976+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.301221+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.301461+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.301641+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.301770+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.302048+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.302363+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.302521+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.302781+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.303011+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.303240+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 54353920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.303539+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54345728 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.303738+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54345728 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.303985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.304135+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.304515+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.304686+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.305054+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.305225+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.305371+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.305515+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.305655+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.305986+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.306114+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.306300+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.306450+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.306594+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.306729+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.306869+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.307021+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.307186+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.307370+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.307609+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.307768+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.307920+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.308062+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.308186+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.308392+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.308567+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.308811+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.309109+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.309366+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.309635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.309832+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.309968+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.310155+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.310385+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.310560+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.310745+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.310984+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.311351+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.311635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.311978+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.312179+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.312386+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.312579+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.312853+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.313103+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.313318+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.313542+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.313776+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.314068+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.314328+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.314527+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.314712+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.314865+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.315020+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 515 writes, 1459 keys, 515 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s
                                           Interval WAL: 515 writes, 229 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.315209+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.315424+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.315632+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.315800+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.316043+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.316221+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.316422+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.316635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.316911+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.317179+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.317382+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.317549+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 54165504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.317760+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 54165504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.317978+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.318100+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.318250+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.318413+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.318551+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.318692+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.318871+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.318988+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.319096+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.319265+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.319447+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.319635+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.319831+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.320027+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.320196+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.320353+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.320478+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.320682+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.320959+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.321159+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.321324+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.321466+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.321646+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.321841+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.322000+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.322197+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.322359+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.322582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.322734+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.322970+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.323154+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.323356+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.323586+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.323785+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.323978+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.324144+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.324306+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.324463+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.324597+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.324810+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.324961+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.325152+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.325335+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.325524+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.325692+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.325884+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.326053+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.326266+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.326791+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.326926+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.327145+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.327287+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.327562+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.327717+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.328001+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.328238+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.328405+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.328576+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.328770+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.328909+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.329099+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.329317+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.329518+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 320.363372803s of 320.457611084s, submitted: 24
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 53985280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.329651+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 53977088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.329804+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 53960704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.331364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.331825+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.332275+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.332753+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.333679+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.333888+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.334581+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.334771+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.334991+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.335395+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.335592+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.335754+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.335962+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.336112+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.336501+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.336755+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.337029+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.337563+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.337876+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.338259+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.338582+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.338793+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.339151+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.339444+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.339714+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.340009+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.340170+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.340446+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.340669+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.340882+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.341084+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.341308+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.341478+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.341719+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 53911552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.341918+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 53911552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.342165+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.342333+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.342500+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.342681+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.342863+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.343037+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.343166+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.343347+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.343571+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.343745+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.343920+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.344128+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.344402+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.344614+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.344755+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.345029+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 53886976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.345274+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 53886976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.345469+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.345612+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.345870+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.346059+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.346197+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.346360+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.346488+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346161152 unmapped: 53870592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.346642+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.346873+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.347055+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.347204+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.347374+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.347552+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 53854208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.347727+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 53854208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.347920+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.348290+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.348472+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.348639+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.348801+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.349010+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.349160+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.349305+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.349532+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.349679+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.349891+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.350153+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.350395+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.350883+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.351001+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.351219+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.351468+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346210304 unmapped: 53821440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.351665+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.351907+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.352121+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.352293+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.352488+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.352718+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.352908+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.353111+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.353273+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.353440+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.353602+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.353744+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 97.175842285s of 97.740692139s, submitted: 106
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.353910+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 307 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c83c20
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e9c5e000/0x0/0x4ffc00000, data 0x6655f5/0x80f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.354105+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444437 data_alloc: 218103808 data_used: 1155072
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.354304+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.354529+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 308 ms_handle_reset con 0x55daf9cff400 session 0x55daf7cae5a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.354735+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 53731328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.355182+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 53731328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.355380+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 309 heartbeat osd_stat(store_statfs(0x4ea0c9000/0x0/0x4ffc00000, data 0x1f8bef/0x3a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 53714944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419230 data_alloc: 218103808 data_used: 1155072
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.355561+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 53714944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.355756+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355753984 unmapped: 44277760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 309 heartbeat osd_stat(store_statfs(0x4ea0c9000/0x0/0x4ffc00000, data 0x1f8bef/0x3a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.356004+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 52666368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.452994347s of 10.191070557s, submitted: 75
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.356156+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 310 ms_handle_reset con 0x55daf9d27c00 session 0x55dafb59d4a0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 52658176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.356324+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476524 data_alloc: 218103808 data_used: 1163264
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.356572+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e98c8000/0x0/0x4ffc00000, data 0x9fa788/0xba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.356772+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.356905+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.357053+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.357217+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 52641792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.357366+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 52633600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.357514+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 52633600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.357695+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.357806+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.358057+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.358312+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.358528+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.358726+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.358873+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.359039+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.359202+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.359418+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.359557+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.359686+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.359907+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.360148+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.360457+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.360778+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.361186+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.361365+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.361762+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.362130+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.362460+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.362744+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.363004+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.363315+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.363563+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 52576256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.363725+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 52576256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.364200+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.364381+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.364517+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.364664+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.364877+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.365010+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.365170+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.365514+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.365756+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.365945+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.366144+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.366455+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.366673+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.367006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.367256+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.367415+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.367548+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.455036163s of 51.899494171s, submitted: 11
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.367799+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 51478528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.367989+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 49381376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.368177+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 49381376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 312 ms_handle_reset con 0x55dafa83c000 session 0x55dafb59c780
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8f23000/0x0/0x4ffc00000, data 0x1fddbc/0x3ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.368379+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 50634752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.368539+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 50634752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428176 data_alloc: 218103808 data_used: 1167360
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.368684+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.368888+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8f23000/0x0/0x4ffc00000, data 0x1fddbc/0x3ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.369054+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.369224+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.369379+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432350 data_alloc: 218103808 data_used: 1175552
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.369596+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.369774+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.369921+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.370088+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.370271+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.370477+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.370680+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.370857+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.371006+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.371136+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.371341+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.371541+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.371690+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.371872+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.372012+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.372183+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.372376+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.372577+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.372795+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.373041+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 50552832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.373242+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 50552832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.373439+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.373632+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.373877+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.374079+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.374288+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.374427+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.374550+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.374683+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.374821+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.374975+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.375114+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.375409+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.375537+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.375643+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.375813+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.376022+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.376201+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.376372+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.376550+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.376766+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.376888+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.377018+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.377191+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.377364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.377500+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.377644+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.377768+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.377904+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.377985+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.378175+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.378546+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.378723+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 50470912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.378996+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 50470912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.379170+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.379574+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.379801+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.380098+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.380236+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.380583+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.380757+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.380986+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.381137+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.381265+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.381423+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a01e00
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.381558+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.381693+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.381829+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.381983+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.382107+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:59.382253+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:00.382373+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:01.382510+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:02.382617+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:03.382799+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:17:38 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:17:38 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:04.382953+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 50413568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:05.383105+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}'
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 50372608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:17:38 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:06.383256+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:07.383364+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:17:38 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:08.383493+0000)
Oct 07 15:17:38 compute-0 ceph-osd[88039]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:17:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:17:39 compute-0 nova_compute[259550]: 2025-10-07 15:17:39.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 07 15:17:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358962184' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 07 15:17:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:39 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23229 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:39 compute-0 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 07 15:17:39 compute-0 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 07 15:17:39 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2358962184' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 07 15:17:39 compute-0 nova_compute[259550]: 2025-10-07 15:17:39.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:39 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 07 15:17:39 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987650987' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 07 15:17:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 07 15:17:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183547765' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 07 15:17:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 07 15:17:40 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287217841' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 07 15:17:40 compute-0 ceph-mon[74295]: pgmap v3491: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:40 compute-0 ceph-mon[74295]: from='client.23229 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1987650987' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 07 15:17:40 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2183547765' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 07 15:17:41 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 07 15:17:41 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327001089' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 07 15:17:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:41 compute-0 systemd[1]: Starting Hostname Service...
Oct 07 15:17:41 compute-0 systemd[1]: Started Hostname Service.
Oct 07 15:17:41 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23239 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1287217841' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 07 15:17:41 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2327001089' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 07 15:17:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 07 15:17:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774491848' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 07 15:17:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 07 15:17:42 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704925540' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 07 15:17:42 compute-0 ceph-mon[74295]: pgmap v3492: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:42 compute-0 ceph-mon[74295]: from='client.23239 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/774491848' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 07 15:17:42 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/704925540' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 07 15:17:43 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23245 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:43 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 07 15:17:43 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791365106' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 07 15:17:43 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23249 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:44 compute-0 ceph-mon[74295]: from='client.23245 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:44 compute-0 ceph-mon[74295]: pgmap v3493: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/791365106' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 07 15:17:44 compute-0 ceph-mon[74295]: from='client.23249 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:44 compute-0 nova_compute[259550]: 2025-10-07 15:17:44.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:44 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23251 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 07 15:17:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783967474' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 07 15:17:44 compute-0 nova_compute[259550]: 2025-10-07 15:17:44.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 07 15:17:45 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388038194' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mon[74295]: from='client.23251 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1783967474' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1388038194' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23257 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23259 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:45 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:17:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 07 15:17:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855558499' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 07 15:17:46 compute-0 ceph-mon[74295]: from='client.23257 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:46 compute-0 ceph-mon[74295]: pgmap v3494: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:46 compute-0 ceph-mon[74295]: from='client.23259 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:46 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1855558499' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 07 15:17:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 07 15:17:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876747163' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 07 15:17:47 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23265 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:47 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23267 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:47 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1876747163' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 07 15:17:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:47.589280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850267589325, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1684, "num_deletes": 253, "total_data_size": 2531973, "memory_usage": 2579536, "flush_reason": "Manual Compaction"}
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 07 15:17:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850267709194, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2473551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71153, "largest_seqno": 72836, "table_properties": {"data_size": 2465595, "index_size": 4705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17742, "raw_average_key_size": 20, "raw_value_size": 2449207, "raw_average_value_size": 2871, "num_data_blocks": 210, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850112, "oldest_key_time": 1759850112, "file_creation_time": 1759850267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 119950 microseconds, and 5928 cpu microseconds.
Oct 07 15:17:47 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:17:47 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 07 15:17:47 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925631231' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:17:47 compute-0 nova_compute[259550]: 2025-10-07 15:17:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:48 compute-0 nova_compute[259550]: 2025-10-07 15:17:48.049 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:17:48 compute-0 nova_compute[259550]: 2025-10-07 15:17:48.050 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:17:48 compute-0 nova_compute[259550]: 2025-10-07 15:17:48.050 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:17:48 compute-0 nova_compute[259550]: 2025-10-07 15:17:48.050 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:17:48 compute-0 nova_compute[259550]: 2025-10-07 15:17:48.050 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:17:48 compute-0 podman[455780]: 2025-10-07 15:17:48.086007158 +0000 UTC m=+0.072993652 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:47.709235) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2473551 bytes OK
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:47.709253) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:48.140451) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:48.140498) EVENT_LOG_v1 {"time_micros": 1759850268140488, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:48.140524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2524444, prev total WAL file size 2524444, number of live WAL files 2.
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:48.141674) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2415KB)], [170(9998KB)]
Oct 07 15:17:48 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850268141702, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12711618, "oldest_snapshot_seqno": -1}
Oct 07 15:17:48 compute-0 podman[455782]: 2025-10-07 15:17:48.161912054 +0000 UTC m=+0.140210678 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 07 15:17:48 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 07 15:17:48 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162353061' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:17:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530484889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.250 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8993 keys, 10993399 bytes, temperature: kUnknown
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850269252529, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 10993399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10935702, "index_size": 34096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 236474, "raw_average_key_size": 26, "raw_value_size": 10777621, "raw_average_value_size": 1198, "num_data_blocks": 1319, "num_entries": 8993, "num_filter_entries": 8993, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850268, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.409 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.410 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3505MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.411 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.411 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.252812) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 10993399 bytes
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.411372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 11.4 rd, 9.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.4) OK, records in: 9515, records dropped: 522 output_compression: NoCompression
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.411430) EVENT_LOG_v1 {"time_micros": 1759850269411395, "job": 106, "event": "compaction_finished", "compaction_time_micros": 1110904, "compaction_time_cpu_micros": 25872, "output_level": 6, "num_output_files": 1, "total_output_size": 10993399, "num_input_records": 9515, "num_output_records": 8993, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850269412063, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850269414817, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:48.141618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.414914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.414920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.414922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.414924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:17:49.414958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:17:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:49 compute-0 ceph-mon[74295]: from='client.23265 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:49 compute-0 ceph-mon[74295]: from='client.23267 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:17:49 compute-0 ceph-mon[74295]: pgmap v3495: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1925631231' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:17:49 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4162353061' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.478 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.478 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:17:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 07 15:17:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515466353' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:49 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23279 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:49 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:17:49 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1986439350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.962 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.968 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.986 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:17:49 compute-0 nova_compute[259550]: 2025-10-07 15:17:49.989 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:17:50 compute-0 ovs-appctl[456436]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:17:50 compute-0 ovs-appctl[456441]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:17:50 compute-0 ovs-appctl[456444]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:17:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 07 15:17:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599469052' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2530484889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: pgmap v3496: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1515466353' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: from='client.23279 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1986439350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2599469052' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 07 15:17:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584447075' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 07 15:17:50 compute-0 nova_compute[259550]: 2025-10-07 15:17:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:50 compute-0 nova_compute[259550]: 2025-10-07 15:17:50.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 15:17:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 07 15:17:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57796272' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 07 15:17:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124013413' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2584447075' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 07 15:17:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/57796272' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:52 compute-0 nova_compute[259550]: 2025-10-07 15:17:52.001 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:52 compute-0 nova_compute[259550]: 2025-10-07 15:17:52.002 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:17:52 compute-0 nova_compute[259550]: 2025-10-07 15:17:52.002 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:17:52 compute-0 nova_compute[259550]: 2025-10-07 15:17:52.027 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23289 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:17:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:17:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 07 15:17:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418443420' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 07 15:17:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800029676' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:53 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23295 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:53 compute-0 ceph-mon[74295]: pgmap v3497: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2124013413' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/418443420' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 nova_compute[259550]: 2025-10-07 15:17:54.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 07 15:17:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645808708' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 nova_compute[259550]: 2025-10-07 15:17:54.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:54 compute-0 ceph-mon[74295]: from='client.23289 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/800029676' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 ceph-mon[74295]: pgmap v3498: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:54 compute-0 ceph-mon[74295]: from='client.23295 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1645808708' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 07 15:17:54 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23299 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23301 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 07 15:17:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990679660' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 ceph-mon[74295]: from='client.23299 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2990679660' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 07 15:17:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378261222' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 07 15:17:55 compute-0 nova_compute[259550]: 2025-10-07 15:17:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23307 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23309 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:17:56 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:17:57 compute-0 ceph-mon[74295]: from='client.23301 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:57 compute-0 ceph-mon[74295]: pgmap v3499: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/378261222' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 07 15:17:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 07 15:17:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2592525971' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 07 15:17:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136960715' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:17:57 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23315 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: from='client.23307 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: from='client.23309 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2592525971' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: pgmap v3500: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/136960715' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: from='client.23315 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23317 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 07 15:17:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111349578' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 07 15:17:59 compute-0 nova_compute[259550]: 2025-10-07 15:17:59.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 07 15:17:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562630292' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 07 15:17:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:17:59 compute-0 nova_compute[259550]: 2025-10-07 15:17:59.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:17:59 compute-0 ceph-mon[74295]: from='client.23317 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:17:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/111349578' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 07 15:17:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2562630292' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 07 15:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:18:00.120 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:18:00.120 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:18:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:18:00.120 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:18:00 compute-0 ceph-mon[74295]: pgmap v3501: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:02 compute-0 podman[457944]: 2025-10-07 15:18:02.919481736 +0000 UTC m=+0.065727690 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 07 15:18:02 compute-0 ceph-mon[74295]: pgmap v3502: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:02 compute-0 podman[457930]: 2025-10-07 15:18:02.946110397 +0000 UTC m=+0.085940462 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 07 15:18:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:03 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 07 15:18:04 compute-0 nova_compute[259550]: 2025-10-07 15:18:04.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:04 compute-0 nova_compute[259550]: 2025-10-07 15:18:04.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:04 compute-0 ceph-mon[74295]: pgmap v3503: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:06 compute-0 systemd[1]: Starting Time & Date Service...
Oct 07 15:18:06 compute-0 systemd[1]: Started Time & Date Service.
Oct 07 15:18:06 compute-0 ceph-mon[74295]: pgmap v3504: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:08 compute-0 ceph-mon[74295]: pgmap v3505: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:09 compute-0 nova_compute[259550]: 2025-10-07 15:18:09.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:09 compute-0 nova_compute[259550]: 2025-10-07 15:18:09.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:10 compute-0 ceph-mon[74295]: pgmap v3506: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:12 compute-0 ceph-mon[74295]: pgmap v3507: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:14 compute-0 nova_compute[259550]: 2025-10-07 15:18:14.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:14 compute-0 ceph-mon[74295]: pgmap v3508: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:14 compute-0 nova_compute[259550]: 2025-10-07 15:18:14.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:16 compute-0 ceph-mon[74295]: pgmap v3509: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:17 compute-0 sudo[458396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:17 compute-0 sudo[458396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:17 compute-0 sudo[458396]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:17 compute-0 sudo[458421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:18:17 compute-0 sudo[458421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:17 compute-0 sudo[458421]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:17 compute-0 sudo[458446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:17 compute-0 sudo[458446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:17 compute-0 sudo[458446]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:17 compute-0 sudo[458471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:18:17 compute-0 sudo[458471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:18 compute-0 sudo[458471]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:18 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 54623f13-eae2-4bb8-a229-370a3d4174dc does not exist
Oct 07 15:18:18 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 6f5f65e7-3a90-4619-973c-ecf2bc603d64 does not exist
Oct 07 15:18:18 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev a5324b18-dc41-4264-9c7b-7e95082e7ba7 does not exist
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:18:18 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:18:18 compute-0 podman[458527]: 2025-10-07 15:18:18.380822377 +0000 UTC m=+0.064032645 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:18:18 compute-0 sudo[458547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:18 compute-0 sudo[458547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:18 compute-0 sudo[458547]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:18 compute-0 podman[458528]: 2025-10-07 15:18:18.413795914 +0000 UTC m=+0.089204118 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 07 15:18:18 compute-0 sudo[458594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:18:18 compute-0 sudo[458594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:18 compute-0 sudo[458594]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:18 compute-0 sudo[458619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:18 compute-0 sudo[458619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:18 compute-0 sudo[458619]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:18 compute-0 sudo[458644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:18:18 compute-0 sudo[458644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:18 compute-0 ceph-mon[74295]: pgmap v3510: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:18:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:18:18 compute-0 podman[458708]: 2025-10-07 15:18:18.964195251 +0000 UTC m=+0.049035441 container create f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:18.94245576 +0000 UTC m=+0.027295980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:19 compute-0 systemd[1]: Started libpod-conmon-f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1.scope.
Oct 07 15:18:19 compute-0 nova_compute[259550]: 2025-10-07 15:18:19.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:19.325597167 +0000 UTC m=+0.410437387 container init f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:19.333134005 +0000 UTC m=+0.417974235 container start f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:19.336873773 +0000 UTC m=+0.421713963 container attach f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 07 15:18:19 compute-0 systemd[1]: libpod-f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1.scope: Deactivated successfully.
Oct 07 15:18:19 compute-0 hardcore_mahavira[458725]: 167 167
Oct 07 15:18:19 compute-0 conmon[458725]: conmon f0e2ebed236421f4449c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1.scope/container/memory.events
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:19.341116475 +0000 UTC m=+0.425956665 container died f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 15:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c617e6d2b3a9337993c499c63eaa5189dfcd3ce65c2b5aaae5e6252983257e-merged.mount: Deactivated successfully.
Oct 07 15:18:19 compute-0 podman[458708]: 2025-10-07 15:18:19.386194881 +0000 UTC m=+0.471035091 container remove f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:18:19 compute-0 systemd[1]: libpod-conmon-f0e2ebed236421f4449cbd525d4a99d0d33afc6b2d36ed06cc30f5987af6e4f1.scope: Deactivated successfully.
Oct 07 15:18:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:19 compute-0 podman[458749]: 2025-10-07 15:18:19.558959205 +0000 UTC m=+0.032665790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:19 compute-0 podman[458749]: 2025-10-07 15:18:19.686165461 +0000 UTC m=+0.159872056 container create de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:18:19 compute-0 systemd[1]: Started libpod-conmon-de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961.scope.
Oct 07 15:18:19 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:19 compute-0 podman[458749]: 2025-10-07 15:18:19.783864811 +0000 UTC m=+0.257571366 container init de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:18:19 compute-0 podman[458749]: 2025-10-07 15:18:19.791726137 +0000 UTC m=+0.265432692 container start de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:18:19 compute-0 podman[458749]: 2025-10-07 15:18:19.795897497 +0000 UTC m=+0.269604102 container attach de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:18:19 compute-0 nova_compute[259550]: 2025-10-07 15:18:19.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:20 compute-0 ceph-mon[74295]: pgmap v3511: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:20 compute-0 suspicious_pike[458765]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:18:20 compute-0 suspicious_pike[458765]: --> relative data size: 1.0
Oct 07 15:18:20 compute-0 suspicious_pike[458765]: --> All data devices are unavailable
Oct 07 15:18:20 compute-0 systemd[1]: libpod-de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961.scope: Deactivated successfully.
Oct 07 15:18:20 compute-0 systemd[1]: libpod-de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961.scope: Consumed 1.024s CPU time.
Oct 07 15:18:20 compute-0 podman[458749]: 2025-10-07 15:18:20.866357603 +0000 UTC m=+1.340064158 container died de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a4fc83bef2567bd0da3aa7d3306f88ac82f8251cfc94455c2a56e8a608f3bf1-merged.mount: Deactivated successfully.
Oct 07 15:18:21 compute-0 podman[458749]: 2025-10-07 15:18:21.136591741 +0000 UTC m=+1.610298306 container remove de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pike, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 07 15:18:21 compute-0 systemd[1]: libpod-conmon-de131e64f818fe2fce6752d30303594f223950c56090619b2e843032d466d961.scope: Deactivated successfully.
Oct 07 15:18:21 compute-0 sudo[458644]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:21 compute-0 sudo[458808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:21 compute-0 sudo[458808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:21 compute-0 sudo[458808]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:21 compute-0 sudo[458833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:18:21 compute-0 sudo[458833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:21 compute-0 sudo[458833]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:21 compute-0 sudo[458858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:21 compute-0 sudo[458858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:21 compute-0 sudo[458858]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:21 compute-0 sudo[458883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:18:21 compute-0 sudo[458883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.735039321 +0000 UTC m=+0.037807555 container create 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:18:21 compute-0 systemd[1]: Started libpod-conmon-3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493.scope.
Oct 07 15:18:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.721168376 +0000 UTC m=+0.023936630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.818589569 +0000 UTC m=+0.121357833 container init 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.828218303 +0000 UTC m=+0.130986537 container start 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.83196057 +0000 UTC m=+0.134728814 container attach 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:18:21 compute-0 condescending_kilby[458962]: 167 167
Oct 07 15:18:21 compute-0 systemd[1]: libpod-3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493.scope: Deactivated successfully.
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.834479637 +0000 UTC m=+0.137247871 container died 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-942ad87f0c526d691dee90f9b56affaad66cde3814a9067bef66ce7ca6241943-merged.mount: Deactivated successfully.
Oct 07 15:18:21 compute-0 podman[458945]: 2025-10-07 15:18:21.886865275 +0000 UTC m=+0.189633549 container remove 3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 15:18:21 compute-0 systemd[1]: libpod-conmon-3b5f6940f42097910674c3ee49443fe19840e94ae81fc054950d6969c4cf5493.scope: Deactivated successfully.
Oct 07 15:18:21 compute-0 nova_compute[259550]: 2025-10-07 15:18:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.084266627 +0000 UTC m=+0.054210047 container create 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 15:18:22 compute-0 systemd[1]: Started libpod-conmon-85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99.scope.
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.057145294 +0000 UTC m=+0.027088794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:22 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada183aab9766ac0ebf70e8619fbf9fbe60cfcd5180f2c6ef7efd75e51e2c3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada183aab9766ac0ebf70e8619fbf9fbe60cfcd5180f2c6ef7efd75e51e2c3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada183aab9766ac0ebf70e8619fbf9fbe60cfcd5180f2c6ef7efd75e51e2c3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada183aab9766ac0ebf70e8619fbf9fbe60cfcd5180f2c6ef7efd75e51e2c3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.179459441 +0000 UTC m=+0.149402881 container init 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.187582854 +0000 UTC m=+0.157526274 container start 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.192055273 +0000 UTC m=+0.161998723 container attach 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:18:22 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:22 compute-0 ceph-mon[74295]: pgmap v3512: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:18:22
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', 'backups', 'vms']
Oct 07 15:18:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:18:22 compute-0 heuristic_black[459002]: {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     "0": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "devices": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "/dev/loop3"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             ],
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_name": "ceph_lv0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_size": "21470642176",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "name": "ceph_lv0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "tags": {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_name": "ceph",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.crush_device_class": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.encrypted": "0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_id": "0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.vdo": "0"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             },
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "vg_name": "ceph_vg0"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         }
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     ],
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     "1": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "devices": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "/dev/loop4"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             ],
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_name": "ceph_lv1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_size": "21470642176",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "name": "ceph_lv1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "tags": {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_name": "ceph",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.crush_device_class": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.encrypted": "0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_id": "1",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.vdo": "0"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             },
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "vg_name": "ceph_vg1"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         }
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     ],
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     "2": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "devices": [
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "/dev/loop5"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             ],
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_name": "ceph_lv2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_size": "21470642176",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "name": "ceph_lv2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "tags": {
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.cluster_name": "ceph",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.crush_device_class": "",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.encrypted": "0",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osd_id": "2",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:                 "ceph.vdo": "0"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             },
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "type": "block",
Oct 07 15:18:22 compute-0 heuristic_black[459002]:             "vg_name": "ceph_vg2"
Oct 07 15:18:22 compute-0 heuristic_black[459002]:         }
Oct 07 15:18:22 compute-0 heuristic_black[459002]:     ]
Oct 07 15:18:22 compute-0 heuristic_black[459002]: }
Oct 07 15:18:22 compute-0 systemd[1]: libpod-85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99.scope: Deactivated successfully.
Oct 07 15:18:22 compute-0 podman[458986]: 2025-10-07 15:18:22.989797625 +0000 UTC m=+0.959741075 container died 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 15:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bada183aab9766ac0ebf70e8619fbf9fbe60cfcd5180f2c6ef7efd75e51e2c3c-merged.mount: Deactivated successfully.
Oct 07 15:18:23 compute-0 podman[458986]: 2025-10-07 15:18:23.060570067 +0000 UTC m=+1.030513497 container remove 85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:18:23 compute-0 systemd[1]: libpod-conmon-85f9d8847f5831a21e94ceed889d214ee180920260b0314b4fb089485c1c8a99.scope: Deactivated successfully.
Oct 07 15:18:23 compute-0 sudo[458883]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:23 compute-0 sudo[459025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:23 compute-0 sudo[459025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:23 compute-0 sudo[459025]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:23 compute-0 sudo[459050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:18:23 compute-0 sudo[459050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:23 compute-0 sudo[459050]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:23 compute-0 sudo[459075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:23 compute-0 sudo[459075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:23 compute-0 sudo[459075]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:23 compute-0 sudo[459100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:18:23 compute-0 sudo[459100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:18:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.705557252 +0000 UTC m=+0.040668622 container create 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 15:18:23 compute-0 systemd[1]: Started libpod-conmon-97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759.scope.
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.688713748 +0000 UTC m=+0.023825148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.804059552 +0000 UTC m=+0.139170932 container init 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.816155421 +0000 UTC m=+0.151266821 container start 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 07 15:18:23 compute-0 clever_shamir[459180]: 167 167
Oct 07 15:18:23 compute-0 systemd[1]: libpod-97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759.scope: Deactivated successfully.
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.822804475 +0000 UTC m=+0.157915885 container attach 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.823706609 +0000 UTC m=+0.158817989 container died 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ee6e9793bb20c66de3b623058dde6d169ffef1a0e8391bf16a8e5716b4c16e-merged.mount: Deactivated successfully.
Oct 07 15:18:23 compute-0 podman[459164]: 2025-10-07 15:18:23.868944199 +0000 UTC m=+0.204055559 container remove 97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 15:18:23 compute-0 systemd[1]: libpod-conmon-97e400124c5ad9be0a0d996149a1e8f53ff1d1bb8a04c6e430f56e34b7faa759.scope: Deactivated successfully.
Oct 07 15:18:24 compute-0 podman[459202]: 2025-10-07 15:18:24.063541757 +0000 UTC m=+0.067573368 container create 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 07 15:18:24 compute-0 podman[459202]: 2025-10-07 15:18:24.025469146 +0000 UTC m=+0.029500777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:18:24 compute-0 nova_compute[259550]: 2025-10-07 15:18:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:24 compute-0 systemd[1]: Started libpod-conmon-776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0.scope.
Oct 07 15:18:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3eaab1e168d7229e956292615c489d619b75b48cbc5b4339f240d93e8f20b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3eaab1e168d7229e956292615c489d619b75b48cbc5b4339f240d93e8f20b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3eaab1e168d7229e956292615c489d619b75b48cbc5b4339f240d93e8f20b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3eaab1e168d7229e956292615c489d619b75b48cbc5b4339f240d93e8f20b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:18:24 compute-0 podman[459202]: 2025-10-07 15:18:24.23969076 +0000 UTC m=+0.243722421 container init 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:18:24 compute-0 podman[459202]: 2025-10-07 15:18:24.248130022 +0000 UTC m=+0.252161633 container start 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:18:24 compute-0 podman[459202]: 2025-10-07 15:18:24.289424089 +0000 UTC m=+0.293455700 container attach 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:18:24 compute-0 nova_compute[259550]: 2025-10-07 15:18:24.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:24 compute-0 ceph-mon[74295]: pgmap v3513: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:25 compute-0 priceless_einstein[459219]: {
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_id": 2,
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "type": "bluestore"
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     },
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_id": 1,
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "type": "bluestore"
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     },
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_id": 0,
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:         "type": "bluestore"
Oct 07 15:18:25 compute-0 priceless_einstein[459219]:     }
Oct 07 15:18:25 compute-0 priceless_einstein[459219]: }
Oct 07 15:18:25 compute-0 systemd[1]: libpod-776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0.scope: Deactivated successfully.
Oct 07 15:18:25 compute-0 systemd[1]: libpod-776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0.scope: Consumed 1.064s CPU time.
Oct 07 15:18:25 compute-0 conmon[459219]: conmon 776e52861e67ddcfbeea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0.scope/container/memory.events
Oct 07 15:18:25 compute-0 podman[459202]: 2025-10-07 15:18:25.314209723 +0000 UTC m=+1.318241374 container died 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:18:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:26 compute-0 nova_compute[259550]: 2025-10-07 15:18:26.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:26 compute-0 nova_compute[259550]: 2025-10-07 15:18:26.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:18:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:29 compute-0 nova_compute[259550]: 2025-10-07 15:18:29.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:29 compute-0 nova_compute[259550]: 2025-10-07 15:18:29.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:29 compute-0 nova_compute[259550]: 2025-10-07 15:18:29.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:30 compute-0 ceph-mon[74295]: pgmap v3514: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:30 compute-0 ceph-mon[74295]: pgmap v3515: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b3eaab1e168d7229e956292615c489d619b75b48cbc5b4339f240d93e8f20b-merged.mount: Deactivated successfully.
Oct 07 15:18:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:31 compute-0 nova_compute[259550]: 2025-10-07 15:18:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:18:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:18:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753894534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:18:33 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:18:33 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753894534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:18:33 compute-0 nova_compute[259550]: 2025-10-07 15:18:33.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:34 compute-0 nova_compute[259550]: 2025-10-07 15:18:34.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:34 compute-0 ceph-mon[74295]: pgmap v3516: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:34 compute-0 nova_compute[259550]: 2025-10-07 15:18:34.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:34 compute-0 nova_compute[259550]: 2025-10-07 15:18:34.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:35 compute-0 podman[459202]: 2025-10-07 15:18:35.858819913 +0000 UTC m=+11.862851524 container remove 776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:18:35 compute-0 sudo[459100]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:35 compute-0 podman[459266]: 2025-10-07 15:18:35.914476586 +0000 UTC m=+2.900784128 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:18:35 compute-0 podman[459265]: 2025-10-07 15:18:35.922145699 +0000 UTC m=+2.908035320 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 07 15:18:35 compute-0 systemd[1]: libpod-conmon-776e52861e67ddcfbeea994aeb66ad8b73a3c30653b5f6ad335808e38d1625b0.scope: Deactivated successfully.
Oct 07 15:18:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 07 15:18:36 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 07 15:18:36 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:18:36 compute-0 nova_compute[259550]: 2025-10-07 15:18:36.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:38 compute-0 ceph-mon[74295]: pgmap v3517: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:38 compute-0 ceph-mon[74295]: pgmap v3518: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1753894534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:18:38 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1753894534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:18:38 compute-0 ceph-mon[74295]: pgmap v3519: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:39 compute-0 nova_compute[259550]: 2025-10-07 15:18:39.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:39 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:39 compute-0 nova_compute[259550]: 2025-10-07 15:18:39.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:18:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:44 compute-0 nova_compute[259550]: 2025-10-07 15:18:44.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:44 compute-0 ceph-mon[74295]: pgmap v3520: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:44 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:44 compute-0 nova_compute[259550]: 2025-10-07 15:18:44.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:45 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3ddb8890-d034-456b-9314-926724d5a85f does not exist
Oct 07 15:18:45 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e9bc639e-ac78-419a-84b9-c0da64d17ca9 does not exist
Oct 07 15:18:45 compute-0 sudo[459309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:18:45 compute-0 sudo[459309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:45 compute-0 sudo[459309]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:45 compute-0 sudo[459334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:18:45 compute-0 sudo[459334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:18:45 compute-0 sudo[459334]: pam_unix(sudo:session): session closed for user root
Oct 07 15:18:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:47 compute-0 ceph-mon[74295]: pgmap v3521: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:47 compute-0 ceph-mon[74295]: pgmap v3522: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:47 compute-0 ceph-mon[74295]: pgmap v3523: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:49 compute-0 podman[459359]: 2025-10-07 15:18:49.085867228 +0000 UTC m=+0.071700587 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 07 15:18:49 compute-0 podman[459360]: 2025-10-07 15:18:49.121758033 +0000 UTC m=+0.107520820 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 07 15:18:49 compute-0 nova_compute[259550]: 2025-10-07 15:18:49.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:49 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:18:49 compute-0 ceph-mon[74295]: pgmap v3524: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:49 compute-0 ceph-mon[74295]: pgmap v3525: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:49 compute-0 nova_compute[259550]: 2025-10-07 15:18:49.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:49 compute-0 nova_compute[259550]: 2025-10-07 15:18:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.018 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.018 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:18:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:50 compute-0 ceph-mon[74295]: pgmap v3526: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:18:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339763650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:18:50 compute-0 nova_compute[259550]: 2025-10-07 15:18:50.906 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.079 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.081 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:18:51 compute-0 nova_compute[259550]: 2025-10-07 15:18:51.353 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:18:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:18:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352561185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:18:52 compute-0 nova_compute[259550]: 2025-10-07 15:18:52.254 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:18:52 compute-0 nova_compute[259550]: 2025-10-07 15:18:52.261 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:18:52 compute-0 nova_compute[259550]: 2025-10-07 15:18:52.294 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:18:52 compute-0 nova_compute[259550]: 2025-10-07 15:18:52.297 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:18:52 compute-0 nova_compute[259550]: 2025-10-07 15:18:52.297 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:18:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:18:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2339763650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:18:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:54 compute-0 ceph-mon[74295]: pgmap v3527: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/352561185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:18:54 compute-0 nova_compute[259550]: 2025-10-07 15:18:54.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:54 compute-0 nova_compute[259550]: 2025-10-07 15:18:54.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:55 compute-0 ceph-mon[74295]: pgmap v3528: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:18:56 compute-0 nova_compute[259550]: 2025-10-07 15:18:56.298 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:56 compute-0 nova_compute[259550]: 2025-10-07 15:18:56.299 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:18:56 compute-0 nova_compute[259550]: 2025-10-07 15:18:56.299 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:18:56 compute-0 ceph-mon[74295]: pgmap v3529: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:56 compute-0 nova_compute[259550]: 2025-10-07 15:18:56.323 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:18:56 compute-0 nova_compute[259550]: 2025-10-07 15:18:56.323 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:18:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:58 compute-0 ceph-mon[74295]: pgmap v3530: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:59 compute-0 nova_compute[259550]: 2025-10-07 15:18:59.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:18:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:18:59 compute-0 nova_compute[259550]: 2025-10-07 15:18:59.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:19:00.121 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:19:00.122 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:19:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:19:00.122 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:19:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:00 compute-0 ceph-mon[74295]: pgmap v3531: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:02 compute-0 ceph-mon[74295]: pgmap v3532: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:04 compute-0 nova_compute[259550]: 2025-10-07 15:19:04.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:04 compute-0 nova_compute[259550]: 2025-10-07 15:19:04.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:04 compute-0 ceph-mon[74295]: pgmap v3533: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:06 compute-0 podman[459450]: 2025-10-07 15:19:06.081605419 +0000 UTC m=+0.063360187 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 07 15:19:06 compute-0 podman[459449]: 2025-10-07 15:19:06.083076358 +0000 UTC m=+0.066821749 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 07 15:19:06 compute-0 ceph-mon[74295]: pgmap v3534: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:08 compute-0 ceph-mon[74295]: pgmap v3535: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:09 compute-0 nova_compute[259550]: 2025-10-07 15:19:09.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:09 compute-0 sudo[451327]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:09 compute-0 sshd-session[451326]: Received disconnect from 192.168.122.10 port 49146:11: disconnected by user
Oct 07 15:19:09 compute-0 sshd-session[451326]: Disconnected from user zuul 192.168.122.10 port 49146
Oct 07 15:19:09 compute-0 sshd-session[451323]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:19:09 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 07 15:19:09 compute-0 systemd[1]: session-57.scope: Consumed 2min 54.328s CPU time, 947.1M memory peak, read 401.1M from disk, written 341.5M to disk.
Oct 07 15:19:09 compute-0 systemd-logind[801]: Session 57 logged out. Waiting for processes to exit.
Oct 07 15:19:09 compute-0 systemd-logind[801]: Removed session 57.
Oct 07 15:19:09 compute-0 nova_compute[259550]: 2025-10-07 15:19:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:09 compute-0 sshd-session[459488]: Accepted publickey for zuul from 192.168.122.10 port 45748 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:19:09 compute-0 systemd-logind[801]: New session 58 of user zuul.
Oct 07 15:19:09 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 07 15:19:09 compute-0 sshd-session[459488]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:19:10 compute-0 sudo[459492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-07-xzifdat.tar.xz
Oct 07 15:19:10 compute-0 sudo[459492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:19:10 compute-0 sudo[459492]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:10 compute-0 sshd-session[459491]: Received disconnect from 192.168.122.10 port 45748:11: disconnected by user
Oct 07 15:19:10 compute-0 sshd-session[459491]: Disconnected from user zuul 192.168.122.10 port 45748
Oct 07 15:19:10 compute-0 ceph-mon[74295]: pgmap v3536: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:10 compute-0 sshd-session[459488]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:19:10 compute-0 systemd-logind[801]: Session 58 logged out. Waiting for processes to exit.
Oct 07 15:19:10 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 07 15:19:10 compute-0 systemd-logind[801]: Removed session 58.
Oct 07 15:19:10 compute-0 sshd-session[459517]: Accepted publickey for zuul from 192.168.122.10 port 45758 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:19:10 compute-0 systemd-logind[801]: New session 59 of user zuul.
Oct 07 15:19:10 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 07 15:19:10 compute-0 sshd-session[459517]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:19:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:10 compute-0 sudo[459521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 07 15:19:10 compute-0 sudo[459521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:19:10 compute-0 sudo[459521]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:10 compute-0 sshd-session[459520]: Received disconnect from 192.168.122.10 port 45758:11: disconnected by user
Oct 07 15:19:10 compute-0 sshd-session[459520]: Disconnected from user zuul 192.168.122.10 port 45758
Oct 07 15:19:10 compute-0 sshd-session[459517]: pam_unix(sshd:session): session closed for user zuul
Oct 07 15:19:10 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 07 15:19:10 compute-0 systemd-logind[801]: Session 59 logged out. Waiting for processes to exit.
Oct 07 15:19:10 compute-0 systemd-logind[801]: Removed session 59.
Oct 07 15:19:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:12 compute-0 ceph-mon[74295]: pgmap v3537: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:14 compute-0 nova_compute[259550]: 2025-10-07 15:19:14.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:14 compute-0 ceph-mon[74295]: pgmap v3538: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:14 compute-0 nova_compute[259550]: 2025-10-07 15:19:14.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:16 compute-0 ceph-mon[74295]: pgmap v3539: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:18 compute-0 ceph-mon[74295]: pgmap v3540: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:19 compute-0 nova_compute[259550]: 2025-10-07 15:19:19.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:19 compute-0 nova_compute[259550]: 2025-10-07 15:19:19.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:20 compute-0 podman[459546]: 2025-10-07 15:19:20.067886454 +0000 UTC m=+0.060523203 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 07 15:19:20 compute-0 podman[459547]: 2025-10-07 15:19:20.099759472 +0000 UTC m=+0.091780025 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 07 15:19:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:20 compute-0 ceph-mon[74295]: pgmap v3541: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:21 compute-0 nova_compute[259550]: 2025-10-07 15:19:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:22 compute-0 ceph-mon[74295]: pgmap v3542: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:19:22
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Oct 07 15:19:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:19:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:19:24 compute-0 nova_compute[259550]: 2025-10-07 15:19:24.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:24 compute-0 ceph-mon[74295]: pgmap v3543: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:24 compute-0 nova_compute[259550]: 2025-10-07 15:19:24.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:26 compute-0 ceph-mon[74295]: pgmap v3544: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:27 compute-0 nova_compute[259550]: 2025-10-07 15:19:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:27 compute-0 nova_compute[259550]: 2025-10-07 15:19:27.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:19:28 compute-0 ceph-mon[74295]: pgmap v3545: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:29 compute-0 nova_compute[259550]: 2025-10-07 15:19:29.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:29 compute-0 nova_compute[259550]: 2025-10-07 15:19:29.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:29 compute-0 nova_compute[259550]: 2025-10-07 15:19:29.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:30 compute-0 ceph-mon[74295]: pgmap v3546: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:31 compute-0 nova_compute[259550]: 2025-10-07 15:19:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:19:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1318156113' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:19:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:19:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1318156113' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:19:32 compute-0 ceph-mon[74295]: pgmap v3547: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1318156113' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:19:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1318156113' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:19:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:34 compute-0 nova_compute[259550]: 2025-10-07 15:19:34.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:34 compute-0 ceph-mon[74295]: pgmap v3548: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:34 compute-0 nova_compute[259550]: 2025-10-07 15:19:34.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:34 compute-0 nova_compute[259550]: 2025-10-07 15:19:34.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:35 compute-0 nova_compute[259550]: 2025-10-07 15:19:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:36 compute-0 ceph-mon[74295]: pgmap v3549: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:37 compute-0 podman[459590]: 2025-10-07 15:19:37.07975755 +0000 UTC m=+0.061873818 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 07 15:19:37 compute-0 podman[459589]: 2025-10-07 15:19:37.079882313 +0000 UTC m=+0.065818002 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 07 15:19:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:39 compute-0 ceph-mon[74295]: pgmap v3550: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:39 compute-0 nova_compute[259550]: 2025-10-07 15:19:39.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:39 compute-0 nova_compute[259550]: 2025-10-07 15:19:39.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:40 compute-0 ceph-mon[74295]: pgmap v3551: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:43 compute-0 ceph-mon[74295]: pgmap v3552: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:44 compute-0 nova_compute[259550]: 2025-10-07 15:19:44.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:44 compute-0 ceph-mon[74295]: pgmap v3553: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:44 compute-0 nova_compute[259550]: 2025-10-07 15:19:44.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:45 compute-0 sudo[459625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:45 compute-0 sudo[459625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:45 compute-0 sudo[459625]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:45 compute-0 sudo[459650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:19:45 compute-0 sudo[459650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:45 compute-0 sudo[459650]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:45 compute-0 sudo[459675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:45 compute-0 sudo[459675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:45 compute-0 sudo[459675]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:45 compute-0 sudo[459700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:19:45 compute-0 sudo[459700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:46 compute-0 sudo[459700]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev bd66a3c4-ece9-41ab-8c86-5b54b592cad9 does not exist
Oct 07 15:19:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e77abbaa-ec75-4eba-b136-a2e137cd4742 does not exist
Oct 07 15:19:46 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f2db4f5b-657e-4851-bbeb-0cfed5b5fb47 does not exist
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:19:46 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:19:46 compute-0 sudo[459758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:46 compute-0 sudo[459758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:46 compute-0 sudo[459758]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:46 compute-0 sudo[459783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:19:46 compute-0 sudo[459783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:46 compute-0 sudo[459783]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:46 compute-0 sudo[459808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:46 compute-0 sudo[459808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:46 compute-0 sudo[459808]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:46 compute-0 sudo[459833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:19:46 compute-0 sudo[459833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:46 compute-0 ceph-mon[74295]: pgmap v3554: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:19:46 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.747287481 +0000 UTC m=+0.043504396 container create c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:19:46 compute-0 systemd[1]: Started libpod-conmon-c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632.scope.
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.726015731 +0000 UTC m=+0.022232676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:46 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.849206922 +0000 UTC m=+0.145423867 container init c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.859696977 +0000 UTC m=+0.155913882 container start c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.863897197 +0000 UTC m=+0.160114142 container attach c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 07 15:19:46 compute-0 cranky_wing[459913]: 167 167
Oct 07 15:19:46 compute-0 systemd[1]: libpod-c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632.scope: Deactivated successfully.
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.866822664 +0000 UTC m=+0.163039599 container died c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-54a8f60634961210258db0cca3210f5bdb012689315dab492bce7321c0d6fcd9-merged.mount: Deactivated successfully.
Oct 07 15:19:46 compute-0 podman[459897]: 2025-10-07 15:19:46.916140232 +0000 UTC m=+0.212357147 container remove c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:19:46 compute-0 systemd[1]: libpod-conmon-c4cbdc72ed0fb5287398d7924a4a6c35b302c752c183a2d26a31f11f16c0d632.scope: Deactivated successfully.
Oct 07 15:19:47 compute-0 podman[459938]: 2025-10-07 15:19:47.105075842 +0000 UTC m=+0.053031926 container create 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:47 compute-0 systemd[1]: Started libpod-conmon-9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25.scope.
Oct 07 15:19:47 compute-0 podman[459938]: 2025-10-07 15:19:47.083183906 +0000 UTC m=+0.031139980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:47 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:47 compute-0 podman[459938]: 2025-10-07 15:19:47.204116826 +0000 UTC m=+0.152072930 container init 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:19:47 compute-0 podman[459938]: 2025-10-07 15:19:47.213325179 +0000 UTC m=+0.161281233 container start 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:19:47 compute-0 podman[459938]: 2025-10-07 15:19:47.218114364 +0000 UTC m=+0.166070438 container attach 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:48 compute-0 wizardly_mcnulty[459954]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:19:48 compute-0 wizardly_mcnulty[459954]: --> relative data size: 1.0
Oct 07 15:19:48 compute-0 wizardly_mcnulty[459954]: --> All data devices are unavailable
Oct 07 15:19:48 compute-0 systemd[1]: libpod-9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25.scope: Deactivated successfully.
Oct 07 15:19:48 compute-0 systemd[1]: libpod-9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25.scope: Consumed 1.023s CPU time.
Oct 07 15:19:48 compute-0 podman[459938]: 2025-10-07 15:19:48.29161328 +0000 UTC m=+1.239569364 container died 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 07 15:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4351559b6b6b31ba94d3eaf4db95357bd9c909c18c08ea9ddbdd670dba93ce-merged.mount: Deactivated successfully.
Oct 07 15:19:48 compute-0 podman[459938]: 2025-10-07 15:19:48.619263688 +0000 UTC m=+1.567219742 container remove 9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:19:48 compute-0 systemd[1]: libpod-conmon-9c9188d5206ee0d1fddd0cbda5e35251d2e4f0fb1ca1caf6b69c361a90e58a25.scope: Deactivated successfully.
Oct 07 15:19:48 compute-0 sudo[459833]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:48 compute-0 sudo[459994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:48 compute-0 sudo[459994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:48 compute-0 sudo[459994]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:48 compute-0 sudo[460019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:19:48 compute-0 sudo[460019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:48 compute-0 sudo[460019]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:48 compute-0 sudo[460044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:48 compute-0 sudo[460044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:48 compute-0 sudo[460044]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:48 compute-0 ceph-mon[74295]: pgmap v3555: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:48 compute-0 sudo[460069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:19:48 compute-0 sudo[460069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:49 compute-0 nova_compute[259550]: 2025-10-07 15:19:49.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.353154591 +0000 UTC m=+0.049720868 container create d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 07 15:19:49 compute-0 systemd[1]: Started libpod-conmon-d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b.scope.
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.325924475 +0000 UTC m=+0.022490662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.443869137 +0000 UTC m=+0.140435314 container init d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.450613485 +0000 UTC m=+0.147179642 container start d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 15:19:49 compute-0 dazzling_shirley[460152]: 167 167
Oct 07 15:19:49 compute-0 systemd[1]: libpod-d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b.scope: Deactivated successfully.
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.458622255 +0000 UTC m=+0.155188412 container attach d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.459457467 +0000 UTC m=+0.156023614 container died d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc9cc24a31895ab73b3028bbe99a8feaec2ec745d171b381465bc1fe5c98524-merged.mount: Deactivated successfully.
Oct 07 15:19:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:49 compute-0 podman[460136]: 2025-10-07 15:19:49.532113569 +0000 UTC m=+0.228679726 container remove d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 07 15:19:49 compute-0 systemd[1]: libpod-conmon-d1de37c22a4764732b2b853b8670e4bcaaa671eba77f525b608f9eb70df5e98b.scope: Deactivated successfully.
Oct 07 15:19:49 compute-0 podman[460176]: 2025-10-07 15:19:49.690804652 +0000 UTC m=+0.040005533 container create 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:49 compute-0 systemd[1]: Started libpod-conmon-9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45.scope.
Oct 07 15:19:49 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e799e27cf1a8b38c1bb43f0750103cc958e0e92431c4b18db276287bc428f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:49 compute-0 podman[460176]: 2025-10-07 15:19:49.673767664 +0000 UTC m=+0.022968545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e799e27cf1a8b38c1bb43f0750103cc958e0e92431c4b18db276287bc428f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e799e27cf1a8b38c1bb43f0750103cc958e0e92431c4b18db276287bc428f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e799e27cf1a8b38c1bb43f0750103cc958e0e92431c4b18db276287bc428f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:49 compute-0 podman[460176]: 2025-10-07 15:19:49.818919142 +0000 UTC m=+0.168120043 container init 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:19:49 compute-0 podman[460176]: 2025-10-07 15:19:49.825193017 +0000 UTC m=+0.174393898 container start 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:19:49 compute-0 podman[460176]: 2025-10-07 15:19:49.85460422 +0000 UTC m=+0.203805101 container attach 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:19:49 compute-0 nova_compute[259550]: 2025-10-07 15:19:49.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]: {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     "0": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "devices": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "/dev/loop3"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             ],
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_name": "ceph_lv0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_size": "21470642176",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "name": "ceph_lv0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "tags": {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_name": "ceph",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.crush_device_class": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.encrypted": "0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_id": "0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.vdo": "0"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             },
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "vg_name": "ceph_vg0"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         }
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     ],
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     "1": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "devices": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "/dev/loop4"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             ],
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_name": "ceph_lv1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_size": "21470642176",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "name": "ceph_lv1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "tags": {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_name": "ceph",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.crush_device_class": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.encrypted": "0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_id": "1",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.vdo": "0"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             },
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "vg_name": "ceph_vg1"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         }
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     ],
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     "2": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "devices": [
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "/dev/loop5"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             ],
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_name": "ceph_lv2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_size": "21470642176",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "name": "ceph_lv2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "tags": {
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.cluster_name": "ceph",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.crush_device_class": "",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.encrypted": "0",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osd_id": "2",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:                 "ceph.vdo": "0"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             },
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "type": "block",
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:             "vg_name": "ceph_vg2"
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:         }
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]:     ]
Oct 07 15:19:50 compute-0 amazing_hamilton[460193]: }
Oct 07 15:19:50 compute-0 systemd[1]: libpod-9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45.scope: Deactivated successfully.
Oct 07 15:19:50 compute-0 podman[460176]: 2025-10-07 15:19:50.736192849 +0000 UTC m=+1.085393730 container died 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:19:50 compute-0 nova_compute[259550]: 2025-10-07 15:19:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.027 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.027 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:19:51 compute-0 ceph-mon[74295]: pgmap v3556: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e799e27cf1a8b38c1bb43f0750103cc958e0e92431c4b18db276287bc428f7-merged.mount: Deactivated successfully.
Oct 07 15:19:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:19:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143048972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.560 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.733 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.736 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3518MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.736 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.737 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:19:51 compute-0 podman[460176]: 2025-10-07 15:19:51.916290929 +0000 UTC m=+2.265491810 container remove 9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 07 15:19:51 compute-0 sudo[460069]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.982 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:19:51 compute-0 nova_compute[259550]: 2025-10-07 15:19:51.983 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:19:52 compute-0 podman[460202]: 2025-10-07 15:19:52.003826521 +0000 UTC m=+1.223001739 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 07 15:19:52 compute-0 systemd[1]: libpod-conmon-9c81161b8e4d6d4339309723c501bcee52d8a9c3b48be5fc987a42e3076f7f45.scope: Deactivated successfully.
Oct 07 15:19:52 compute-0 sudo[460259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:52 compute-0 sudo[460259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:52 compute-0 sudo[460259]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:52 compute-0 sudo[460286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:19:52 compute-0 sudo[460286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.084 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 15:19:52 compute-0 sudo[460286]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:52 compute-0 podman[460211]: 2025-10-07 15:19:52.109350017 +0000 UTC m=+1.327968921 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 07 15:19:52 compute-0 sudo[460321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:52 compute-0 sudo[460321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:52 compute-0 sudo[460321]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.186 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.186 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 15:19:52 compute-0 sudo[460351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:19:52 compute-0 sudo[460351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.197 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.214 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.230 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:19:52 compute-0 ceph-mon[74295]: pgmap v3557: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3143048972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.569744646 +0000 UTC m=+0.073056533 container create 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.528984134 +0000 UTC m=+0.032296011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:52 compute-0 systemd[1]: Started libpod-conmon-477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988.scope.
Oct 07 15:19:52 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:19:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918977142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.726082608 +0000 UTC m=+0.229394485 container init 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.725 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:19:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.732964259 +0000 UTC m=+0.236276116 container start 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.734 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:19:52 compute-0 boring_bhabha[460451]: 167 167
Oct 07 15:19:52 compute-0 systemd[1]: libpod-477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988.scope: Deactivated successfully.
Oct 07 15:19:52 compute-0 conmon[460451]: conmon 477c5f35ef728c6c1499 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988.scope/container/memory.events
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.768 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.770 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:19:52 compute-0 nova_compute[259550]: 2025-10-07 15:19:52.771 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.831262294 +0000 UTC m=+0.334574161 container attach 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:19:52 compute-0 podman[460435]: 2025-10-07 15:19:52.831791179 +0000 UTC m=+0.335103046 container died 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ca0426c92cd8ce0d92f39c9548755f009bdd83b7769d10a4bc2285ae2d8788-merged.mount: Deactivated successfully.
Oct 07 15:19:53 compute-0 podman[460435]: 2025-10-07 15:19:53.258359018 +0000 UTC m=+0.761670915 container remove 477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:53 compute-0 systemd[1]: libpod-conmon-477c5f35ef728c6c14999c5331e8b09c87d5c2135cb87000f7afe0b39428d988.scope: Deactivated successfully.
Oct 07 15:19:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:53 compute-0 podman[460478]: 2025-10-07 15:19:53.441130446 +0000 UTC m=+0.026588670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:19:53 compute-0 podman[460478]: 2025-10-07 15:19:53.554888698 +0000 UTC m=+0.140346922 container create 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 07 15:19:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2918977142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:19:53 compute-0 systemd[1]: Started libpod-conmon-8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35.scope.
Oct 07 15:19:53 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb45adc527a163075df380d9b95e4583ed29ac930317c0a431f82bd601a75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb45adc527a163075df380d9b95e4583ed29ac930317c0a431f82bd601a75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb45adc527a163075df380d9b95e4583ed29ac930317c0a431f82bd601a75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb45adc527a163075df380d9b95e4583ed29ac930317c0a431f82bd601a75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:19:53 compute-0 podman[460478]: 2025-10-07 15:19:53.794575282 +0000 UTC m=+0.380033476 container init 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 07 15:19:53 compute-0 podman[460478]: 2025-10-07 15:19:53.80174069 +0000 UTC m=+0.387198884 container start 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:19:53 compute-0 podman[460478]: 2025-10-07 15:19:53.912302818 +0000 UTC m=+0.497761012 container attach 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:19:54 compute-0 nova_compute[259550]: 2025-10-07 15:19:54.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]: {
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_id": 2,
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "type": "bluestore"
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     },
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_id": 1,
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "type": "bluestore"
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     },
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_id": 0,
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:         "type": "bluestore"
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]:     }
Oct 07 15:19:54 compute-0 gracious_ramanujan[460494]: }
Oct 07 15:19:54 compute-0 systemd[1]: libpod-8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35.scope: Deactivated successfully.
Oct 07 15:19:54 compute-0 podman[460478]: 2025-10-07 15:19:54.785127387 +0000 UTC m=+1.370585621 container died 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:19:54 compute-0 nova_compute[259550]: 2025-10-07 15:19:54.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:54 compute-0 ceph-mon[74295]: pgmap v3558: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6fb45adc527a163075df380d9b95e4583ed29ac930317c0a431f82bd601a75-merged.mount: Deactivated successfully.
Oct 07 15:19:55 compute-0 podman[460478]: 2025-10-07 15:19:55.266384384 +0000 UTC m=+1.851842578 container remove 8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:19:55 compute-0 sudo[460351]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:19:55 compute-0 systemd[1]: libpod-conmon-8bdd51ccd2e0ec29638adfd915abc12b6ff59fb324b50f185d01d5fc30e55d35.scope: Deactivated successfully.
Oct 07 15:19:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:19:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:55 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 95a060a1-48e6-4da0-a32a-0fd0d8535cdc does not exist
Oct 07 15:19:55 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e247f234-9ae2-48e6-aeb6-638bff877b25 does not exist
Oct 07 15:19:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:19:55 compute-0 sudo[460539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:19:55 compute-0 sudo[460539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:55 compute-0 sudo[460539]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:55 compute-0 sudo[460564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:19:55 compute-0 sudo[460564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:19:55 compute-0 sudo[460564]: pam_unix(sudo:session): session closed for user root
Oct 07 15:19:55 compute-0 nova_compute[259550]: 2025-10-07 15:19:55.771 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:55 compute-0 nova_compute[259550]: 2025-10-07 15:19:55.772 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:19:55 compute-0 nova_compute[259550]: 2025-10-07 15:19:55.772 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:19:55 compute-0 nova_compute[259550]: 2025-10-07 15:19:55.808 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:19:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:56 compute-0 ceph-mon[74295]: pgmap v3559: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:19:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:57 compute-0 nova_compute[259550]: 2025-10-07 15:19:57.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:19:58 compute-0 ceph-mon[74295]: pgmap v3560: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:59 compute-0 nova_compute[259550]: 2025-10-07 15:19:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:19:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:19:59 compute-0 nova_compute[259550]: 2025-10-07 15:19:59.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:20:00.122 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:20:00.123 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:20:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:20:00.123 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:20:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:00 compute-0 ceph-mon[74295]: pgmap v3561: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:02 compute-0 ceph-mon[74295]: pgmap v3562: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:04 compute-0 ceph-mon[74295]: pgmap v3563: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:04 compute-0 nova_compute[259550]: 2025-10-07 15:20:04.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:04 compute-0 nova_compute[259550]: 2025-10-07 15:20:04.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:06 compute-0 ceph-mon[74295]: pgmap v3564: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:08 compute-0 podman[460590]: 2025-10-07 15:20:08.078376725 +0000 UTC m=+0.057748060 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:20:08 compute-0 ceph-mon[74295]: pgmap v3565: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:08 compute-0 podman[460589]: 2025-10-07 15:20:08.105882239 +0000 UTC m=+0.086038435 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 07 15:20:09 compute-0 nova_compute[259550]: 2025-10-07 15:20:09.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:09 compute-0 nova_compute[259550]: 2025-10-07 15:20:09.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:10 compute-0 ceph-mon[74295]: pgmap v3566: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:12 compute-0 ceph-mon[74295]: pgmap v3567: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:14 compute-0 nova_compute[259550]: 2025-10-07 15:20:14.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:14 compute-0 ceph-mon[74295]: pgmap v3568: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:14 compute-0 nova_compute[259550]: 2025-10-07 15:20:14.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:16 compute-0 ceph-mon[74295]: pgmap v3569: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:19 compute-0 ceph-mon[74295]: pgmap v3570: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:19 compute-0 nova_compute[259550]: 2025-10-07 15:20:19.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:19 compute-0 nova_compute[259550]: 2025-10-07 15:20:19.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:20 compute-0 ceph-mon[74295]: pgmap v3571: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:20:22
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups', 'vms', 'volumes', '.mgr', 'default.rgw.control']
Oct 07 15:20:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:20:23 compute-0 ceph-mon[74295]: pgmap v3572: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:23 compute-0 podman[460622]: 2025-10-07 15:20:23.119791425 +0000 UTC m=+0.091676283 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:20:23 compute-0 podman[460623]: 2025-10-07 15:20:23.173421235 +0000 UTC m=+0.143655390 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:20:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:20:23 compute-0 nova_compute[259550]: 2025-10-07 15:20:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:24 compute-0 nova_compute[259550]: 2025-10-07 15:20:24.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:24 compute-0 ceph-mon[74295]: pgmap v3573: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:24 compute-0 nova_compute[259550]: 2025-10-07 15:20:24.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:27 compute-0 ceph-mon[74295]: pgmap v3574: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:28 compute-0 ceph-mon[74295]: pgmap v3575: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:28 compute-0 nova_compute[259550]: 2025-10-07 15:20:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:28 compute-0 nova_compute[259550]: 2025-10-07 15:20:28.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:20:29 compute-0 nova_compute[259550]: 2025-10-07 15:20:29.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:29 compute-0 nova_compute[259550]: 2025-10-07 15:20:29.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:30 compute-0 ceph-mon[74295]: pgmap v3576: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:30 compute-0 nova_compute[259550]: 2025-10-07 15:20:30.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:20:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783495864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:20:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:20:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783495864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:20:32 compute-0 ceph-mon[74295]: pgmap v3577: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:20:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3783495864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:20:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/3783495864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:20:33 compute-0 nova_compute[259550]: 2025-10-07 15:20:33.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:34 compute-0 nova_compute[259550]: 2025-10-07 15:20:34.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:34 compute-0 nova_compute[259550]: 2025-10-07 15:20:34.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:34 compute-0 ceph-mon[74295]: pgmap v3578: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:20:35 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1319 writes, 5851 keys, 1319 commit groups, 1.0 writes per commit group, ingest: 8.54 MB, 0.01 MB/s
                                           Interval WAL: 1319 writes, 1319 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     65.7      1.37              0.28        53    0.026       0      0       0.0       0.0
                                             L6      1/0   10.48 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9    103.6     87.9      5.04              1.33        52    0.097    355K    27K       0.0       0.0
                                            Sum      1/0   10.48 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     81.5     83.2      6.41              1.61       105    0.061    355K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.7     27.5     27.4      1.61              0.14         8    0.202     37K   2029       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    103.6     87.9      5.04              1.33        52    0.097    355K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     65.9      1.36              0.28        52    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.088, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.52 GB write, 0.08 MB/s write, 0.51 GB read, 0.08 MB/s read, 6.4 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 61.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000418 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4268,59.09 MB,19.4372%) FilterBlock(106,990.55 KB,0.318201%) IndexBlock(106,1.57 MB,0.515662%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 07 15:20:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:35 compute-0 nova_compute[259550]: 2025-10-07 15:20:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:36 compute-0 nova_compute[259550]: 2025-10-07 15:20:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:37 compute-0 ceph-mon[74295]: pgmap v3579: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:38 compute-0 ceph-mon[74295]: pgmap v3580: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:39 compute-0 podman[460666]: 2025-10-07 15:20:39.090569616 +0000 UTC m=+0.077677723 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 15:20:39 compute-0 podman[460667]: 2025-10-07 15:20:39.092527908 +0000 UTC m=+0.071682547 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:20:39 compute-0 nova_compute[259550]: 2025-10-07 15:20:39.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:39 compute-0 nova_compute[259550]: 2025-10-07 15:20:39.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:39 compute-0 nova_compute[259550]: 2025-10-07 15:20:39.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:40 compute-0 ceph-mon[74295]: pgmap v3581: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:42 compute-0 ceph-mon[74295]: pgmap v3582: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:44 compute-0 nova_compute[259550]: 2025-10-07 15:20:44.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:44 compute-0 ceph-mon[74295]: pgmap v3583: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:44 compute-0 nova_compute[259550]: 2025-10-07 15:20:44.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:46 compute-0 ceph-mon[74295]: pgmap v3584: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:48 compute-0 ceph-mon[74295]: pgmap v3585: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:49 compute-0 nova_compute[259550]: 2025-10-07 15:20:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:49 compute-0 nova_compute[259550]: 2025-10-07 15:20:49.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:50 compute-0 ceph-mon[74295]: pgmap v3586: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:50 compute-0 nova_compute[259550]: 2025-10-07 15:20:50.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:20:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:20:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174490617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:20:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.648 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.649 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3603MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.649 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.650 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.762 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.763 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:20:51 compute-0 nova_compute[259550]: 2025-10-07 15:20:51.791 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:20:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4174490617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:20:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:20:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004049554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:20:52 compute-0 nova_compute[259550]: 2025-10-07 15:20:52.294 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:20:52 compute-0 nova_compute[259550]: 2025-10-07 15:20:52.300 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:20:52 compute-0 nova_compute[259550]: 2025-10-07 15:20:52.315 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:20:52 compute-0 nova_compute[259550]: 2025-10-07 15:20:52.317 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:20:52 compute-0 nova_compute[259550]: 2025-10-07 15:20:52.318 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:20:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:20:52 compute-0 ceph-mon[74295]: pgmap v3587: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2004049554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:20:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:54 compute-0 podman[460749]: 2025-10-07 15:20:54.070807214 +0000 UTC m=+0.060009029 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:20:54 compute-0 ceph-mon[74295]: pgmap v3588: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:54 compute-0 podman[460750]: 2025-10-07 15:20:54.139489781 +0000 UTC m=+0.118216110 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 07 15:20:54 compute-0 nova_compute[259550]: 2025-10-07 15:20:54.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:54 compute-0 nova_compute[259550]: 2025-10-07 15:20:54.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:20:55 compute-0 sudo[460795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:20:55 compute-0 sudo[460795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:55 compute-0 sudo[460795]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:55 compute-0 sudo[460820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:20:55 compute-0 sudo[460820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:55 compute-0 sudo[460820]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:55 compute-0 sudo[460845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:20:55 compute-0 sudo[460845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:55 compute-0 sudo[460845]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:55 compute-0 sudo[460870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:20:55 compute-0 sudo[460870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:56 compute-0 nova_compute[259550]: 2025-10-07 15:20:56.318 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:20:56 compute-0 nova_compute[259550]: 2025-10-07 15:20:56.318 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:20:56 compute-0 nova_compute[259550]: 2025-10-07 15:20:56.318 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:20:56 compute-0 nova_compute[259550]: 2025-10-07 15:20:56.345 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:20:56 compute-0 sudo[460870]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:20:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b3f1e9a3-431f-4dbe-9ee0-9da8a76f6f89 does not exist
Oct 07 15:20:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3d467e16-6ab7-44be-9c68-26873f4b45b6 does not exist
Oct 07 15:20:56 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 0f9ba160-8cda-4f79-b7bd-ddc7b7360936 does not exist
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:20:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:20:56 compute-0 sudo[460926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:20:56 compute-0 sudo[460926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:56 compute-0 sudo[460926]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:56 compute-0 sudo[460951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:20:56 compute-0 sudo[460951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:56 compute-0 sudo[460951]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:56 compute-0 sudo[460976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:20:56 compute-0 sudo[460976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:56 compute-0 sudo[460976]: pam_unix(sudo:session): session closed for user root
Oct 07 15:20:56 compute-0 sudo[461001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:20:56 compute-0 sudo[461001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:20:56 compute-0 ceph-mon[74295]: pgmap v3589: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:20:56 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.132646619 +0000 UTC m=+0.095482413 container create e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.068637475 +0000 UTC m=+0.031473289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:20:57 compute-0 systemd[1]: Started libpod-conmon-e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f.scope.
Oct 07 15:20:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.287188663 +0000 UTC m=+0.250024477 container init e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.295122462 +0000 UTC m=+0.257958256 container start e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:20:57 compute-0 systemd[1]: libpod-e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f.scope: Deactivated successfully.
Oct 07 15:20:57 compute-0 musing_chandrasekhar[461082]: 167 167
Oct 07 15:20:57 compute-0 conmon[461082]: conmon e1cb0e5e257cfa717099 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f.scope/container/memory.events
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.370842804 +0000 UTC m=+0.333678598 container attach e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.372024855 +0000 UTC m=+0.334860689 container died e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 07 15:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-66f9ff5584c8ac93964f6c128fd29cbc3956759de0fe7819365fad0fcaa695fb-merged.mount: Deactivated successfully.
Oct 07 15:20:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:57 compute-0 podman[461066]: 2025-10-07 15:20:57.666580102 +0000 UTC m=+0.629415896 container remove e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:20:57 compute-0 systemd[1]: libpod-conmon-e1cb0e5e257cfa71709935853686ed3035bd7f4e7db191898ece6de389f5340f.scope: Deactivated successfully.
Oct 07 15:20:57 compute-0 podman[461109]: 2025-10-07 15:20:57.858458169 +0000 UTC m=+0.077420747 container create d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:20:57 compute-0 podman[461109]: 2025-10-07 15:20:57.806155523 +0000 UTC m=+0.025118121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:20:57 compute-0 systemd[1]: Started libpod-conmon-d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a.scope.
Oct 07 15:20:57 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:20:58 compute-0 podman[461109]: 2025-10-07 15:20:58.044811091 +0000 UTC m=+0.263773679 container init d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:20:58 compute-0 podman[461109]: 2025-10-07 15:20:58.05466107 +0000 UTC m=+0.273623638 container start d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:20:58 compute-0 podman[461109]: 2025-10-07 15:20:58.073340221 +0000 UTC m=+0.292302809 container attach d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:20:58 compute-0 ceph-mon[74295]: pgmap v3590: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:59 compute-0 practical_knuth[461125]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:20:59 compute-0 practical_knuth[461125]: --> relative data size: 1.0
Oct 07 15:20:59 compute-0 practical_knuth[461125]: --> All data devices are unavailable
Oct 07 15:20:59 compute-0 systemd[1]: libpod-d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a.scope: Deactivated successfully.
Oct 07 15:20:59 compute-0 systemd[1]: libpod-d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a.scope: Consumed 1.068s CPU time.
Oct 07 15:20:59 compute-0 podman[461109]: 2025-10-07 15:20:59.165083767 +0000 UTC m=+1.384046425 container died d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:20:59 compute-0 nova_compute[259550]: 2025-10-07 15:20:59.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99011fe35739ca192359f92d2dd8a60cf3393c396f971100294a0456070c39d-merged.mount: Deactivated successfully.
Oct 07 15:20:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:20:59 compute-0 nova_compute[259550]: 2025-10-07 15:20:59.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:20:59 compute-0 podman[461109]: 2025-10-07 15:20:59.967486482 +0000 UTC m=+2.186449090 container remove d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 07 15:20:59 compute-0 systemd[1]: libpod-conmon-d4c0de8baa4ec2bc76664ad53daa772f3b5ac189acb347ac426cb50e67e0b89a.scope: Deactivated successfully.
Oct 07 15:20:59 compute-0 nova_compute[259550]: 2025-10-07 15:20:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:00 compute-0 sudo[461001]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:00 compute-0 sudo[461169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:21:00 compute-0 sudo[461169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:00 compute-0 sudo[461169]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:21:00.123 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:21:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:21:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:21:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:21:00 compute-0 sudo[461194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:21:00 compute-0 sudo[461194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:00 compute-0 sudo[461194]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:00 compute-0 sudo[461219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:21:00 compute-0 sudo[461219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:00 compute-0 sudo[461219]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:00 compute-0 sudo[461244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:21:00 compute-0 sudo[461244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:00 compute-0 ceph-mon[74295]: pgmap v3591: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:00 compute-0 podman[461308]: 2025-10-07 15:21:00.567562065 +0000 UTC m=+0.022858522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:21:00 compute-0 podman[461308]: 2025-10-07 15:21:00.685007175 +0000 UTC m=+0.140303612 container create 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:21:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:00 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 07 15:21:00 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:00.872557) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:21:00 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 07 15:21:00 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850460872605, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1688, "num_deletes": 253, "total_data_size": 2708743, "memory_usage": 2759768, "flush_reason": "Manual Compaction"}
Oct 07 15:21:00 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 07 15:21:00 compute-0 systemd[1]: Started libpod-conmon-19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245.scope.
Oct 07 15:21:00 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850461000830, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1555530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72837, "largest_seqno": 74524, "table_properties": {"data_size": 1549744, "index_size": 2860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15405, "raw_average_key_size": 21, "raw_value_size": 1536884, "raw_average_value_size": 2096, "num_data_blocks": 131, "num_entries": 733, "num_filter_entries": 733, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850269, "oldest_key_time": 1759850269, "file_creation_time": 1759850460, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 128465 microseconds, and 5686 cpu microseconds.
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:21:01 compute-0 podman[461308]: 2025-10-07 15:21:01.001828487 +0000 UTC m=+0.457124964 container init 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 15:21:01 compute-0 podman[461308]: 2025-10-07 15:21:01.010622529 +0000 UTC m=+0.465918966 container start 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:21:01 compute-0 fervent_agnesi[461324]: 167 167
Oct 07 15:21:01 compute-0 systemd[1]: libpod-19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245.scope: Deactivated successfully.
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.001022) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1555530 bytes OK
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.001068) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.153160) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.153204) EVENT_LOG_v1 {"time_micros": 1759850461153193, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.153227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2701354, prev total WAL file size 2701354, number of live WAL files 2.
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.154630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303034' seq:72057594037927935, type:22 .. '6D6772737461740033323538' seq:0, type:0; will stop at (end)
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1519KB)], [173(10MB)]
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850461154705, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12548929, "oldest_snapshot_seqno": -1}
Oct 07 15:21:01 compute-0 podman[461308]: 2025-10-07 15:21:01.229054885 +0000 UTC m=+0.684351322 container attach 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:21:01 compute-0 podman[461308]: 2025-10-07 15:21:01.229470425 +0000 UTC m=+0.684766872 container died 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:21:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9293 keys, 10326638 bytes, temperature: kUnknown
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850461635073, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10326638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10269264, "index_size": 33005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23301, "raw_key_size": 243066, "raw_average_key_size": 26, "raw_value_size": 10108232, "raw_average_value_size": 1087, "num_data_blocks": 1279, "num_entries": 9293, "num_filter_entries": 9293, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.635353) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10326638 bytes
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.642542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.1 rd, 21.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(14.7) write-amplify(6.6) OK, records in: 9726, records dropped: 433 output_compression: NoCompression
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.642576) EVENT_LOG_v1 {"time_micros": 1759850461642565, "job": 108, "event": "compaction_finished", "compaction_time_micros": 480457, "compaction_time_cpu_micros": 26157, "output_level": 6, "num_output_files": 1, "total_output_size": 10326638, "num_input_records": 9726, "num_output_records": 9293, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850461643186, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850461645596, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.154478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.645733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.645739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.645741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.645742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:01.645743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a620ce3e8703f9101deffb2edad7f0dccb33ca5e891eb9375628fd30c80c0b9-merged.mount: Deactivated successfully.
Oct 07 15:21:02 compute-0 podman[461308]: 2025-10-07 15:21:02.474785239 +0000 UTC m=+1.930081736 container remove 19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:21:02 compute-0 ceph-mon[74295]: pgmap v3592: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:02 compute-0 systemd[1]: libpod-conmon-19ff3035332e21b616b54064571d367995bfdd29ba02821f14fbcea043480245.scope: Deactivated successfully.
Oct 07 15:21:02 compute-0 podman[461347]: 2025-10-07 15:21:02.679166535 +0000 UTC m=+0.040197887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:21:02 compute-0 podman[461347]: 2025-10-07 15:21:02.846116997 +0000 UTC m=+0.207148259 container create 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 15:21:02 compute-0 systemd[1]: Started libpod-conmon-99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5.scope.
Oct 07 15:21:02 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398699d8759a6722230accf5479d6c7ba1ba810967c86d53b9cf242840d7f84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398699d8759a6722230accf5479d6c7ba1ba810967c86d53b9cf242840d7f84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398699d8759a6722230accf5479d6c7ba1ba810967c86d53b9cf242840d7f84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398699d8759a6722230accf5479d6c7ba1ba810967c86d53b9cf242840d7f84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:03 compute-0 podman[461347]: 2025-10-07 15:21:03.211817276 +0000 UTC m=+0.572848578 container init 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 07 15:21:03 compute-0 podman[461347]: 2025-10-07 15:21:03.219845807 +0000 UTC m=+0.580877079 container start 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 07 15:21:03 compute-0 podman[461347]: 2025-10-07 15:21:03.493344181 +0000 UTC m=+0.854375453 container attach 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 07 15:21:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:03 compute-0 optimistic_gates[461363]: {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     "0": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "devices": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "/dev/loop3"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             ],
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_name": "ceph_lv0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_size": "21470642176",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "name": "ceph_lv0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "tags": {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_name": "ceph",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.crush_device_class": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.encrypted": "0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_id": "0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.vdo": "0"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             },
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "vg_name": "ceph_vg0"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         }
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     ],
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     "1": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "devices": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "/dev/loop4"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             ],
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_name": "ceph_lv1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_size": "21470642176",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "name": "ceph_lv1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "tags": {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_name": "ceph",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.crush_device_class": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.encrypted": "0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_id": "1",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.vdo": "0"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             },
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "vg_name": "ceph_vg1"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         }
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     ],
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     "2": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "devices": [
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "/dev/loop5"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             ],
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_name": "ceph_lv2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_size": "21470642176",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "name": "ceph_lv2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "tags": {
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.cluster_name": "ceph",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.crush_device_class": "",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.encrypted": "0",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osd_id": "2",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:                 "ceph.vdo": "0"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             },
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "type": "block",
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:             "vg_name": "ceph_vg2"
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:         }
Oct 07 15:21:03 compute-0 optimistic_gates[461363]:     ]
Oct 07 15:21:03 compute-0 optimistic_gates[461363]: }
Oct 07 15:21:04 compute-0 systemd[1]: libpod-99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5.scope: Deactivated successfully.
Oct 07 15:21:04 compute-0 podman[461347]: 2025-10-07 15:21:04.030395447 +0000 UTC m=+1.391426709 container died 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 07 15:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-398699d8759a6722230accf5479d6c7ba1ba810967c86d53b9cf242840d7f84f-merged.mount: Deactivated successfully.
Oct 07 15:21:04 compute-0 nova_compute[259550]: 2025-10-07 15:21:04.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:04 compute-0 podman[461347]: 2025-10-07 15:21:04.939106948 +0000 UTC m=+2.300138210 container remove 99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gates, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 15:21:04 compute-0 ceph-mon[74295]: pgmap v3593: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:04 compute-0 systemd[1]: libpod-conmon-99218a69551bb7bae4d135ddb9c242408211467792045f3977b739a72d00e8b5.scope: Deactivated successfully.
Oct 07 15:21:04 compute-0 nova_compute[259550]: 2025-10-07 15:21:04.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:04 compute-0 sudo[461244]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:05 compute-0 sudo[461386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:21:05 compute-0 sudo[461386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:05 compute-0 sudo[461386]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:05 compute-0 sudo[461411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:21:05 compute-0 sudo[461411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:05 compute-0 sudo[461411]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:05 compute-0 sudo[461436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:21:05 compute-0 sudo[461436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:05 compute-0 sudo[461436]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:05 compute-0 sudo[461461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:21:05 compute-0 sudo[461461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:05 compute-0 podman[461525]: 2025-10-07 15:21:05.58624242 +0000 UTC m=+0.047581663 container create 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 07 15:21:05 compute-0 systemd[1]: Started libpod-conmon-9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c.scope.
Oct 07 15:21:05 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:21:05 compute-0 podman[461525]: 2025-10-07 15:21:05.563615315 +0000 UTC m=+0.024954578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:21:05 compute-0 podman[461525]: 2025-10-07 15:21:05.685426589 +0000 UTC m=+0.146765852 container init 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 15:21:05 compute-0 podman[461525]: 2025-10-07 15:21:05.698328568 +0000 UTC m=+0.159667811 container start 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:21:05 compute-0 keen_jepsen[461542]: 167 167
Oct 07 15:21:05 compute-0 podman[461525]: 2025-10-07 15:21:05.706584645 +0000 UTC m=+0.167923888 container attach 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 07 15:21:05 compute-0 systemd[1]: libpod-9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c.scope: Deactivated successfully.
Oct 07 15:21:05 compute-0 podman[461547]: 2025-10-07 15:21:05.748119857 +0000 UTC m=+0.024709441 container died 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 15:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9447728dfa043dc594abce6cb5510f7fc6ef1321c75341aec50a4b302e065e5b-merged.mount: Deactivated successfully.
Oct 07 15:21:05 compute-0 podman[461547]: 2025-10-07 15:21:05.810494458 +0000 UTC m=+0.087084012 container remove 9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 07 15:21:05 compute-0 systemd[1]: libpod-conmon-9092b5777ccd9ab58d94757a82975e992d7d7f599a474c969c2845b63aec166c.scope: Deactivated successfully.
Oct 07 15:21:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:05 compute-0 podman[461570]: 2025-10-07 15:21:05.972323134 +0000 UTC m=+0.038022511 container create 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 07 15:21:06 compute-0 systemd[1]: Started libpod-conmon-862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b.scope.
Oct 07 15:21:06 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465334566df5006e9a78d13a0c099488217efa6fd464fb7e05785ac8eef4d38c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465334566df5006e9a78d13a0c099488217efa6fd464fb7e05785ac8eef4d38c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465334566df5006e9a78d13a0c099488217efa6fd464fb7e05785ac8eef4d38c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465334566df5006e9a78d13a0c099488217efa6fd464fb7e05785ac8eef4d38c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:21:06 compute-0 podman[461570]: 2025-10-07 15:21:05.955202014 +0000 UTC m=+0.020901411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:21:06 compute-0 podman[461570]: 2025-10-07 15:21:06.056799586 +0000 UTC m=+0.122498993 container init 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:21:06 compute-0 podman[461570]: 2025-10-07 15:21:06.063000139 +0000 UTC m=+0.128699516 container start 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:21:06 compute-0 podman[461570]: 2025-10-07 15:21:06.071863523 +0000 UTC m=+0.137562940 container attach 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 15:21:06 compute-0 ceph-mon[74295]: pgmap v3594: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:07 compute-0 cranky_margulis[461587]: {
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_id": 2,
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "type": "bluestore"
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     },
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_id": 1,
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "type": "bluestore"
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     },
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_id": 0,
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:         "type": "bluestore"
Oct 07 15:21:07 compute-0 cranky_margulis[461587]:     }
Oct 07 15:21:07 compute-0 cranky_margulis[461587]: }
Oct 07 15:21:07 compute-0 systemd[1]: libpod-862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b.scope: Deactivated successfully.
Oct 07 15:21:07 compute-0 podman[461570]: 2025-10-07 15:21:07.080320048 +0000 UTC m=+1.146019435 container died 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:21:07 compute-0 systemd[1]: libpod-862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b.scope: Consumed 1.021s CPU time.
Oct 07 15:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-465334566df5006e9a78d13a0c099488217efa6fd464fb7e05785ac8eef4d38c-merged.mount: Deactivated successfully.
Oct 07 15:21:07 compute-0 podman[461570]: 2025-10-07 15:21:07.172570144 +0000 UTC m=+1.238269521 container remove 862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 15:21:07 compute-0 systemd[1]: libpod-conmon-862da2b41379c17a9f61922f044c13cd050c0066829aa7534a4bb683bcf0be8b.scope: Deactivated successfully.
Oct 07 15:21:07 compute-0 sudo[461461]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:21:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:21:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:21:07 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:21:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev e921aaec-0e89-459c-ae59-c171d241ee96 does not exist
Oct 07 15:21:07 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 4459a239-ed7f-4599-b173-9cc613fc5475 does not exist
Oct 07 15:21:07 compute-0 sudo[461630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:21:07 compute-0 sudo[461630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:07 compute-0 sudo[461630]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:07 compute-0 sudo[461655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:21:07 compute-0 sudo[461655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:21:07 compute-0 sudo[461655]: pam_unix(sudo:session): session closed for user root
Oct 07 15:21:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:21:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:21:08 compute-0 ceph-mon[74295]: pgmap v3595: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:09 compute-0 nova_compute[259550]: 2025-10-07 15:21:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:09 compute-0 nova_compute[259550]: 2025-10-07 15:21:09.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:10 compute-0 podman[461681]: 2025-10-07 15:21:10.081876807 +0000 UTC m=+0.065962337 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:21:10 compute-0 podman[461682]: 2025-10-07 15:21:10.104845021 +0000 UTC m=+0.088510539 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 07 15:21:10 compute-0 ceph-mon[74295]: pgmap v3596: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:12 compute-0 ceph-mon[74295]: pgmap v3597: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:14 compute-0 nova_compute[259550]: 2025-10-07 15:21:14.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:14 compute-0 nova_compute[259550]: 2025-10-07 15:21:14.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:14 compute-0 ceph-mon[74295]: pgmap v3598: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:16 compute-0 ceph-mon[74295]: pgmap v3599: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:18 compute-0 ceph-mon[74295]: pgmap v3600: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:19 compute-0 nova_compute[259550]: 2025-10-07 15:21:19.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:19 compute-0 nova_compute[259550]: 2025-10-07 15:21:19.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:21 compute-0 ceph-mon[74295]: pgmap v3601: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:22 compute-0 ceph-mon[74295]: pgmap v3602: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:21:22
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.meta']
Oct 07 15:21:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:21:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:21:24 compute-0 nova_compute[259550]: 2025-10-07 15:21:24.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:24 compute-0 ceph-mon[74295]: pgmap v3603: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:24 compute-0 nova_compute[259550]: 2025-10-07 15:21:24.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:25 compute-0 podman[461720]: 2025-10-07 15:21:25.081610277 +0000 UTC m=+0.067845516 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 07 15:21:25 compute-0 podman[461721]: 2025-10-07 15:21:25.112670374 +0000 UTC m=+0.095953605 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 07 15:21:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:25 compute-0 nova_compute[259550]: 2025-10-07 15:21:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:26 compute-0 ceph-mon[74295]: pgmap v3604: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:28 compute-0 ceph-mon[74295]: pgmap v3605: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:29 compute-0 nova_compute[259550]: 2025-10-07 15:21:29.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:29 compute-0 nova_compute[259550]: 2025-10-07 15:21:29.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:29 compute-0 nova_compute[259550]: 2025-10-07 15:21:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:29 compute-0 nova_compute[259550]: 2025-10-07 15:21:29.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:21:30 compute-0 ceph-mon[74295]: pgmap v3606: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4109906523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:21:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:21:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4109906523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:21:32 compute-0 nova_compute[259550]: 2025-10-07 15:21:32.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:21:33 compute-0 ceph-mon[74295]: pgmap v3607: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4109906523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:21:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/4109906523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:21:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:33 compute-0 nova_compute[259550]: 2025-10-07 15:21:33.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:34 compute-0 ceph-mon[74295]: pgmap v3608: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:34 compute-0 nova_compute[259550]: 2025-10-07 15:21:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:34 compute-0 nova_compute[259550]: 2025-10-07 15:21:34.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:36 compute-0 ceph-mon[74295]: pgmap v3609: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:36 compute-0 nova_compute[259550]: 2025-10-07 15:21:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:37 compute-0 nova_compute[259550]: 2025-10-07 15:21:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:38 compute-0 ceph-mon[74295]: pgmap v3610: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:39 compute-0 nova_compute[259550]: 2025-10-07 15:21:39.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:39 compute-0 nova_compute[259550]: 2025-10-07 15:21:39.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:40 compute-0 ceph-mon[74295]: pgmap v3611: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:41 compute-0 podman[461766]: 2025-10-07 15:21:41.069973462 +0000 UTC m=+0.060424520 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 07 15:21:41 compute-0 podman[461767]: 2025-10-07 15:21:41.072141109 +0000 UTC m=+0.059458445 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:21:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:42 compute-0 ceph-mon[74295]: pgmap v3612: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:44 compute-0 nova_compute[259550]: 2025-10-07 15:21:44.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:44 compute-0 nova_compute[259550]: 2025-10-07 15:21:44.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:44 compute-0 ceph-mon[74295]: pgmap v3613: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:46 compute-0 ceph-mon[74295]: pgmap v3614: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:48 compute-0 ceph-mon[74295]: pgmap v3615: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:49 compute-0 nova_compute[259550]: 2025-10-07 15:21:49.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:49 compute-0 nova_compute[259550]: 2025-10-07 15:21:49.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:51 compute-0 ceph-mon[74295]: pgmap v3616: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:51 compute-0 nova_compute[259550]: 2025-10-07 15:21:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.008 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.008 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:21:52 compute-0 ceph-mon[74295]: pgmap v3617: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:21:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569947461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.441 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.622 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.624 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3616MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.625 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.625 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.710 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.711 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:21:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:21:52 compute-0 nova_compute[259550]: 2025-10-07 15:21:52.734 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:21:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:21:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220896725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:21:53 compute-0 nova_compute[259550]: 2025-10-07 15:21:53.211 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:21:53 compute-0 nova_compute[259550]: 2025-10-07 15:21:53.217 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:21:53 compute-0 nova_compute[259550]: 2025-10-07 15:21:53.241 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:21:53 compute-0 nova_compute[259550]: 2025-10-07 15:21:53.243 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:21:53 compute-0 nova_compute[259550]: 2025-10-07 15:21:53.243 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:21:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3569947461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:21:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4220896725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.316348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513316386, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 648, "num_deletes": 251, "total_data_size": 787645, "memory_usage": 800504, "flush_reason": "Manual Compaction"}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513337178, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 780720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74525, "largest_seqno": 75172, "table_properties": {"data_size": 777208, "index_size": 1419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7811, "raw_average_key_size": 19, "raw_value_size": 770253, "raw_average_value_size": 1897, "num_data_blocks": 63, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850462, "oldest_key_time": 1759850462, "file_creation_time": 1759850513, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 20899 microseconds, and 3180 cpu microseconds.
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.337243) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 780720 bytes OK
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.337263) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.341157) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.341180) EVENT_LOG_v1 {"time_micros": 1759850513341174, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.341199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 784202, prev total WAL file size 784202, number of live WAL files 2.
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.341691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(762KB)], [176(10084KB)]
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513341736, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11107358, "oldest_snapshot_seqno": -1}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9186 keys, 9383463 bytes, temperature: kUnknown
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513432053, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9383463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9327683, "index_size": 31731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241501, "raw_average_key_size": 26, "raw_value_size": 9169270, "raw_average_value_size": 998, "num_data_blocks": 1217, "num_entries": 9186, "num_filter_entries": 9186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850513, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.432293) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9383463 bytes
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.434863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.9 rd, 103.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(26.2) write-amplify(12.0) OK, records in: 9699, records dropped: 513 output_compression: NoCompression
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.434888) EVENT_LOG_v1 {"time_micros": 1759850513434877, "job": 110, "event": "compaction_finished", "compaction_time_micros": 90389, "compaction_time_cpu_micros": 23503, "output_level": 6, "num_output_files": 1, "total_output_size": 9383463, "num_input_records": 9699, "num_output_records": 9186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513435235, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850513437548, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.341626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.437611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.437616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.437618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.437620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:21:53.437622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:21:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:54 compute-0 ceph-mon[74295]: pgmap v3618: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:54 compute-0 nova_compute[259550]: 2025-10-07 15:21:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:54 compute-0 nova_compute[259550]: 2025-10-07 15:21:54.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:21:56 compute-0 podman[461850]: 2025-10-07 15:21:56.072756844 +0000 UTC m=+0.059859186 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:21:56 compute-0 podman[461851]: 2025-10-07 15:21:56.100236476 +0000 UTC m=+0.084873692 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:21:56 compute-0 ceph-mon[74295]: pgmap v3619: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:58 compute-0 nova_compute[259550]: 2025-10-07 15:21:58.245 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:21:58 compute-0 nova_compute[259550]: 2025-10-07 15:21:58.245 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:21:58 compute-0 nova_compute[259550]: 2025-10-07 15:21:58.245 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:21:58 compute-0 nova_compute[259550]: 2025-10-07 15:21:58.407 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:21:58 compute-0 ceph-mon[74295]: pgmap v3620: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:21:59 compute-0 nova_compute[259550]: 2025-10-07 15:21:59.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:21:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:00 compute-0 nova_compute[259550]: 2025-10-07 15:22:00.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:22:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:22:00 compute-0 ceph-mon[74295]: pgmap v3621: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:01 compute-0 nova_compute[259550]: 2025-10-07 15:22:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:02 compute-0 ceph-mon[74295]: pgmap v3622: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:04 compute-0 nova_compute[259550]: 2025-10-07 15:22:04.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:04 compute-0 ceph-mon[74295]: pgmap v3623: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:05 compute-0 nova_compute[259550]: 2025-10-07 15:22:05.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:06 compute-0 ceph-mon[74295]: pgmap v3624: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:07 compute-0 nova_compute[259550]: 2025-10-07 15:22:07.360 2 DEBUG oslo_concurrency.processutils [None req-2f988a4b-89e5-4fd1-9742-d8b4d665c63a 5f2727a1496f4c3183586bbe4b8701d3 942c511a3bc940a3b6f39753738e8dad - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:22:07 compute-0 nova_compute[259550]: 2025-10-07 15:22:07.415 2 DEBUG oslo_concurrency.processutils [None req-2f988a4b-89e5-4fd1-9742-d8b4d665c63a 5f2727a1496f4c3183586bbe4b8701d3 942c511a3bc940a3b6f39753738e8dad - - default default] CMD "env LANG=C uptime" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:22:07 compute-0 sudo[461893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:07 compute-0 sudo[461893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:07 compute-0 sudo[461893]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:07 compute-0 sudo[461918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:22:07 compute-0 sudo[461918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:07 compute-0 sudo[461918]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:07 compute-0 sudo[461943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:07 compute-0 sudo[461943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:07 compute-0 sudo[461943]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:07 compute-0 sudo[461968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:22:07 compute-0 sudo[461968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:08 compute-0 sudo[461968]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:08 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev db8156a5-6772-4111-b3cd-4bcea06c9c1d does not exist
Oct 07 15:22:08 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev aa107a93-5b5c-489b-8323-8cf764935919 does not exist
Oct 07 15:22:08 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev b96f6add-2102-454d-89ff-2501c5846345 does not exist
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:22:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:22:08 compute-0 sudo[462025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:08 compute-0 sudo[462025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:08 compute-0 sudo[462025]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:08 compute-0 sudo[462050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:22:08 compute-0 sudo[462050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:08 compute-0 sudo[462050]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:08 compute-0 sudo[462075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:08 compute-0 sudo[462075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:08 compute-0 sudo[462075]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:08 compute-0 sudo[462100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:22:08 compute-0 sudo[462100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:08 compute-0 podman[462167]: 2025-10-07 15:22:08.841256608 +0000 UTC m=+0.089008801 container create ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:22:08 compute-0 podman[462167]: 2025-10-07 15:22:08.775382656 +0000 UTC m=+0.023134849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:08 compute-0 systemd[1]: Started libpod-conmon-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope.
Oct 07 15:22:08 compute-0 ceph-mon[74295]: pgmap v3625: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:22:08 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:22:08 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:09 compute-0 podman[462167]: 2025-10-07 15:22:09.078133079 +0000 UTC m=+0.325885272 container init ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:22:09 compute-0 podman[462167]: 2025-10-07 15:22:09.091522311 +0000 UTC m=+0.339274504 container start ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:22:09 compute-0 distracted_ishizaka[462183]: 167 167
Oct 07 15:22:09 compute-0 systemd[1]: libpod-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope: Deactivated successfully.
Oct 07 15:22:09 compute-0 rsyslogd[1004]: imjournal: 15164 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 07 15:22:09 compute-0 podman[462167]: 2025-10-07 15:22:09.206430774 +0000 UTC m=+0.454182997 container attach ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 07 15:22:09 compute-0 podman[462167]: 2025-10-07 15:22:09.207657896 +0000 UTC m=+0.455410079 container died ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:22:09 compute-0 nova_compute[259550]: 2025-10-07 15:22:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-70dfa0a3487a4e2b74d3bfb2e05d2d9f624de23cebcf573b20f391cf3ba1ef80-merged.mount: Deactivated successfully.
Oct 07 15:22:10 compute-0 nova_compute[259550]: 2025-10-07 15:22:10.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:10 compute-0 ceph-mon[74295]: pgmap v3626: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:10 compute-0 podman[462167]: 2025-10-07 15:22:10.558211729 +0000 UTC m=+1.805963922 container remove ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:22:10 compute-0 systemd[1]: libpod-conmon-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope: Deactivated successfully.
Oct 07 15:22:10 compute-0 podman[462207]: 2025-10-07 15:22:10.734210867 +0000 UTC m=+0.044955023 container create 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 07 15:22:10 compute-0 systemd[1]: Started libpod-conmon-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope.
Oct 07 15:22:10 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:10 compute-0 podman[462207]: 2025-10-07 15:22:10.71568762 +0000 UTC m=+0.026431816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:10 compute-0 podman[462207]: 2025-10-07 15:22:10.827383748 +0000 UTC m=+0.138127954 container init 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:22:10 compute-0 podman[462207]: 2025-10-07 15:22:10.846390918 +0000 UTC m=+0.157135074 container start 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:22:10 compute-0 podman[462207]: 2025-10-07 15:22:10.850730092 +0000 UTC m=+0.161474338 container attach 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:22:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:11 compute-0 sweet_lichterman[462224]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:22:11 compute-0 sweet_lichterman[462224]: --> relative data size: 1.0
Oct 07 15:22:11 compute-0 sweet_lichterman[462224]: --> All data devices are unavailable
Oct 07 15:22:11 compute-0 systemd[1]: libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Deactivated successfully.
Oct 07 15:22:11 compute-0 systemd[1]: libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Consumed 1.087s CPU time.
Oct 07 15:22:11 compute-0 conmon[462224]: conmon 6356166d39afc820b794 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope/container/memory.events
Oct 07 15:22:11 compute-0 podman[462207]: 2025-10-07 15:22:11.986361403 +0000 UTC m=+1.297105559 container died 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 07 15:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7-merged.mount: Deactivated successfully.
Oct 07 15:22:12 compute-0 podman[462207]: 2025-10-07 15:22:12.352056711 +0000 UTC m=+1.662800867 container remove 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:22:12 compute-0 systemd[1]: libpod-conmon-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Deactivated successfully.
Oct 07 15:22:12 compute-0 podman[462253]: 2025-10-07 15:22:12.370219869 +0000 UTC m=+0.342472199 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 15:22:12 compute-0 podman[462260]: 2025-10-07 15:22:12.387148204 +0000 UTC m=+0.359184639 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 07 15:22:12 compute-0 sudo[462100]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:12 compute-0 sudo[462306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:12 compute-0 sudo[462306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:12 compute-0 sudo[462306]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:12 compute-0 sudo[462331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:22:12 compute-0 sudo[462331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:12 compute-0 sudo[462331]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:12 compute-0 sudo[462356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:12 compute-0 sudo[462356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:12 compute-0 sudo[462356]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:12 compute-0 ceph-mon[74295]: pgmap v3627: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:12 compute-0 sudo[462381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:22:12 compute-0 sudo[462381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.079108594 +0000 UTC m=+0.035820433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.205396685 +0000 UTC m=+0.162108494 container create fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 07 15:22:13 compute-0 systemd[1]: Started libpod-conmon-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope.
Oct 07 15:22:13 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.537817549 +0000 UTC m=+0.494529388 container init fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.54814138 +0000 UTC m=+0.504853189 container start fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 07 15:22:13 compute-0 hopeful_darwin[462463]: 167 167
Oct 07 15:22:13 compute-0 systemd[1]: libpod-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope: Deactivated successfully.
Oct 07 15:22:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.788339479 +0000 UTC m=+0.745051278 container attach fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.78917662 +0000 UTC m=+0.745888429 container died fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:22:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:22:13 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 400 writes, 997 keys, 400 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 400 writes, 182 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ac88f63d15c761ed155a6297a8f0ef15a1d7aa1f754e181ee6c351bf82796c-merged.mount: Deactivated successfully.
Oct 07 15:22:13 compute-0 podman[462447]: 2025-10-07 15:22:13.963035133 +0000 UTC m=+0.919746982 container remove fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:22:13 compute-0 systemd[1]: libpod-conmon-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope: Deactivated successfully.
Oct 07 15:22:14 compute-0 podman[462487]: 2025-10-07 15:22:14.1754432 +0000 UTC m=+0.076462003 container create e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 07 15:22:14 compute-0 podman[462487]: 2025-10-07 15:22:14.130318573 +0000 UTC m=+0.031337386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:14 compute-0 systemd[1]: Started libpod-conmon-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope.
Oct 07 15:22:14 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:14 compute-0 nova_compute[259550]: 2025-10-07 15:22:14.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:14 compute-0 podman[462487]: 2025-10-07 15:22:14.530690194 +0000 UTC m=+0.431708977 container init e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:22:14 compute-0 podman[462487]: 2025-10-07 15:22:14.540776759 +0000 UTC m=+0.441795522 container start e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:22:14 compute-0 podman[462487]: 2025-10-07 15:22:14.546074659 +0000 UTC m=+0.447093422 container attach e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 07 15:22:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:14.725 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 07 15:22:14 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:14.726 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 07 15:22:14 compute-0 nova_compute[259550]: 2025-10-07 15:22:14.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:14 compute-0 ceph-mon[74295]: pgmap v3628: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:15 compute-0 nova_compute[259550]: 2025-10-07 15:22:15.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]: {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     "0": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "devices": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "/dev/loop3"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             ],
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_name": "ceph_lv0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_size": "21470642176",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "name": "ceph_lv0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "tags": {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_name": "ceph",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.crush_device_class": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.encrypted": "0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_id": "0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.vdo": "0"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             },
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "vg_name": "ceph_vg0"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         }
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     ],
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     "1": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "devices": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "/dev/loop4"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             ],
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_name": "ceph_lv1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_size": "21470642176",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "name": "ceph_lv1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "tags": {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_name": "ceph",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.crush_device_class": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.encrypted": "0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_id": "1",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.vdo": "0"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             },
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "vg_name": "ceph_vg1"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         }
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     ],
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     "2": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "devices": [
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "/dev/loop5"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             ],
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_name": "ceph_lv2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_size": "21470642176",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "name": "ceph_lv2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "tags": {
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.cluster_name": "ceph",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.crush_device_class": "",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.encrypted": "0",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osd_id": "2",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:                 "ceph.vdo": "0"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             },
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "type": "block",
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:             "vg_name": "ceph_vg2"
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:         }
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]:     ]
Oct 07 15:22:15 compute-0 friendly_wozniak[462504]: }
Oct 07 15:22:15 compute-0 systemd[1]: libpod-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope: Deactivated successfully.
Oct 07 15:22:15 compute-0 podman[462513]: 2025-10-07 15:22:15.541362568 +0000 UTC m=+0.033631626 container died e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:22:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee-merged.mount: Deactivated successfully.
Oct 07 15:22:15 compute-0 podman[462513]: 2025-10-07 15:22:15.62549504 +0000 UTC m=+0.117764038 container remove e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 07 15:22:15 compute-0 systemd[1]: libpod-conmon-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope: Deactivated successfully.
Oct 07 15:22:15 compute-0 sudo[462381]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:15 compute-0 sudo[462528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:15 compute-0 sudo[462528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:15 compute-0 sudo[462528]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:15 compute-0 sudo[462553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:22:15 compute-0 sudo[462553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:15 compute-0 sudo[462553]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:15 compute-0 sudo[462578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:15 compute-0 sudo[462578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:15 compute-0 sudo[462578]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:15 compute-0 sudo[462603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:22:15 compute-0 sudo[462603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.347348576 +0000 UTC m=+0.044086410 container create 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:22:16 compute-0 systemd[1]: Started libpod-conmon-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope.
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.329299281 +0000 UTC m=+0.026037135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.455295405 +0000 UTC m=+0.152033239 container init 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.464512488 +0000 UTC m=+0.161250302 container start 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.46987704 +0000 UTC m=+0.166614894 container attach 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 07 15:22:16 compute-0 wonderful_blackwell[462683]: 167 167
Oct 07 15:22:16 compute-0 systemd[1]: libpod-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope: Deactivated successfully.
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.472471798 +0000 UTC m=+0.169209622 container died 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 07 15:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-44dcee9af340b6f5abe8cbd2bcd945ee2e41a7edfda1c5e098cd6deac035a0f0-merged.mount: Deactivated successfully.
Oct 07 15:22:16 compute-0 podman[462667]: 2025-10-07 15:22:16.524506006 +0000 UTC m=+0.221243830 container remove 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 15:22:16 compute-0 systemd[1]: libpod-conmon-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope: Deactivated successfully.
Oct 07 15:22:16 compute-0 podman[462705]: 2025-10-07 15:22:16.721408605 +0000 UTC m=+0.073012921 container create c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 07 15:22:16 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:22:16.727 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 07 15:22:16 compute-0 systemd[1]: Started libpod-conmon-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope.
Oct 07 15:22:16 compute-0 podman[462705]: 2025-10-07 15:22:16.674694326 +0000 UTC m=+0.026298672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:22:16 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:22:16 compute-0 podman[462705]: 2025-10-07 15:22:16.823711265 +0000 UTC m=+0.175315581 container init c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:22:16 compute-0 podman[462705]: 2025-10-07 15:22:16.836666377 +0000 UTC m=+0.188270693 container start c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:22:16 compute-0 podman[462705]: 2025-10-07 15:22:16.841447652 +0000 UTC m=+0.193051968 container attach c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 07 15:22:16 compute-0 ceph-mon[74295]: pgmap v3629: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:17 compute-0 elegant_brattain[462722]: {
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_id": 2,
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "type": "bluestore"
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     },
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_id": 1,
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "type": "bluestore"
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     },
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_id": 0,
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:         "type": "bluestore"
Oct 07 15:22:17 compute-0 elegant_brattain[462722]:     }
Oct 07 15:22:17 compute-0 elegant_brattain[462722]: }
Oct 07 15:22:17 compute-0 systemd[1]: libpod-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Deactivated successfully.
Oct 07 15:22:17 compute-0 systemd[1]: libpod-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Consumed 1.144s CPU time.
Oct 07 15:22:17 compute-0 podman[462705]: 2025-10-07 15:22:17.974853364 +0000 UTC m=+1.326457710 container died c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36-merged.mount: Deactivated successfully.
Oct 07 15:22:18 compute-0 podman[462705]: 2025-10-07 15:22:18.195436815 +0000 UTC m=+1.547041151 container remove c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:22:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:22:18 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 50K writes, 196K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 413 writes, 956 keys, 413 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                           Interval WAL: 413 writes, 182 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:22:18 compute-0 systemd[1]: libpod-conmon-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Deactivated successfully.
Oct 07 15:22:18 compute-0 sudo[462603]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:22:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:18 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:22:18 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:18 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c6e3dedb-5925-479e-9651-ad50235254f5 does not exist
Oct 07 15:22:18 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 844f3ccc-f102-4781-b9c7-de8a28c2d79f does not exist
Oct 07 15:22:18 compute-0 sudo[462769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:22:18 compute-0 sudo[462769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:18 compute-0 sudo[462769]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:18 compute-0 sudo[462794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:22:18 compute-0 sudo[462794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:22:18 compute-0 sudo[462794]: pam_unix(sudo:session): session closed for user root
Oct 07 15:22:18 compute-0 ceph-mon[74295]: pgmap v3630: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:18 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:22:19 compute-0 nova_compute[259550]: 2025-10-07 15:22:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:20 compute-0 nova_compute[259550]: 2025-10-07 15:22:20.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:21 compute-0 ceph-mon[74295]: pgmap v3631: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:21 compute-0 nova_compute[259550]: 2025-10-07 15:22:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:21 compute-0 nova_compute[259550]: 2025-10-07 15:22:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 07 15:22:22 compute-0 nova_compute[259550]: 2025-10-07 15:22:22.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 07 15:22:22 compute-0 ceph-mon[74295]: pgmap v3632: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:22:22 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.78 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 385 writes, 982 keys, 385 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 385 writes, 171 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:22:22
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms']
Oct 07 15:22:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:22:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:22:24 compute-0 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct 07 15:22:24 compute-0 nova_compute[259550]: 2025-10-07 15:22:24.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:24 compute-0 ceph-mon[74295]: pgmap v3633: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:25 compute-0 nova_compute[259550]: 2025-10-07 15:22:25.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:26 compute-0 ceph-mon[74295]: pgmap v3634: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:27 compute-0 nova_compute[259550]: 2025-10-07 15:22:27.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:27 compute-0 podman[462819]: 2025-10-07 15:22:27.076220633 +0000 UTC m=+0.062622329 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 07 15:22:27 compute-0 podman[462820]: 2025-10-07 15:22:27.112830165 +0000 UTC m=+0.094827665 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 07 15:22:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:29 compute-0 ceph-mon[74295]: pgmap v3635: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:29 compute-0 nova_compute[259550]: 2025-10-07 15:22:29.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:30 compute-0 nova_compute[259550]: 2025-10-07 15:22:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:30 compute-0 ceph-mon[74295]: pgmap v3636: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:30 compute-0 nova_compute[259550]: 2025-10-07 15:22:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:30 compute-0 nova_compute[259550]: 2025-10-07 15:22:30.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:22:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:22:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:22:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:22:32 compute-0 ceph-mon[74295]: pgmap v3637: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:32 compute-0 nova_compute[259550]: 2025-10-07 15:22:32.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:22:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:22:34 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:22:34 compute-0 nova_compute[259550]: 2025-10-07 15:22:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:34 compute-0 nova_compute[259550]: 2025-10-07 15:22:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:35 compute-0 nova_compute[259550]: 2025-10-07 15:22:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:35 compute-0 ceph-mon[74295]: pgmap v3638: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:36 compute-0 ceph-mon[74295]: pgmap v3639: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:37 compute-0 nova_compute[259550]: 2025-10-07 15:22:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:38 compute-0 ceph-mon[74295]: pgmap v3640: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:38 compute-0 nova_compute[259550]: 2025-10-07 15:22:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:39 compute-0 nova_compute[259550]: 2025-10-07 15:22:39.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:39 compute-0 nova_compute[259550]: 2025-10-07 15:22:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:40 compute-0 nova_compute[259550]: 2025-10-07 15:22:40.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:40 compute-0 ceph-mon[74295]: pgmap v3641: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:42 compute-0 ceph-mon[74295]: pgmap v3642: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:43 compute-0 podman[462864]: 2025-10-07 15:22:43.072871292 +0000 UTC m=+0.057001670 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 15:22:43 compute-0 podman[462865]: 2025-10-07 15:22:43.076007945 +0000 UTC m=+0.057258187 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 15:22:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:43 compute-0 nova_compute[259550]: 2025-10-07 15:22:43.917 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:44 compute-0 nova_compute[259550]: 2025-10-07 15:22:44.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:44 compute-0 ceph-mon[74295]: pgmap v3643: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:45 compute-0 nova_compute[259550]: 2025-10-07 15:22:45.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:46 compute-0 nova_compute[259550]: 2025-10-07 15:22:46.479 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:46 compute-0 ceph-mon[74295]: pgmap v3644: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:49 compute-0 ceph-mon[74295]: pgmap v3645: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:49 compute-0 nova_compute[259550]: 2025-10-07 15:22:49.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:50 compute-0 nova_compute[259550]: 2025-10-07 15:22:50.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:50 compute-0 ceph-mon[74295]: pgmap v3646: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:51 compute-0 nova_compute[259550]: 2025-10-07 15:22:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.059 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:22:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:22:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063851977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.678 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:22:52 compute-0 ceph-mon[74295]: pgmap v3647: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4063851977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.679 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3586MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.680 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.680 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:22:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.761 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.761 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:22:52 compute-0 nova_compute[259550]: 2025-10-07 15:22:52.780 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:22:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:22:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582143350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:22:53 compute-0 nova_compute[259550]: 2025-10-07 15:22:53.242 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:22:53 compute-0 nova_compute[259550]: 2025-10-07 15:22:53.249 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:22:53 compute-0 nova_compute[259550]: 2025-10-07 15:22:53.292 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:22:53 compute-0 nova_compute[259550]: 2025-10-07 15:22:53.293 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:22:53 compute-0 nova_compute[259550]: 2025-10-07 15:22:53.294 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:22:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1582143350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:22:54 compute-0 nova_compute[259550]: 2025-10-07 15:22:54.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:54 compute-0 nova_compute[259550]: 2025-10-07 15:22:54.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:54 compute-0 nova_compute[259550]: 2025-10-07 15:22:54.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 07 15:22:55 compute-0 ceph-mon[74295]: pgmap v3648: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:55 compute-0 nova_compute[259550]: 2025-10-07 15:22:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.059062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576059096, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 741, "num_deletes": 257, "total_data_size": 930850, "memory_usage": 944288, "flush_reason": "Manual Compaction"}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576130872, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 922399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75173, "largest_seqno": 75913, "table_properties": {"data_size": 918545, "index_size": 1633, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8430, "raw_average_key_size": 18, "raw_value_size": 910839, "raw_average_value_size": 2042, "num_data_blocks": 73, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850514, "oldest_key_time": 1759850514, "file_creation_time": 1759850576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 71972 microseconds, and 3517 cpu microseconds.
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.131021) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 922399 bytes OK
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.131055) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173284) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173336) EVENT_LOG_v1 {"time_micros": 1759850576173324, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 927038, prev total WAL file size 954167, number of live WAL files 2.
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.174335) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353233' seq:0, type:0; will stop at (end)
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(900KB)], [179(9163KB)]
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576174368, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10305862, "oldest_snapshot_seqno": -1}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: pgmap v3649: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9107 keys, 10197588 bytes, temperature: kUnknown
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576681841, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10197588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10140857, "index_size": 32874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240777, "raw_average_key_size": 26, "raw_value_size": 9982371, "raw_average_value_size": 1096, "num_data_blocks": 1265, "num_entries": 9107, "num_filter_entries": 9107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.682111) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10197588 bytes
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.705812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.3 rd, 20.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.9 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(22.2) write-amplify(11.1) OK, records in: 9632, records dropped: 525 output_compression: NoCompression
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.705859) EVENT_LOG_v1 {"time_micros": 1759850576705843, "job": 112, "event": "compaction_finished", "compaction_time_micros": 507574, "compaction_time_cpu_micros": 30596, "output_level": 6, "num_output_files": 1, "total_output_size": 10197588, "num_input_records": 9632, "num_output_records": 9107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576706379, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576708607, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.174239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:56 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:22:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:58 compute-0 podman[462946]: 2025-10-07 15:22:58.134198346 +0000 UTC m=+0.120453489 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 07 15:22:58 compute-0 podman[462947]: 2025-10-07 15:22:58.149760885 +0000 UTC m=+0.128489921 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 07 15:22:58 compute-0 ceph-mon[74295]: pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:22:59 compute-0 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:22:59 compute-0 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:22:59 compute-0 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:22:59 compute-0 nova_compute[259550]: 2025-10-07 15:22:59.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:22:59 compute-0 nova_compute[259550]: 2025-10-07 15:22:59.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:22:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:00 compute-0 nova_compute[259550]: 2025-10-07 15:23:00.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:23:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:23:00 compute-0 ceph-mon[74295]: pgmap v3651: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:02 compute-0 ceph-mon[74295]: pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:03 compute-0 nova_compute[259550]: 2025-10-07 15:23:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:04 compute-0 nova_compute[259550]: 2025-10-07 15:23:04.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:04 compute-0 ceph-mon[74295]: pgmap v3653: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:05 compute-0 nova_compute[259550]: 2025-10-07 15:23:05.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:06 compute-0 ceph-mon[74295]: pgmap v3654: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:09 compute-0 ceph-mon[74295]: pgmap v3655: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:09 compute-0 nova_compute[259550]: 2025-10-07 15:23:09.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:10 compute-0 nova_compute[259550]: 2025-10-07 15:23:10.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:10 compute-0 ceph-mon[74295]: pgmap v3656: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:12 compute-0 ceph-mon[74295]: pgmap v3657: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:14 compute-0 podman[462991]: 2025-10-07 15:23:14.067608557 +0000 UTC m=+0.057268197 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 07 15:23:14 compute-0 podman[462992]: 2025-10-07 15:23:14.080812824 +0000 UTC m=+0.063177243 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid)
Oct 07 15:23:14 compute-0 nova_compute[259550]: 2025-10-07 15:23:14.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:14 compute-0 ceph-mon[74295]: pgmap v3658: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:15 compute-0 nova_compute[259550]: 2025-10-07 15:23:15.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:16 compute-0 ceph-mon[74295]: pgmap v3659: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:18 compute-0 sudo[463032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:18 compute-0 sudo[463032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:18 compute-0 sudo[463032]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:18 compute-0 sudo[463057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:23:18 compute-0 sudo[463057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:18 compute-0 sudo[463057]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:18 compute-0 sudo[463082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:18 compute-0 sudo[463082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:18 compute-0 sudo[463082]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:18 compute-0 sudo[463107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:23:18 compute-0 sudo[463107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:18 compute-0 ceph-mon[74295]: pgmap v3660: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:19 compute-0 sudo[463107]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev dc4db16c-618c-4a8d-b9d1-757e1e193aed does not exist
Oct 07 15:23:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 54a80a1b-8a07-474f-85b4-3be1bf36c2af does not exist
Oct 07 15:23:19 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 07e19790-83ca-455c-8e19-edcc4d38f7b6 does not exist
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:23:19 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:23:19 compute-0 sudo[463163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:19 compute-0 sudo[463163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:19 compute-0 sudo[463163]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:19 compute-0 nova_compute[259550]: 2025-10-07 15:23:19.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:19 compute-0 sudo[463188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:23:19 compute-0 sudo[463188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:19 compute-0 sudo[463188]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:19 compute-0 sudo[463213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:19 compute-0 sudo[463213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:19 compute-0 sudo[463213]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:19 compute-0 sudo[463238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:23:19 compute-0 sudo[463238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:23:19 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:23:20 compute-0 nova_compute[259550]: 2025-10-07 15:23:20.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:19.974629518 +0000 UTC m=+0.024907355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.330779503 +0000 UTC m=+0.381057290 container create e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:23:20 compute-0 systemd[1]: Started libpod-conmon-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope.
Oct 07 15:23:20 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.56785484 +0000 UTC m=+0.618132637 container init e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.576075426 +0000 UTC m=+0.626353183 container start e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 15:23:20 compute-0 modest_keller[463319]: 167 167
Oct 07 15:23:20 compute-0 systemd[1]: libpod-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope: Deactivated successfully.
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.658879271 +0000 UTC m=+0.709157048 container attach e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.661475009 +0000 UTC m=+0.711752786 container died e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4586fefa4422814d6da0f615cf6d02bf5656212496e806438be2bc5210bae9b1-merged.mount: Deactivated successfully.
Oct 07 15:23:20 compute-0 ceph-mon[74295]: pgmap v3661: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:20 compute-0 podman[463302]: 2025-10-07 15:23:20.902895231 +0000 UTC m=+0.953172988 container remove e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 07 15:23:20 compute-0 systemd[1]: libpod-conmon-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope: Deactivated successfully.
Oct 07 15:23:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:21 compute-0 podman[463341]: 2025-10-07 15:23:21.153021391 +0000 UTC m=+0.045612769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:21 compute-0 podman[463341]: 2025-10-07 15:23:21.510473171 +0000 UTC m=+0.403064449 container create 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 07 15:23:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:21 compute-0 systemd[1]: Started libpod-conmon-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope.
Oct 07 15:23:21 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:21 compute-0 podman[463341]: 2025-10-07 15:23:21.80665241 +0000 UTC m=+0.699243708 container init 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 07 15:23:21 compute-0 podman[463341]: 2025-10-07 15:23:21.816504629 +0000 UTC m=+0.709095937 container start 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:23:21 compute-0 podman[463341]: 2025-10-07 15:23:21.971184723 +0000 UTC m=+0.863776001 container attach 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:23:22 compute-0 ceph-mon[74295]: pgmap v3662: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:23:22
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct 07 15:23:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:23:22 compute-0 hardcore_lamport[463357]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:23:22 compute-0 hardcore_lamport[463357]: --> relative data size: 1.0
Oct 07 15:23:22 compute-0 hardcore_lamport[463357]: --> All data devices are unavailable
Oct 07 15:23:22 compute-0 systemd[1]: libpod-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Deactivated successfully.
Oct 07 15:23:22 compute-0 systemd[1]: libpod-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Consumed 1.041s CPU time.
Oct 07 15:23:22 compute-0 podman[463386]: 2025-10-07 15:23:22.964538444 +0000 UTC m=+0.031647392 container died 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb-merged.mount: Deactivated successfully.
Oct 07 15:23:23 compute-0 podman[463386]: 2025-10-07 15:23:23.112526302 +0000 UTC m=+0.179635260 container remove 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 15:23:23 compute-0 systemd[1]: libpod-conmon-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Deactivated successfully.
Oct 07 15:23:23 compute-0 sudo[463238]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:23 compute-0 sudo[463402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:23 compute-0 sudo[463402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:23 compute-0 sudo[463402]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:23 compute-0 sudo[463427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:23:23 compute-0 sudo[463427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:23 compute-0 sudo[463427]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:23 compute-0 sudo[463452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:23 compute-0 sudo[463452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:23 compute-0 sudo[463452]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:23 compute-0 sudo[463477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:23:23 compute-0 sudo[463477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:23:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.768405009 +0000 UTC m=+0.060306415 container create 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:23:23 compute-0 systemd[1]: Started libpod-conmon-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope.
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.738527395 +0000 UTC m=+0.030428811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:23 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.871843296 +0000 UTC m=+0.163744692 container init 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.879749954 +0000 UTC m=+0.171651320 container start 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:23:23 compute-0 pedantic_chatterjee[463559]: 167 167
Oct 07 15:23:23 compute-0 systemd[1]: libpod-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope: Deactivated successfully.
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.885869445 +0000 UTC m=+0.177770811 container attach 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.887417035 +0000 UTC m=+0.179318401 container died 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-254a87a56049e2ecea16e77c25636492b5ed3806eed179c1d01ff334259674cd-merged.mount: Deactivated successfully.
Oct 07 15:23:23 compute-0 podman[463543]: 2025-10-07 15:23:23.952152046 +0000 UTC m=+0.244053412 container remove 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:23:23 compute-0 systemd[1]: libpod-conmon-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope: Deactivated successfully.
Oct 07 15:23:24 compute-0 podman[463582]: 2025-10-07 15:23:24.141973742 +0000 UTC m=+0.045658361 container create 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:23:24 compute-0 systemd[1]: Started libpod-conmon-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope.
Oct 07 15:23:24 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:24 compute-0 podman[463582]: 2025-10-07 15:23:24.124991646 +0000 UTC m=+0.028676285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:24 compute-0 podman[463582]: 2025-10-07 15:23:24.238138238 +0000 UTC m=+0.141822897 container init 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 07 15:23:24 compute-0 podman[463582]: 2025-10-07 15:23:24.251402686 +0000 UTC m=+0.155087315 container start 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 07 15:23:24 compute-0 podman[463582]: 2025-10-07 15:23:24.255235977 +0000 UTC m=+0.158920636 container attach 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 07 15:23:24 compute-0 nova_compute[259550]: 2025-10-07 15:23:24.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:24 compute-0 ceph-mon[74295]: pgmap v3663: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:25 compute-0 nova_compute[259550]: 2025-10-07 15:23:25.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:25 compute-0 suspicious_turing[463598]: {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     "0": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "devices": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "/dev/loop3"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             ],
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_name": "ceph_lv0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_size": "21470642176",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "name": "ceph_lv0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "tags": {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_name": "ceph",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.crush_device_class": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.encrypted": "0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_id": "0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.vdo": "0"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             },
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "vg_name": "ceph_vg0"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         }
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     ],
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     "1": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "devices": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "/dev/loop4"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             ],
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_name": "ceph_lv1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_size": "21470642176",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "name": "ceph_lv1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "tags": {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_name": "ceph",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.crush_device_class": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.encrypted": "0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_id": "1",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.vdo": "0"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             },
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "vg_name": "ceph_vg1"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         }
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     ],
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     "2": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "devices": [
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "/dev/loop5"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             ],
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_name": "ceph_lv2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_size": "21470642176",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "name": "ceph_lv2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "tags": {
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.cluster_name": "ceph",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.crush_device_class": "",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.encrypted": "0",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osd_id": "2",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:                 "ceph.vdo": "0"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             },
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "type": "block",
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:             "vg_name": "ceph_vg2"
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:         }
Oct 07 15:23:25 compute-0 suspicious_turing[463598]:     ]
Oct 07 15:23:25 compute-0 suspicious_turing[463598]: }
Oct 07 15:23:25 compute-0 systemd[1]: libpod-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope: Deactivated successfully.
Oct 07 15:23:25 compute-0 podman[463582]: 2025-10-07 15:23:25.139534464 +0000 UTC m=+1.043219133 container died 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82-merged.mount: Deactivated successfully.
Oct 07 15:23:25 compute-0 podman[463582]: 2025-10-07 15:23:25.226324264 +0000 UTC m=+1.130008893 container remove 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:23:25 compute-0 systemd[1]: libpod-conmon-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope: Deactivated successfully.
Oct 07 15:23:25 compute-0 sudo[463477]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:25 compute-0 sudo[463620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:25 compute-0 sudo[463620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:25 compute-0 sudo[463620]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:25 compute-0 sudo[463645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:23:25 compute-0 sudo[463645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:25 compute-0 sudo[463645]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:25 compute-0 sudo[463670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:25 compute-0 sudo[463670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:25 compute-0 sudo[463670]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:25 compute-0 sudo[463695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:23:25 compute-0 sudo[463695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.885341324 +0000 UTC m=+0.047899239 container create 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:23:25 compute-0 systemd[1]: Started libpod-conmon-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope.
Oct 07 15:23:25 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.867849805 +0000 UTC m=+0.030407730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.98147809 +0000 UTC m=+0.144036025 container init 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.989467279 +0000 UTC m=+0.152025174 container start 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 07 15:23:25 compute-0 heuristic_almeida[463776]: 167 167
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.994136862 +0000 UTC m=+0.156694787 container attach 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 07 15:23:25 compute-0 systemd[1]: libpod-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope: Deactivated successfully.
Oct 07 15:23:25 compute-0 podman[463760]: 2025-10-07 15:23:25.995230191 +0000 UTC m=+0.157788136 container died 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a83e31d7f140dc3a03b03ada7f12904e5b4d52cd6f8ec3d1ddd90c997292b7f-merged.mount: Deactivated successfully.
Oct 07 15:23:26 compute-0 podman[463760]: 2025-10-07 15:23:26.056009907 +0000 UTC m=+0.218567802 container remove 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 07 15:23:26 compute-0 systemd[1]: libpod-conmon-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope: Deactivated successfully.
Oct 07 15:23:26 compute-0 podman[463803]: 2025-10-07 15:23:26.265721906 +0000 UTC m=+0.049896482 container create 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:23:26 compute-0 systemd[1]: Started libpod-conmon-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope.
Oct 07 15:23:26 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:26 compute-0 podman[463803]: 2025-10-07 15:23:26.245394712 +0000 UTC m=+0.029569118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:23:26 compute-0 podman[463803]: 2025-10-07 15:23:26.35533552 +0000 UTC m=+0.139509886 container init 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:23:26 compute-0 podman[463803]: 2025-10-07 15:23:26.363248327 +0000 UTC m=+0.147422693 container start 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:23:26 compute-0 podman[463803]: 2025-10-07 15:23:26.367699455 +0000 UTC m=+0.151873871 container attach 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:23:26 compute-0 ceph-mon[74295]: pgmap v3664: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]: {
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_id": 2,
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "type": "bluestore"
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     },
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_id": 1,
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "type": "bluestore"
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     },
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_id": 0,
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:         "type": "bluestore"
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]:     }
Oct 07 15:23:27 compute-0 flamboyant_wright[463819]: }
Oct 07 15:23:27 compute-0 systemd[1]: libpod-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Deactivated successfully.
Oct 07 15:23:27 compute-0 systemd[1]: libpod-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Consumed 1.089s CPU time.
Oct 07 15:23:27 compute-0 podman[463803]: 2025-10-07 15:23:27.447208271 +0000 UTC m=+1.231382637 container died 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 07 15:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f-merged.mount: Deactivated successfully.
Oct 07 15:23:27 compute-0 podman[463803]: 2025-10-07 15:23:27.509529857 +0000 UTC m=+1.293704263 container remove 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:23:27 compute-0 systemd[1]: libpod-conmon-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Deactivated successfully.
Oct 07 15:23:27 compute-0 sudo[463695]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:23:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:27 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:23:27 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ce13f93a-ae27-4335-b0fd-6850ded24d3b does not exist
Oct 07 15:23:27 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cca867dd-b469-4dd5-b529-2581c21fca51 does not exist
Oct 07 15:23:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:27 compute-0 sudo[463864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:23:27 compute-0 sudo[463864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:27 compute-0 sudo[463864]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:27 compute-0 sudo[463889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:23:27 compute-0 sudo[463889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:23:27 compute-0 sudo[463889]: pam_unix(sudo:session): session closed for user root
Oct 07 15:23:27 compute-0 nova_compute[259550]: 2025-10-07 15:23:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:23:28 compute-0 ceph-mon[74295]: pgmap v3665: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:29 compute-0 podman[463914]: 2025-10-07 15:23:29.1036738 +0000 UTC m=+0.080788773 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 07 15:23:29 compute-0 podman[463915]: 2025-10-07 15:23:29.112465411 +0000 UTC m=+0.089765329 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 07 15:23:29 compute-0 nova_compute[259550]: 2025-10-07 15:23:29.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:30 compute-0 nova_compute[259550]: 2025-10-07 15:23:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:30 compute-0 ceph-mon[74295]: pgmap v3666: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:23:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:30 compute-0 nova_compute[259550]: 2025-10-07 15:23:30.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:30 compute-0 nova_compute[259550]: 2025-10-07 15:23:30.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:23:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:23:32 compute-0 ceph-mon[74295]: pgmap v3667: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:23:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:23:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:23:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:23:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:23:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:23:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:23:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:23:33 compute-0 nova_compute[259550]: 2025-10-07 15:23:33.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:34 compute-0 nova_compute[259550]: 2025-10-07 15:23:34.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:34 compute-0 ceph-mon[74295]: pgmap v3668: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct 07 15:23:35 compute-0 nova_compute[259550]: 2025-10-07 15:23:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct 07 15:23:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:35 compute-0 nova_compute[259550]: 2025-10-07 15:23:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:36 compute-0 ceph-mon[74295]: pgmap v3669: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct 07 15:23:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:38 compute-0 ceph-mon[74295]: pgmap v3670: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:38 compute-0 nova_compute[259550]: 2025-10-07 15:23:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:39 compute-0 nova_compute[259550]: 2025-10-07 15:23:39.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:39 compute-0 nova_compute[259550]: 2025-10-07 15:23:39.936 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:39 compute-0 nova_compute[259550]: 2025-10-07 15:23:39.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:40 compute-0 nova_compute[259550]: 2025-10-07 15:23:40.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:40 compute-0 ceph-mon[74295]: pgmap v3671: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:42 compute-0 ceph-mon[74295]: pgmap v3672: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 07 15:23:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:23:44 compute-0 nova_compute[259550]: 2025-10-07 15:23:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:44 compute-0 ceph-mon[74295]: pgmap v3673: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:23:45 compute-0 nova_compute[259550]: 2025-10-07 15:23:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:45 compute-0 podman[463961]: 2025-10-07 15:23:45.090244571 +0000 UTC m=+0.063872178 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 07 15:23:45 compute-0 podman[463960]: 2025-10-07 15:23:45.115015612 +0000 UTC m=+0.087325504 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 07 15:23:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:23:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:46 compute-0 ceph-mon[74295]: pgmap v3674: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 07 15:23:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct 07 15:23:48 compute-0 ceph-mon[74295]: pgmap v3675: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct 07 15:23:49 compute-0 nova_compute[259550]: 2025-10-07 15:23:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:50 compute-0 nova_compute[259550]: 2025-10-07 15:23:50.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:50 compute-0 ceph-mon[74295]: pgmap v3676: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:23:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:23:52 compute-0 ceph-mon[74295]: pgmap v3677: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:53 compute-0 nova_compute[259550]: 2025-10-07 15:23:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:23:54 compute-0 ceph-mon[74295]: pgmap v3678: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:23:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788824219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.638 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.782 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.783 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.784 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.784 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.888 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:23:54 compute-0 nova_compute[259550]: 2025-10-07 15:23:54.923 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2788824219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:23:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:23:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115273436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.375 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.382 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.410 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.412 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:23:55 compute-0 nova_compute[259550]: 2025-10-07 15:23:55.413 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:23:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:23:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4115273436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:23:56 compute-0 ceph-mon[74295]: pgmap v3679: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:58 compute-0 ceph-mon[74295]: pgmap v3680: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:23:59 compute-0 nova_compute[259550]: 2025-10-07 15:23:59.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:23:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:24:00 compute-0 podman[464042]: 2025-10-07 15:24:00.064148911 +0000 UTC m=+0.052560472 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:24:00 compute-0 nova_compute[259550]: 2025-10-07 15:24:00.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:00 compute-0 podman[464043]: 2025-10-07 15:24:00.106121884 +0000 UTC m=+0.090398496 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:24:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:24:00 compute-0 nova_compute[259550]: 2025-10-07 15:24:00.414 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:00 compute-0 nova_compute[259550]: 2025-10-07 15:24:00.414 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:24:00 compute-0 nova_compute[259550]: 2025-10-07 15:24:00.415 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:24:00 compute-0 nova_compute[259550]: 2025-10-07 15:24:00.428 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:24:00 compute-0 ceph-mon[74295]: pgmap v3681: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 07 15:24:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:02 compute-0 ceph-mon[74295]: pgmap v3682: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:03 compute-0 nova_compute[259550]: 2025-10-07 15:24:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:04 compute-0 nova_compute[259550]: 2025-10-07 15:24:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:04 compute-0 ceph-mon[74295]: pgmap v3683: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:05 compute-0 nova_compute[259550]: 2025-10-07 15:24:05.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:06 compute-0 ceph-mon[74295]: pgmap v3684: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:08 compute-0 ceph-mon[74295]: pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:09 compute-0 nova_compute[259550]: 2025-10-07 15:24:09.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:10 compute-0 nova_compute[259550]: 2025-10-07 15:24:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:10 compute-0 ceph-mon[74295]: pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:12 compute-0 ceph-mon[74295]: pgmap v3687: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:14 compute-0 nova_compute[259550]: 2025-10-07 15:24:14.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:14 compute-0 ceph-mon[74295]: pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:15 compute-0 nova_compute[259550]: 2025-10-07 15:24:15.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:16 compute-0 podman[464086]: 2025-10-07 15:24:16.057776095 +0000 UTC m=+0.049106891 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 07 15:24:16 compute-0 podman[464085]: 2025-10-07 15:24:16.059640513 +0000 UTC m=+0.053269400 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 07 15:24:16 compute-0 ceph-mon[74295]: pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:18 compute-0 ceph-mon[74295]: pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:19 compute-0 nova_compute[259550]: 2025-10-07 15:24:19.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:20 compute-0 nova_compute[259550]: 2025-10-07 15:24:20.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:20 compute-0 ceph-mon[74295]: pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:22 compute-0 ceph-mon[74295]: pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:24:22
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms']
Oct 07 15:24:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:24:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:24 compute-0 nova_compute[259550]: 2025-10-07 15:24:24.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:24 compute-0 ceph-mon[74295]: pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:25 compute-0 nova_compute[259550]: 2025-10-07 15:24:25.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:26 compute-0 ceph-mon[74295]: pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:27 compute-0 sudo[464123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:27 compute-0 sudo[464123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:27 compute-0 sudo[464123]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:27 compute-0 sudo[464148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:24:27 compute-0 sudo[464148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:27 compute-0 sudo[464148]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:27 compute-0 sudo[464173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:27 compute-0 sudo[464173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:27 compute-0 sudo[464173]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:27 compute-0 nova_compute[259550]: 2025-10-07 15:24:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:28 compute-0 sudo[464198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:24:28 compute-0 sudo[464198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:28 compute-0 sudo[464198]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev ccf56b99-df22-4ff0-97d0-b80a31dac81e does not exist
Oct 07 15:24:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev c941abd9-cdae-4ae6-a556-f0a2a4b2a3d6 does not exist
Oct 07 15:24:28 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 55953369-4cba-4b27-970d-de366e8481cf does not exist
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:24:28 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:24:28 compute-0 sudo[464254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:28 compute-0 sudo[464254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:28 compute-0 sudo[464254]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:28 compute-0 sudo[464279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:24:28 compute-0 sudo[464279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:28 compute-0 sudo[464279]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:28 compute-0 sudo[464304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:28 compute-0 sudo[464304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:28 compute-0 sudo[464304]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:28 compute-0 sudo[464329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:24:28 compute-0 sudo[464329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:28 compute-0 ceph-mon[74295]: pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:24:28 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.069827712 +0000 UTC m=+0.043767511 container create 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:24:29 compute-0 systemd[1]: Started libpod-conmon-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope.
Oct 07 15:24:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.143044225 +0000 UTC m=+0.116984044 container init 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.051916181 +0000 UTC m=+0.025856000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.149579837 +0000 UTC m=+0.123519636 container start 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.154010853 +0000 UTC m=+0.127950652 container attach 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 07 15:24:29 compute-0 kind_bose[464412]: 167 167
Oct 07 15:24:29 compute-0 systemd[1]: libpod-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope: Deactivated successfully.
Oct 07 15:24:29 compute-0 conmon[464412]: conmon 52c056d33fd48f3d0d92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope/container/memory.events
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.156341974 +0000 UTC m=+0.130281773 container died 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 07 15:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7acf668f3441202d574ee8e77b9ca8990a0f3c93ce78ecc1027a791a22cda63c-merged.mount: Deactivated successfully.
Oct 07 15:24:29 compute-0 podman[464394]: 2025-10-07 15:24:29.192492694 +0000 UTC m=+0.166432493 container remove 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 07 15:24:29 compute-0 systemd[1]: libpod-conmon-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope: Deactivated successfully.
Oct 07 15:24:29 compute-0 podman[464437]: 2025-10-07 15:24:29.355872916 +0000 UTC m=+0.050599041 container create b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:24:29 compute-0 systemd[1]: Started libpod-conmon-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope.
Oct 07 15:24:29 compute-0 podman[464437]: 2025-10-07 15:24:29.331132706 +0000 UTC m=+0.025858911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:29 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:29 compute-0 podman[464437]: 2025-10-07 15:24:29.451063656 +0000 UTC m=+0.145789831 container init b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:24:29 compute-0 podman[464437]: 2025-10-07 15:24:29.460873394 +0000 UTC m=+0.155599509 container start b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:24:29 compute-0 podman[464437]: 2025-10-07 15:24:29.464384006 +0000 UTC m=+0.159110131 container attach b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 07 15:24:29 compute-0 nova_compute[259550]: 2025-10-07 15:24:29.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:30 compute-0 nova_compute[259550]: 2025-10-07 15:24:30.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:30 compute-0 serene_hertz[464454]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:24:30 compute-0 serene_hertz[464454]: --> relative data size: 1.0
Oct 07 15:24:30 compute-0 serene_hertz[464454]: --> All data devices are unavailable
Oct 07 15:24:30 compute-0 systemd[1]: libpod-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Deactivated successfully.
Oct 07 15:24:30 compute-0 systemd[1]: libpod-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Consumed 1.078s CPU time.
Oct 07 15:24:30 compute-0 podman[464437]: 2025-10-07 15:24:30.586529362 +0000 UTC m=+1.281255477 container died b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 07 15:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6-merged.mount: Deactivated successfully.
Oct 07 15:24:30 compute-0 podman[464437]: 2025-10-07 15:24:30.667359304 +0000 UTC m=+1.362085419 container remove b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 07 15:24:30 compute-0 systemd[1]: libpod-conmon-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Deactivated successfully.
Oct 07 15:24:30 compute-0 sudo[464329]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:30 compute-0 podman[464484]: 2025-10-07 15:24:30.704706685 +0000 UTC m=+0.081806210 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 07 15:24:30 compute-0 sudo[464530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:30 compute-0 podman[464492]: 2025-10-07 15:24:30.760790408 +0000 UTC m=+0.135584722 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:24:30 compute-0 sudo[464530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:30 compute-0 sudo[464530]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:30 compute-0 sudo[464561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:24:30 compute-0 sudo[464561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:30 compute-0 ceph-mon[74295]: pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:30 compute-0 sudo[464561]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:30 compute-0 sudo[464586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:30 compute-0 sudo[464586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:30 compute-0 sudo[464586]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:30 compute-0 sudo[464611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:24:30 compute-0 sudo[464611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.295225757 +0000 UTC m=+0.050493518 container create d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:24:31 compute-0 systemd[1]: Started libpod-conmon-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope.
Oct 07 15:24:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.274608115 +0000 UTC m=+0.029875886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.37114069 +0000 UTC m=+0.126408511 container init d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.385124698 +0000 UTC m=+0.140392469 container start d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.389372629 +0000 UTC m=+0.144640390 container attach d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 07 15:24:31 compute-0 inspiring_mclaren[464692]: 167 167
Oct 07 15:24:31 compute-0 systemd[1]: libpod-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope: Deactivated successfully.
Oct 07 15:24:31 compute-0 conmon[464692]: conmon d3dcc85c47adfe3bd4ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope/container/memory.events
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.393791115 +0000 UTC m=+0.149058856 container died d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 07 15:24:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a774c59ae68d8bbc8f7c10ba65dc602715f95e60aaa7c66cd4489bf992c275-merged.mount: Deactivated successfully.
Oct 07 15:24:31 compute-0 podman[464676]: 2025-10-07 15:24:31.430840389 +0000 UTC m=+0.186108120 container remove d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:24:31 compute-0 systemd[1]: libpod-conmon-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope: Deactivated successfully.
Oct 07 15:24:31 compute-0 podman[464718]: 2025-10-07 15:24:31.584965997 +0000 UTC m=+0.042090706 container create a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 07 15:24:31 compute-0 systemd[1]: Started libpod-conmon-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope.
Oct 07 15:24:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:31 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:31 compute-0 podman[464718]: 2025-10-07 15:24:31.659489205 +0000 UTC m=+0.116613954 container init a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:24:31 compute-0 podman[464718]: 2025-10-07 15:24:31.56528091 +0000 UTC m=+0.022405639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:31 compute-0 podman[464718]: 2025-10-07 15:24:31.667014132 +0000 UTC m=+0.124138851 container start a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 07 15:24:31 compute-0 podman[464718]: 2025-10-07 15:24:31.670991456 +0000 UTC m=+0.128116255 container attach a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:24:31 compute-0 nova_compute[259550]: 2025-10-07 15:24:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:31 compute-0 nova_compute[259550]: 2025-10-07 15:24:31.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:24:32 compute-0 quizzical_gould[464734]: {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     "0": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "devices": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "/dev/loop3"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             ],
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_name": "ceph_lv0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_size": "21470642176",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "name": "ceph_lv0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "tags": {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_name": "ceph",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.crush_device_class": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.encrypted": "0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_id": "0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.vdo": "0"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             },
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "vg_name": "ceph_vg0"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         }
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     ],
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     "1": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "devices": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "/dev/loop4"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             ],
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_name": "ceph_lv1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_size": "21470642176",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "name": "ceph_lv1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "tags": {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_name": "ceph",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.crush_device_class": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.encrypted": "0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_id": "1",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.vdo": "0"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             },
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "vg_name": "ceph_vg1"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         }
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     ],
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     "2": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "devices": [
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "/dev/loop5"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             ],
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_name": "ceph_lv2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_size": "21470642176",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "name": "ceph_lv2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "tags": {
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.cluster_name": "ceph",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.crush_device_class": "",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.encrypted": "0",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osd_id": "2",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:                 "ceph.vdo": "0"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             },
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "type": "block",
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:             "vg_name": "ceph_vg2"
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:         }
Oct 07 15:24:32 compute-0 quizzical_gould[464734]:     ]
Oct 07 15:24:32 compute-0 quizzical_gould[464734]: }
Oct 07 15:24:32 compute-0 systemd[1]: libpod-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope: Deactivated successfully.
Oct 07 15:24:32 compute-0 podman[464718]: 2025-10-07 15:24:32.396076032 +0000 UTC m=+0.853200771 container died a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:24:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db-merged.mount: Deactivated successfully.
Oct 07 15:24:32 compute-0 podman[464718]: 2025-10-07 15:24:32.458946044 +0000 UTC m=+0.916070763 container remove a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:24:32 compute-0 systemd[1]: libpod-conmon-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope: Deactivated successfully.
Oct 07 15:24:32 compute-0 sudo[464611]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:32 compute-0 sudo[464755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:32 compute-0 sudo[464755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:32 compute-0 sudo[464755]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:32 compute-0 sudo[464780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:24:32 compute-0 sudo[464780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:32 compute-0 sudo[464780]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:32 compute-0 sudo[464805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:32 compute-0 sudo[464805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:32 compute-0 sudo[464805]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:32 compute-0 sudo[464830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:24:32 compute-0 sudo[464830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:24:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:24:32 compute-0 ceph-mon[74295]: pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:24:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.07441501 +0000 UTC m=+0.038998305 container create 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:24:33 compute-0 systemd[1]: Started libpod-conmon-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope.
Oct 07 15:24:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.143997648 +0000 UTC m=+0.108580963 container init 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.150752625 +0000 UTC m=+0.115335940 container start 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.153864757 +0000 UTC m=+0.118448062 container attach 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.058864031 +0000 UTC m=+0.023447356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:33 compute-0 jovial_maxwell[464913]: 167 167
Oct 07 15:24:33 compute-0 systemd[1]: libpod-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope: Deactivated successfully.
Oct 07 15:24:33 compute-0 conmon[464913]: conmon 5eae5440465fdd29c647 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope/container/memory.events
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.160094871 +0000 UTC m=+0.124678166 container died 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 07 15:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ed45e94e6f5b0f72e7fcf1fa157ab6078c82978452eb17f4ff2a07b39300745-merged.mount: Deactivated successfully.
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:24:33 compute-0 podman[464897]: 2025-10-07 15:24:33.204385194 +0000 UTC m=+0.168968499 container remove 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 07 15:24:33 compute-0 systemd[1]: libpod-conmon-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope: Deactivated successfully.
Oct 07 15:24:33 compute-0 podman[464937]: 2025-10-07 15:24:33.405597049 +0000 UTC m=+0.042781005 container create b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:24:33 compute-0 systemd[1]: Started libpod-conmon-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope.
Oct 07 15:24:33 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:24:33 compute-0 podman[464937]: 2025-10-07 15:24:33.387277938 +0000 UTC m=+0.024461924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:24:33 compute-0 podman[464937]: 2025-10-07 15:24:33.493152969 +0000 UTC m=+0.130336945 container init b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 07 15:24:33 compute-0 podman[464937]: 2025-10-07 15:24:33.499062245 +0000 UTC m=+0.136246221 container start b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:24:33 compute-0 podman[464937]: 2025-10-07 15:24:33.501911009 +0000 UTC m=+0.139094985 container attach b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:24:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:24:33 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:24:34 compute-0 naughty_lewin[464954]: {
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_id": 2,
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "type": "bluestore"
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     },
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_id": 1,
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "type": "bluestore"
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     },
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_id": 0,
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:         "type": "bluestore"
Oct 07 15:24:34 compute-0 naughty_lewin[464954]:     }
Oct 07 15:24:34 compute-0 naughty_lewin[464954]: }
Oct 07 15:24:34 compute-0 systemd[1]: libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Deactivated successfully.
Oct 07 15:24:34 compute-0 systemd[1]: libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Consumed 1.013s CPU time.
Oct 07 15:24:34 compute-0 conmon[464954]: conmon b0e12d97464a8acd32b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope/container/memory.events
Oct 07 15:24:34 compute-0 podman[464937]: 2025-10-07 15:24:34.506342202 +0000 UTC m=+1.143526198 container died b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 15:24:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055-merged.mount: Deactivated successfully.
Oct 07 15:24:34 compute-0 podman[464937]: 2025-10-07 15:24:34.567984191 +0000 UTC m=+1.205168137 container remove b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 07 15:24:34 compute-0 systemd[1]: libpod-conmon-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Deactivated successfully.
Oct 07 15:24:34 compute-0 sudo[464830]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:24:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:34 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:24:34 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 3344b6f1-4a67-44d6-ada0-30f82f899fee does not exist
Oct 07 15:24:34 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev cd5370f6-b2d1-46ee-bca4-4ffa53ce6249 does not exist
Oct 07 15:24:34 compute-0 nova_compute[259550]: 2025-10-07 15:24:34.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:34 compute-0 sudo[464999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:24:34 compute-0 sudo[464999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:34 compute-0 sudo[464999]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:34 compute-0 sudo[465024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:24:34 compute-0 sudo[465024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:24:34 compute-0 sudo[465024]: pam_unix(sudo:session): session closed for user root
Oct 07 15:24:34 compute-0 ceph-mon[74295]: pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:34 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:24:35 compute-0 nova_compute[259550]: 2025-10-07 15:24:35.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:35 compute-0 nova_compute[259550]: 2025-10-07 15:24:35.980 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:36 compute-0 ceph-mon[74295]: pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:37 compute-0 nova_compute[259550]: 2025-10-07 15:24:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:38 compute-0 ceph-mon[74295]: pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:38 compute-0 nova_compute[259550]: 2025-10-07 15:24:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:39 compute-0 nova_compute[259550]: 2025-10-07 15:24:39.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:39 compute-0 nova_compute[259550]: 2025-10-07 15:24:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:40 compute-0 nova_compute[259550]: 2025-10-07 15:24:40.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:40 compute-0 ceph-mon[74295]: pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:42 compute-0 ceph-mon[74295]: pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:44 compute-0 nova_compute[259550]: 2025-10-07 15:24:44.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:44 compute-0 ceph-mon[74295]: pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:45 compute-0 nova_compute[259550]: 2025-10-07 15:24:45.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:46 compute-0 ceph-mon[74295]: pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:47 compute-0 podman[465050]: 2025-10-07 15:24:47.083158149 +0000 UTC m=+0.063587262 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:24:47 compute-0 podman[465049]: 2025-10-07 15:24:47.086229179 +0000 UTC m=+0.071321594 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 07 15:24:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:48 compute-0 ceph-mon[74295]: pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:49 compute-0 nova_compute[259550]: 2025-10-07 15:24:49.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:49 compute-0 nova_compute[259550]: 2025-10-07 15:24:49.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:50 compute-0 nova_compute[259550]: 2025-10-07 15:24:50.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:50 compute-0 ceph-mon[74295]: pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:24:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:24:52 compute-0 ceph-mon[74295]: pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:54 compute-0 nova_compute[259550]: 2025-10-07 15:24:54.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:54 compute-0 nova_compute[259550]: 2025-10-07 15:24:54.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:24:55 compute-0 ceph-mon[74295]: pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:24:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:24:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153158543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.555 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:24:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.700 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.701 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.701 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.702 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.861 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:24:55 compute-0 nova_compute[259550]: 2025-10-07 15:24:55.861 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:24:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.139 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 07 15:24:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2153158543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.228 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.228 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.242 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.276 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:24:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:24:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352555989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.726 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.731 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.747 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.749 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:24:56 compute-0 nova_compute[259550]: 2025-10-07 15:24:56.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:24:57 compute-0 ceph-mon[74295]: pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1352555989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:24:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:58 compute-0 ceph-mon[74295]: pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:24:59 compute-0 nova_compute[259550]: 2025-10-07 15:24:59.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:25:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:25:00 compute-0 nova_compute[259550]: 2025-10-07 15:25:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:00 compute-0 ceph-mon[74295]: pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:01 compute-0 podman[465131]: 2025-10-07 15:25:01.064627479 +0000 UTC m=+0.052636874 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 07 15:25:01 compute-0 podman[465132]: 2025-10-07 15:25:01.094415742 +0000 UTC m=+0.080211999 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 07 15:25:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:01 compute-0 nova_compute[259550]: 2025-10-07 15:25:01.748 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:01 compute-0 nova_compute[259550]: 2025-10-07 15:25:01.749 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:25:01 compute-0 nova_compute[259550]: 2025-10-07 15:25:01.749 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:25:01 compute-0 nova_compute[259550]: 2025-10-07 15:25:01.765 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:25:02 compute-0 ceph-mon[74295]: pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:03 compute-0 nova_compute[259550]: 2025-10-07 15:25:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:04 compute-0 nova_compute[259550]: 2025-10-07 15:25:04.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:04 compute-0 ceph-mon[74295]: pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:05 compute-0 nova_compute[259550]: 2025-10-07 15:25:05.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:06 compute-0 ceph-mon[74295]: pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:08 compute-0 ceph-mon[74295]: pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:09 compute-0 nova_compute[259550]: 2025-10-07 15:25:09.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:10 compute-0 nova_compute[259550]: 2025-10-07 15:25:10.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:10 compute-0 ceph-mon[74295]: pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:12 compute-0 ceph-mon[74295]: pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:14 compute-0 nova_compute[259550]: 2025-10-07 15:25:14.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:14 compute-0 ceph-mon[74295]: pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:15 compute-0 nova_compute[259550]: 2025-10-07 15:25:15.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:16 compute-0 ceph-mon[74295]: pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:18 compute-0 podman[465174]: 2025-10-07 15:25:18.059741657 +0000 UTC m=+0.052097509 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 07 15:25:18 compute-0 podman[465175]: 2025-10-07 15:25:18.066162996 +0000 UTC m=+0.055150539 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 07 15:25:18 compute-0 ceph-mon[74295]: pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:19 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:19 compute-0 nova_compute[259550]: 2025-10-07 15:25:19.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:20 compute-0 nova_compute[259550]: 2025-10-07 15:25:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:20 compute-0 ceph-mon[74295]: pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:20 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:21 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:22 compute-0 ceph-mon[74295]: pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:25:22
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta']
Oct 07 15:25:22 compute-0 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 07 15:25:23 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:24 compute-0 nova_compute[259550]: 2025-10-07 15:25:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:24 compute-0 ceph-mon[74295]: pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:25 compute-0 nova_compute[259550]: 2025-10-07 15:25:25.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:25 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:25 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:26 compute-0 ceph-mon[74295]: pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:27 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:28 compute-0 ceph-mon[74295]: pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:29 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:29 compute-0 nova_compute[259550]: 2025-10-07 15:25:29.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:29 compute-0 nova_compute[259550]: 2025-10-07 15:25:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:30 compute-0 nova_compute[259550]: 2025-10-07 15:25:30.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:30 compute-0 ceph-mon[74295]: pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:30 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:31 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:32 compute-0 podman[465216]: 2025-10-07 15:25:32.089067738 +0000 UTC m=+0.067576826 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 15:25:32 compute-0 podman[465217]: 2025-10-07 15:25:32.136677168 +0000 UTC m=+0.097802229 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 07 15:25:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 07 15:25:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:25:32 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 07 15:25:32 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:25:32 compute-0 ceph-mon[74295]: pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 07 15:25:32 compute-0 ceph-mon[74295]: from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 07 15:25:32 compute-0 nova_compute[259550]: 2025-10-07 15:25:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:32 compute-0 nova_compute[259550]: 2025-10-07 15:25:32.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:25:33 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:34 compute-0 nova_compute[259550]: 2025-10-07 15:25:34.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:34 compute-0 sudo[465263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:34 compute-0 sudo[465263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:34 compute-0 sudo[465263]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:34 compute-0 sudo[465288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:25:34 compute-0 sudo[465288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:34 compute-0 sudo[465288]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:34 compute-0 ceph-mon[74295]: pgmap v3728: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:34 compute-0 sudo[465313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:34 compute-0 sudo[465313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:34 compute-0 sudo[465313]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:34 compute-0 sudo[465338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 07 15:25:34 compute-0 sudo[465338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:35 compute-0 nova_compute[259550]: 2025-10-07 15:25:35.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:35 compute-0 sudo[465338]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev f819cce3-f64a-4461-acb7-06c88141a954 does not exist
Oct 07 15:25:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 925ecaec-31b9-4ea0-a726-3ed3a3b3905a does not exist
Oct 07 15:25:35 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev d8bf0ff8-9060-47fa-882f-75b2f266d250 does not exist
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:25:35 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:35 compute-0 sudo[465392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:35 compute-0 sudo[465392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:35 compute-0 sudo[465392]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:35 compute-0 sudo[465417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:25:35 compute-0 sudo[465417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:35 compute-0 sudo[465417]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:35 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:35 compute-0 sudo[465442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:35 compute-0 sudo[465442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:35 compute-0 sudo[465442]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:35 compute-0 sudo[465467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 07 15:25:35 compute-0 sudo[465467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:35 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.064247435 +0000 UTC m=+0.041251235 container create 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:25:36 compute-0 systemd[1]: Started libpod-conmon-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope.
Oct 07 15:25:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.131320696 +0000 UTC m=+0.108324516 container init 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.044460464 +0000 UTC m=+0.021464294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.142178211 +0000 UTC m=+0.119182051 container start 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.146571186 +0000 UTC m=+0.123575016 container attach 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:25:36 compute-0 infallible_stonebraker[465549]: 167 167
Oct 07 15:25:36 compute-0 systemd[1]: libpod-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope: Deactivated successfully.
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.148906198 +0000 UTC m=+0.125910028 container died 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 07 15:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d27d28479245ef47f8a02c208c552ff700b3df8acf2074bd4ca2b447790d79aa-merged.mount: Deactivated successfully.
Oct 07 15:25:36 compute-0 podman[465533]: 2025-10-07 15:25:36.199968269 +0000 UTC m=+0.176972069 container remove 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:25:36 compute-0 systemd[1]: libpod-conmon-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope: Deactivated successfully.
Oct 07 15:25:36 compute-0 podman[465573]: 2025-10-07 15:25:36.351702765 +0000 UTC m=+0.040075734 container create 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 07 15:25:36 compute-0 systemd[1]: Started libpod-conmon-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope.
Oct 07 15:25:36 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:36 compute-0 podman[465573]: 2025-10-07 15:25:36.33513525 +0000 UTC m=+0.023508239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:36 compute-0 podman[465573]: 2025-10-07 15:25:36.515519488 +0000 UTC m=+0.203892537 container init 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:25:36 compute-0 podman[465573]: 2025-10-07 15:25:36.525617643 +0000 UTC m=+0.213990612 container start 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 07 15:25:36 compute-0 podman[465573]: 2025-10-07 15:25:36.5864301 +0000 UTC m=+0.274803119 container attach 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 07 15:25:36 compute-0 ceph-mon[74295]: pgmap v3729: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:36 compute-0 nova_compute[259550]: 2025-10-07 15:25:36.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:37 compute-0 serene_swanson[465589]: --> passed data devices: 0 physical, 3 LVM
Oct 07 15:25:37 compute-0 serene_swanson[465589]: --> relative data size: 1.0
Oct 07 15:25:37 compute-0 serene_swanson[465589]: --> All data devices are unavailable
Oct 07 15:25:37 compute-0 systemd[1]: libpod-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Deactivated successfully.
Oct 07 15:25:37 compute-0 systemd[1]: libpod-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Consumed 1.029s CPU time.
Oct 07 15:25:37 compute-0 podman[465573]: 2025-10-07 15:25:37.604702177 +0000 UTC m=+1.293075156 container died 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 07 15:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7-merged.mount: Deactivated successfully.
Oct 07 15:25:37 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:37 compute-0 podman[465573]: 2025-10-07 15:25:37.657777652 +0000 UTC m=+1.346150621 container remove 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 07 15:25:37 compute-0 systemd[1]: libpod-conmon-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Deactivated successfully.
Oct 07 15:25:37 compute-0 sudo[465467]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:37 compute-0 sudo[465632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:37 compute-0 sudo[465632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:37 compute-0 sudo[465632]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:37 compute-0 sudo[465657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:25:37 compute-0 sudo[465657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:37 compute-0 sudo[465657]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:37 compute-0 sudo[465682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:37 compute-0 sudo[465682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:37 compute-0 sudo[465682]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:37 compute-0 sudo[465707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- lvm list --format json
Oct 07 15:25:37 compute-0 sudo[465707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.312116219 +0000 UTC m=+0.051969136 container create 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:25:38 compute-0 systemd[1]: Started libpod-conmon-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope.
Oct 07 15:25:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.283472927 +0000 UTC m=+0.023325924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.390543619 +0000 UTC m=+0.130396546 container init 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.403123939 +0000 UTC m=+0.142976846 container start 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.407897484 +0000 UTC m=+0.147750401 container attach 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 07 15:25:38 compute-0 intelligent_mestorf[465788]: 167 167
Oct 07 15:25:38 compute-0 systemd[1]: libpod-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope: Deactivated successfully.
Oct 07 15:25:38 compute-0 conmon[465788]: conmon 86a37bbb3a3a88d058b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope/container/memory.events
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.410070972 +0000 UTC m=+0.149923889 container died 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 07 15:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d2d22624289291d52d339ce546d3d97f99d365241b64755bfbf5411ce6cfd3-merged.mount: Deactivated successfully.
Oct 07 15:25:38 compute-0 podman[465772]: 2025-10-07 15:25:38.451759717 +0000 UTC m=+0.191612624 container remove 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 07 15:25:38 compute-0 systemd[1]: libpod-conmon-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope: Deactivated successfully.
Oct 07 15:25:38 compute-0 podman[465813]: 2025-10-07 15:25:38.614970624 +0000 UTC m=+0.043641638 container create 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 07 15:25:38 compute-0 systemd[1]: Started libpod-conmon-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope.
Oct 07 15:25:38 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:38 compute-0 podman[465813]: 2025-10-07 15:25:38.595444581 +0000 UTC m=+0.024115625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:38 compute-0 podman[465813]: 2025-10-07 15:25:38.709196699 +0000 UTC m=+0.137867743 container init 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 07 15:25:38 compute-0 podman[465813]: 2025-10-07 15:25:38.724320466 +0000 UTC m=+0.152991480 container start 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 07 15:25:38 compute-0 podman[465813]: 2025-10-07 15:25:38.728100955 +0000 UTC m=+0.156772009 container attach 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:25:38 compute-0 ceph-mon[74295]: pgmap v3730: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:39 compute-0 jolly_leakey[465831]: {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     "0": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "devices": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "/dev/loop3"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             ],
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_name": "ceph_lv0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_size": "21470642176",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "name": "ceph_lv0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "tags": {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_name": "ceph",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.crush_device_class": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.encrypted": "0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_id": "0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.vdo": "0"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             },
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "vg_name": "ceph_vg0"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         }
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     ],
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     "1": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "devices": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "/dev/loop4"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             ],
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_name": "ceph_lv1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_size": "21470642176",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "name": "ceph_lv1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "tags": {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_name": "ceph",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.crush_device_class": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.encrypted": "0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_id": "1",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.vdo": "0"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             },
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "vg_name": "ceph_vg1"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         }
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     ],
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     "2": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "devices": [
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "/dev/loop5"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             ],
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_name": "ceph_lv2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_size": "21470642176",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "name": "ceph_lv2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "tags": {
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cephx_lockbox_secret": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.cluster_name": "ceph",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.crush_device_class": "",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.encrypted": "0",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osd_id": "2",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:                 "ceph.vdo": "0"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             },
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "type": "block",
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:             "vg_name": "ceph_vg2"
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:         }
Oct 07 15:25:39 compute-0 jolly_leakey[465831]:     ]
Oct 07 15:25:39 compute-0 jolly_leakey[465831]: }
Oct 07 15:25:39 compute-0 systemd[1]: libpod-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope: Deactivated successfully.
Oct 07 15:25:39 compute-0 podman[465813]: 2025-10-07 15:25:39.513028893 +0000 UTC m=+0.941699897 container died 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46-merged.mount: Deactivated successfully.
Oct 07 15:25:39 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:39 compute-0 nova_compute[259550]: 2025-10-07 15:25:39.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:39 compute-0 podman[465813]: 2025-10-07 15:25:39.867963196 +0000 UTC m=+1.296634200 container remove 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:25:39 compute-0 systemd[1]: libpod-conmon-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope: Deactivated successfully.
Oct 07 15:25:39 compute-0 sudo[465707]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:39 compute-0 sudo[465853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:39 compute-0 sudo[465853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:39 compute-0 sudo[465853]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:39 compute-0 nova_compute[259550]: 2025-10-07 15:25:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:39 compute-0 nova_compute[259550]: 2025-10-07 15:25:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:40 compute-0 sudo[465878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 07 15:25:40 compute-0 sudo[465878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:40 compute-0 sudo[465878]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:40 compute-0 sudo[465903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:40 compute-0 sudo[465903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:40 compute-0 sudo[465903]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:40 compute-0 sudo[465928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -- raw list --format json
Oct 07 15:25:40 compute-0 sudo[465928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:40 compute-0 nova_compute[259550]: 2025-10-07 15:25:40.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.485695991 +0000 UTC m=+0.041475629 container create 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 07 15:25:40 compute-0 systemd[1]: Started libpod-conmon-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope.
Oct 07 15:25:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.467148214 +0000 UTC m=+0.022927922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.574883334 +0000 UTC m=+0.130662982 container init 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.58575458 +0000 UTC m=+0.141534218 container start 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.589499299 +0000 UTC m=+0.145278987 container attach 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 07 15:25:40 compute-0 eager_moore[466010]: 167 167
Oct 07 15:25:40 compute-0 systemd[1]: libpod-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope: Deactivated successfully.
Oct 07 15:25:40 compute-0 conmon[466010]: conmon 94cf04ee10f9d2da420f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope/container/memory.events
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.592296791 +0000 UTC m=+0.148076429 container died 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-83da07ce144da91d7ed92b32bd5d41f0e076fa4a32b4ac36996bfb94d4ac7117-merged.mount: Deactivated successfully.
Oct 07 15:25:40 compute-0 podman[465994]: 2025-10-07 15:25:40.630749722 +0000 UTC m=+0.186529370 container remove 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:25:40 compute-0 systemd[1]: libpod-conmon-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope: Deactivated successfully.
Oct 07 15:25:40 compute-0 sshd-session[466017]: Accepted publickey for zuul from 192.168.122.10 port 43064 ssh2: ECDSA SHA256:eYsf9rxr6jxX1A35M0IajnB7UFHS0tds1lEVHTSpAhk
Oct 07 15:25:40 compute-0 systemd-logind[801]: New session 60 of user zuul.
Oct 07 15:25:40 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 07 15:25:40 compute-0 sshd-session[466017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 07 15:25:40 compute-0 podman[466036]: 2025-10-07 15:25:40.810898824 +0000 UTC m=+0.045311642 container create a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:25:40 compute-0 systemd[1]: Started libpod-conmon-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope.
Oct 07 15:25:40 compute-0 systemd[1]: Started libcrun container.
Oct 07 15:25:40 compute-0 sudo[466051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 07 15:25:40 compute-0 podman[466036]: 2025-10-07 15:25:40.79438284 +0000 UTC m=+0.028795688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 07 15:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 07 15:25:40 compute-0 podman[466036]: 2025-10-07 15:25:40.901388511 +0000 UTC m=+0.135801349 container init a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 07 15:25:40 compute-0 sudo[466051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 07 15:25:40 compute-0 podman[466036]: 2025-10-07 15:25:40.911052584 +0000 UTC m=+0.145465422 container start a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 07 15:25:40 compute-0 podman[466036]: 2025-10-07 15:25:40.914595408 +0000 UTC m=+0.149008246 container attach a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 07 15:25:40 compute-0 ceph-mon[74295]: pgmap v3731: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:40 compute-0 nova_compute[259550]: 2025-10-07 15:25:40.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:40 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:41 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:42 compute-0 trusting_allen[466074]: {
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_id": 2,
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "type": "bluestore"
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     },
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_id": 1,
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "type": "bluestore"
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     },
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_id": 0,
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct 07 15:25:42 compute-0 trusting_allen[466074]:         "type": "bluestore"
Oct 07 15:25:42 compute-0 trusting_allen[466074]:     }
Oct 07 15:25:42 compute-0 trusting_allen[466074]: }
Oct 07 15:25:42 compute-0 systemd[1]: libpod-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Deactivated successfully.
Oct 07 15:25:42 compute-0 podman[466036]: 2025-10-07 15:25:42.065255922 +0000 UTC m=+1.299668770 container died a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 07 15:25:42 compute-0 systemd[1]: libpod-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Consumed 1.146s CPU time.
Oct 07 15:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc-merged.mount: Deactivated successfully.
Oct 07 15:25:42 compute-0 podman[466036]: 2025-10-07 15:25:42.131337298 +0000 UTC m=+1.365750126 container remove a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 07 15:25:42 compute-0 systemd[1]: libpod-conmon-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Deactivated successfully.
Oct 07 15:25:42 compute-0 sudo[465928]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 07 15:25:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:42 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 07 15:25:42 compute-0 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 75640a93-fd78-439b-9b41-800433bf52f5 does not exist
Oct 07 15:25:42 compute-0 ceph-mgr[74587]: [progress WARNING root] complete: ev 67949c2e-c40d-4779-9ed9-3db38a7ef880 does not exist
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.184408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742184444, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1560, "num_deletes": 251, "total_data_size": 2519049, "memory_usage": 2561728, "flush_reason": "Manual Compaction"}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742197698, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2473228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75914, "largest_seqno": 77473, "table_properties": {"data_size": 2465875, "index_size": 4359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14837, "raw_average_key_size": 19, "raw_value_size": 2451344, "raw_average_value_size": 3290, "num_data_blocks": 195, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850576, "oldest_key_time": 1759850576, "file_creation_time": 1759850742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 13326 microseconds, and 5856 cpu microseconds.
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.197732) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2473228 bytes OK
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.197748) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198943) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198957) EVENT_LOG_v1 {"time_micros": 1759850742198952, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2512270, prev total WAL file size 2512270, number of live WAL files 2.
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.199650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2415KB)], [182(9958KB)]
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742199713, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12670816, "oldest_snapshot_seqno": -1}
Oct 07 15:25:42 compute-0 sudo[466151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 07 15:25:42 compute-0 sudo[466151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:42 compute-0 sudo[466151]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9338 keys, 10949633 bytes, temperature: kUnknown
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742278223, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10949633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10890732, "index_size": 34468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 246187, "raw_average_key_size": 26, "raw_value_size": 10727547, "raw_average_value_size": 1148, "num_data_blocks": 1327, "num_entries": 9338, "num_filter_entries": 9338, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.278480) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10949633 bytes
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.280843) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 139.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(9.6) write-amplify(4.4) OK, records in: 9852, records dropped: 514 output_compression: NoCompression
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.280867) EVENT_LOG_v1 {"time_micros": 1759850742280856, "job": 114, "event": "compaction_finished", "compaction_time_micros": 78587, "compaction_time_cpu_micros": 27117, "output_level": 6, "num_output_files": 1, "total_output_size": 10949633, "num_input_records": 9852, "num_output_records": 9338, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742281495, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742283654, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.199568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 07 15:25:42 compute-0 sudo[466191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 07 15:25:42 compute-0 sudo[466191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 07 15:25:42 compute-0 sudo[466191]: pam_unix(sudo:session): session closed for user root
Oct 07 15:25:42 compute-0 ceph-mon[74295]: pgmap v3732: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:42 compute-0 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct 07 15:25:43 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23383 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:43 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:44 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:44 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 07 15:25:44 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068047090' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:25:44 compute-0 nova_compute[259550]: 2025-10-07 15:25:44.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:44 compute-0 ceph-mon[74295]: from='client.23383 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:44 compute-0 ceph-mon[74295]: pgmap v3733: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:44 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1068047090' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:25:45 compute-0 nova_compute[259550]: 2025-10-07 15:25:45.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:45 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:45 compute-0 ceph-mon[74295]: from='client.23385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:45 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:46 compute-0 ceph-mon[74295]: pgmap v3734: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:47 compute-0 ovs-vsctl[466435]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 07 15:25:47 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:48 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 07 15:25:48 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 07 15:25:48 compute-0 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 07 15:25:48 compute-0 ceph-mon[74295]: pgmap v3735: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:49 compute-0 podman[466678]: 2025-10-07 15:25:49.10673579 +0000 UTC m=+0.104578778 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 07 15:25:49 compute-0 podman[466675]: 2025-10-07 15:25:49.108453005 +0000 UTC m=+0.110240127 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 07 15:25:49 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: cache status {prefix=cache status} (starting...)
Oct 07 15:25:49 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: client ls {prefix=client ls} (starting...)
Oct 07 15:25:49 compute-0 lvm[466803]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 07 15:25:49 compute-0 lvm[466803]: VG ceph_vg2 finished
Oct 07 15:25:49 compute-0 lvm[466819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 07 15:25:49 compute-0 lvm[466819]: VG ceph_vg0 finished
Oct 07 15:25:49 compute-0 lvm[466827]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 07 15:25:49 compute-0 lvm[466827]: VG ceph_vg1 finished
Oct 07 15:25:49 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:49 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:49 compute-0 nova_compute[259550]: 2025-10-07 15:25:49.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:49 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: damage ls {prefix=damage ls} (starting...)
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump loads {prefix=dump loads} (starting...)
Oct 07 15:25:50 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23391 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:50 compute-0 nova_compute[259550]: 2025-10-07 15:25:50.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 07 15:25:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 07 15:25:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638627149' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 07 15:25:50 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:50 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:50.817+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:25:50 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 07 15:25:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 07 15:25:50 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948022299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:50 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 07 15:25:50 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:51 compute-0 ceph-mon[74295]: pgmap v3736: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:51 compute-0 ceph-mon[74295]: from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mon[74295]: from='client.23391 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3638627149' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/948022299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: ops {prefix=ops} (starting...)
Oct 07 15:25:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 07 15:25:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421570317' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 07 15:25:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776950298' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 07 15:25:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081423507' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:51 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 07 15:25:51 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735205685' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 07 15:25:51 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: session ls {prefix=session ls} (starting...)
Oct 07 15:25:52 compute-0 ceph-mon[74295]: from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1421570317' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/776950298' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4081423507' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1735205685' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: status {prefix=status} (starting...)
Oct 07 15:25:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 07 15:25:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353477631' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23411 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 07 15:25:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21338404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23415 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct 07 15:25:52 compute-0 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct 07 15:25:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 07 15:25:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274021769' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:25:52 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 07 15:25:52 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011630334' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: pgmap v3737: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3353477631' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/21338404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3274021769' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2011630334' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 07 15:25:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044100278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 07 15:25:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455579351' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 07 15:25:53 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:53 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23425 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:53 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:53.717+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 07 15:25:53 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 07 15:25:53 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 07 15:25:53 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842496646' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: from='client.23411 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: from='client.23415 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4044100278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3455579351' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1842496646' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 07 15:25:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260388552' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:54 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 07 15:25:54 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625669770' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 07 15:25:54 compute-0 nova_compute[259550]: 2025-10-07 15:25:54.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:54 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:54 compute-0 nova_compute[259550]: 2025-10-07 15:25:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.021 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.021 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:25:55 compute-0 ceph-mon[74295]: pgmap v3738: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:55 compute-0 ceph-mon[74295]: from='client.23425 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1260388552' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 07 15:25:55 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/625669770' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 07 15:25:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 07 15:25:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249979617' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:55 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6caf00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.682163239s of 33.369308472s, submitted: 9
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fe0ea5a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:54.656283+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdd48000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:55.656469+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:56.656617+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:57.656781+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:58.657075+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:52:59.657213+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:00.657333+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 45916160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:01.657529+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 45899776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:02.657675+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:03.657826+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.930875778s of 10.032606125s, submitted: 31
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:04.657994+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:05.658126+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:06.658257+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:07.658448+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:08.658635+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283410432 unmapped: 45359104 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.658766+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283951104 unmapped: 44818432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee52b000/0x0/0x4ffc00000, data 0x173ed2f/0x18c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.658958+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 44769280 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee50e000/0x0/0x4ffc00000, data 0x175bd2f/0x18e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991023 data_alloc: 218103808 data_used: 13512704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.659196+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 43827200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.659351+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 43819008 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.659490+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285999104 unmapped: 42770432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.659666+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.659857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee0e8000/0x0/0x4ffc00000, data 0x1761d2f/0x18e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1022f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992473 data_alloc: 218103808 data_used: 13512704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.660104+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.660323+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.205342293s of 13.499714851s, submitted: 106
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.660462+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.660696+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.660872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993209 data_alloc: 218103808 data_used: 13512704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.660998+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.661146+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 45596672 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fd2e45a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffdbb4a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fd29dc20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.661340+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fdf8a000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffe29860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.661505+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.661680+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.661883+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.662153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.662313+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdf8bc20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.662475+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0c960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.662617+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fdd5b2c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.148612022s of 13.547485352s, submitted: 7
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.662746+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbba40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.662884+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.663018+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.663128+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.663241+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.663379+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.663543+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.663728+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.663967+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.665148+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.665372+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.665534+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.794086456s of 11.875811577s, submitted: 5
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.665700+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.665884+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 48177152 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.666021+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289021952 unmapped: 47095808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39e000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187768 data_alloc: 218103808 data_used: 22167552
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.666144+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 47038464 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.666304+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.666455+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.666661+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.666866+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188904 data_alloc: 218103808 data_used: 22589440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.667024+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.667179+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.667435+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ff9972c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.667576+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.230875969s of 11.849612236s, submitted: 48
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.667713+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fed86f00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.667895+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990971 data_alloc: 218103808 data_used: 13225984
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.668188+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.668333+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.668501+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.668665+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.668825+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991147 data_alloc: 218103808 data_used: 13225984
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd5b0e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f501979c20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.671067+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.671226+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efab8000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.671379+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf214a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.671565+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.671761+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.672049+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.672274+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.672496+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.672726+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.672872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.673074+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.673191+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.673327+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.673536+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.673675+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.673872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.674088+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.674252+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.674447+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.674611+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.674769+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.674980+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.675203+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.675343+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.675538+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.675759+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.676013+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.676191+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.676354+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283795456 unmapped: 52322304 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.676541+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.676695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.676872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.062763214s of 38.424762726s, submitted: 20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 49577984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd29c960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca9860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f501978780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.677002+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb4a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd2e4d20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.677140+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.677349+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987362 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0d0e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.677599+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0dc20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.677795+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.678007+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff35b800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4ff9e8780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa9e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.678221+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.678338+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992039 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.678463+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 51789824 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.678610+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.679176+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.679386+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.680558+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.680769+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.680922+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.681708+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.682355+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.682833+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.683000+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.940246582s of 19.882999420s, submitted: 27
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.683162+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 49446912 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.683496+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 48250880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.683812+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.684022+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.684447+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.684706+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.685038+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.685281+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.685722+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.686096+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.686337+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.686533+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.686709+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.686904+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.687157+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.687417+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.687616+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.687853+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.688000+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.688121+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.688260+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.688421+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.688555+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.688715+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.689095+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.689220+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.689366+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.689503+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.689678+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111081 data_alloc: 218103808 data_used: 20406272
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.689926+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.229282379s of 29.198295593s, submitted: 69
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed67e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286769152 unmapped: 49348608 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.690119+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.690292+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.690545+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.690740+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.691063+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.691192+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.691413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.691570+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.691786+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.692020+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.692189+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.692454+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.692619+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.692815+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.692984+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.693131+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.693273+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.693413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.693619+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.693845+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.694007+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.694176+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.694348+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.694526+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.694709+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9f800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9f800 session 0x55f4ff9e9680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fe026780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca81e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff9974a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.315746307s of 25.564805984s, submitted: 37
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.695079+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.695217+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca85a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdb38c00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdb38c00 session 0x55f4ffe29e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fed55860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca9680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedac1e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.695335+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.695477+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.695634+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.695789+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.695958+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fe026780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.696298+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.696449+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.696621+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.696811+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.696973+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.697123+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.697284+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.697410+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.697551+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.697716+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.697922+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.698078+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.698343+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 49307648 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.698490+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.177154541s of 20.592260361s, submitted: 12
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.698621+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.698803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee22c000/0x0/0x4ffc00000, data 0x18dcd3f/0x1a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.699016+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028855 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.699190+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.700132+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.700315+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee227000/0x0/0x4ffc00000, data 0x18e0d3f/0x1a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.700481+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.700698+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.700985+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.701164+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.701359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.701550+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.701732+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.701902+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.702067+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.702213+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa6f000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6f000 session 0x55f4ff98b680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa85a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed663c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.534084320s of 16.210994720s, submitted: 47
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 49561600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.702325+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0cf00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe29860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdd5b0e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff996d20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf4ba40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.702497+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.702673+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.702799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.702973+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.703096+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.703224+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.703359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.703485+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd112c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf95400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.703640+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.703774+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.703949+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.704128+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.704756+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.704982+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.705175+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.705329+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.705504+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.705640+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.705772+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.705915+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.706097+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285548544 unmapped: 50569216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.206413269s of 21.587003708s, submitted: 31
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130179 data_alloc: 218103808 data_used: 17604608
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.706317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 47251456 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec7db000/0x0/0x4ffc00000, data 0x218dd3f/0x2313000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.706525+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.706666+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.706868+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.707028+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3148909 data_alloc: 218103808 data_used: 18034688
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.707200+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.707336+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289267712 unmapped: 46850048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd5be00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe017c20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.707571+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289275904 unmapped: 46841856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.707824+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.707996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.451373100s of 10.043084145s, submitted: 88
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038639 data_alloc: 218103808 data_used: 14909440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.708189+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6ca3c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed05e000/0x0/0x4ffc00000, data 0x190ad3f/0x1a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.708741+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.708999+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.709138+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9e9680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fecd3e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fccd1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.709252+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.709386+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf51e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.710059+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.710186+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.710498+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.710867+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.712236+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.712375+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.712594+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.712831+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.713155+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.713788+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.714112+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.714297+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.714601+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.714910+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.715238+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.715463+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.715622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.716024+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.716209+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.716365+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.716562+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.716834+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.716990+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.709793091s of 29.150884628s, submitted: 12
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.717163+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 38690816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028468 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.717389+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289628160 unmapped: 46489600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd5a3c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fef4b2c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.717545+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe0161e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff997a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.717689+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.717844+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.718023+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028484 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.718180+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd403860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.718332+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019781e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa0c3c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.718490+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.718639+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.679104328s of 10.049762726s, submitted: 34
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecd34a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.718743+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031495 data_alloc: 218103808 data_used: 13090816
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.718919+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.719070+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290816000 unmapped: 45301760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.719182+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.719368+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.719525+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.722796+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.722901+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.723058+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.723155+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.723307+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.723522+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.723669+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.181274414s of 13.181275368s, submitted: 0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.723802+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 41181184 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.723993+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.724131+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176615 data_alloc: 218103808 data_used: 21319680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.724293+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.724441+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3bf000/0x0/0x4ffc00000, data 0x25a7dc4/0x272f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.724613+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.724991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.725170+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b9000/0x0/0x4ffc00000, data 0x25addc4/0x2735000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.725449+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.726185+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.726802+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.727247+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.727387+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.727641+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.727826+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.727975+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f502877c00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877c00 session 0x55f4fdf8a1e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd3210e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe024960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa01860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.366380692s of 16.474279404s, submitted: 92
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.728503+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 40181760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fcf201e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9e8d20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9970e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.729139+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a5a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.729696+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.729865+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.730227+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.730810+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.731034+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.731167+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd11e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.732165+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.732286+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.732399+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.732577+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.732818+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.733002+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.733263+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.733409+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.733552+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.733786+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.733899+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.734005+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.734162+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.447574615s of 20.827314377s, submitted: 10
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.734276+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.734451+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 218103808 data_used: 22757376
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.734661+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.734849+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.734977+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.735121+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.735327+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291167 data_alloc: 218103808 data_used: 22753280
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.735689+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff98b2c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffdba000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.735806+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecc2960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.735961+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.736051+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.736249+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191033 data_alloc: 218103808 data_used: 21991424
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.736578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200098038s of 12.547958374s, submitted: 61
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffa0d4a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fecdb4a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fcccf400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.736764+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa0c780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.736973+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.737133+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.737282+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.738023+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.738164+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.738363+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.738568+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.738808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.739007+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.739159+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.739320+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.739513+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.739657+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.739808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.739963+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.740115+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.740253+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.740403+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.740547+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.740681+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.741096+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.741261+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.741418+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.741603+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.741793+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.742017+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.742161+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.742393+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.742593+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.742799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.743002+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.743211+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.743431+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.743599+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.743751+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.669296265s of 35.925739288s, submitted: 28
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,1,2])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297992192 unmapped: 42328064 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fcf4a000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9965a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed66f00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffe28960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe0241e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.743888+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.744131+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.744358+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053060 data_alloc: 218103808 data_used: 13086720
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.744578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe28780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.744732+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe294a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fe016b40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.744857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed87a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.744981+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.745161+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058801 data_alloc: 218103808 data_used: 13697024
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.745292+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.745402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.745517+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.745665+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.745858+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.746064+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.746279+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.746434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.746578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.746712+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.079471588s of 18.551073074s, submitted: 14
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.746855+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 45481984 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.746985+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 45449216 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.748241+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.748787+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec27d000/0x0/0x4ffc00000, data 0x26ecd2f/0x2871000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.749096+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3220233 data_alloc: 218103808 data_used: 22417408
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.749254+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.749783+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.750306+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.750784+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.751167+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227157 data_alloc: 218103808 data_used: 22556672
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.751321+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.751660+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.110246658s of 11.604579926s, submitted: 73
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.751803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.752084+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.752513+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.752663+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.753122+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.753368+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.753607+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.753973+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.754088+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.754327+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.754603+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec218000/0x0/0x4ffc00000, data 0x2751d2f/0x28d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.754739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.151399612s of 12.149919510s, submitted: 4
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.761818+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229065 data_alloc: 218103808 data_used: 22589440
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.762028+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f500c4ac00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4fcf51c20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.762315+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec207000/0x0/0x4ffc00000, data 0x2762d2f/0x28e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.762542+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5ab40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdfa74a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe285a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fcf4be00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.762710+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eba29000/0x0/0x4ffc00000, data 0x2f3d90e/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 286 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ffe28b40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.762843+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 286 heartbeat osd_stat(store_statfs(0x4eb1c3000/0x0/0x4ffc00000, data 0x37a490e/0x392b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421390 data_alloc: 234881024 data_used: 35491840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 39534592 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.762996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4fef4af00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.763173+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.763324+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.763545+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 39510016 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.763885+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425380 data_alloc: 234881024 data_used: 35491840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.764063+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.764372+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.764576+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.261311531s of 13.822600365s, submitted: 64
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbb680
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbbe00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd10d20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffdbb0e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ff0eb2c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305127424 unmapped: 42328064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.764771+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b9000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.764951+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.765134+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adadb/0x3938000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.765364+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.765573+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.765755+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdfa90e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbaf00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.766007+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd5b4a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.766132+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feefa1e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.766300+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b4000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305176576 unmapped: 42278912 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.766397+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.766604+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.766756+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.766959+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.767100+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.767254+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.767435+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.767648+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.767787+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.767900+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.768836+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.982900620s of 20.121829987s, submitted: 17
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 37863424 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.768990+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.769174+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3489675 data_alloc: 251658240 data_used: 43794432
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.769259+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.769359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.769525+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0fa000/0x0/0x4ffc00000, data 0x3868aeb/0x39f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.769751+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.769970+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.770120+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.770336+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.770479+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.770597+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.770728+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.770887+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.771115+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.771274+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.771523+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310722560 unmapped: 36732928 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.973714828s of 16.078048706s, submitted: 14
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.771722+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497747 data_alloc: 251658240 data_used: 44195840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.771887+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.772066+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.772213+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.772489+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.772628+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3498067 data_alloc: 251658240 data_used: 44204032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.772775+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.772920+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.773113+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.773310+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.773571+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3505747 data_alloc: 251658240 data_used: 45961216
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.773720+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.774014+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdfa85a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843559265s of 12.874654770s, submitted: 3
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.774180+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311279616 unmapped: 36175872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fedac1e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9a000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ff9e9c20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe026f00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fec93000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.774374+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 326180864 unmapped: 21274624 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4fec93000 session 0x55f4ffa012c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffb9a000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 290 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ffa0c780
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbacc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.774530+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 31670272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4feefa5a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x57f0239/0x597a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768224 data_alloc: 251658240 data_used: 52424704
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.774763+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 31563776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbae000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbae000 session 0x55f4feca94a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb1000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x57ecdd2/0x597c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4feca9a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4feca8960
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.775020+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320143360 unmapped: 31514624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.775137+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fd320d20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.775327+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ff9961e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbab40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbba400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbba400 session 0x55f4fcf4b860
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.775570+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3534457 data_alloc: 251658240 data_used: 51228672
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.775735+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.775870+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ff358c00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.776025+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.539574623s of 10.580464363s, submitted: 128
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ff358c00 session 0x55f501979e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.776288+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.776428+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ff997e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fe0161e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328352 data_alloc: 234881024 data_used: 34619392
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.776536+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 294 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffe29a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.776753+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.776919+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.777132+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.777376+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060980 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.777543+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.777760+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.778032+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.778230+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.101095200s of 11.366854668s, submitted: 87
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.778427+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.778575+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.778712+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.778880+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.779097+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.779346+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.779521+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.779717+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.779899+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.780133+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.780345+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.780548+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.780769+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.780957+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.781137+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.781951+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.782086+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.782313+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.782446+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.782665+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.782998+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.783215+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.783411+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.783667+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.783921+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.784288+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.784469+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.784644+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.784847+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.785020+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s
                                           Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.785235+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.785466+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _finish_auth 0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.786025+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.785698+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.785860+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.786103+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.786385+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdd48c00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:25:55 compute-0 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: get_auth_request con 0x55f4ff358c00 auth_method 0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.786565+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.786702+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.798728943s of 37.813915253s, submitted: 13
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4af00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbb8000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feca81e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fdfa65a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffa3a400 session 0x55f501979c20
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.786843+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.787003+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed055000/0x0/0x4ffc00000, data 0x1906a54/0x1a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fed870e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.787178+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbba400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125660 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.787399+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 54722560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.787576+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.787708+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbba400 session 0x55f501979a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f5049ac800 session 0x55f4ffa0d0e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.788069+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ff997e00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.788283+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.788467+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.788624+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.788808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.789090+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.789271+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.789500+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.789695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.789893+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.790316+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.790729+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.791037+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.791272+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.791474+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.791852+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.792045+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.792265+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.792575+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.792826+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.793095+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.793394+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.793570+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.793790+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.794017+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.794178+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.794315+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.794506+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.794695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.794918+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.795163+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.795426+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.795687+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.795871+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.796030+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.796241+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.796371+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.796521+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.796707+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.796836+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.796989+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.797148+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.797295+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.797433+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.797580+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.797761+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.797915+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.798085+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.798221+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.798441+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.798664+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 07 15:25:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718127271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:25:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:25:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676956947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.798848+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.799015+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.547794342s of 59.170566559s, submitted: 55
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.799247+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.799394+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 54435840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.799529+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.799698+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed36b000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.799827+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074277 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 296 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0de00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.799977+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.800236+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.800519+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.800705+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.800873+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072989 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.801013+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.801188+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.801374+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.538488388s of 13.065173149s, submitted: 139
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.801559+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.801760+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.801869+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.802042+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.802230+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.802471+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.802680+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.802869+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.803059+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.803223+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.803439+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.803581+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.803797+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.803998+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.804203+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.804401+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.804587+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.804763+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.804911+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.805249+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.805518+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.805697+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.805901+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.806242+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.806419+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.806593+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.806812+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.807014+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.807175+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.807318+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.807454+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.807611+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.807849+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.808016+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.808222+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.808553+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.808819+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.809024+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.809195+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.809412+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.809591+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.809739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.809918+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.810141+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.810279+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.810456+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.810625+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.810733+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.810882+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.811039+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.811153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.811284+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.811424+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.811575+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.811755+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.811958+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.812110+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.812262+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.812434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.812608+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.812763+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.812910+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.813086+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.813287+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.813479+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.813690+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.813857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.814047+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.814210+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.814389+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.814578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.814733+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.814867+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.815160+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.815473+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.815878+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.816054+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.816394+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.816630+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.816815+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.817052+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.817227+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.817377+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.817685+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.817911+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.818209+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.818371+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.818587+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.818827+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.819071+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.819206+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.819422+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.819612+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.819793+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.820015+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.820290+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.820485+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.820655+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.820853+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.821066+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.821194+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.821381+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.821527+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.821734+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.821999+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.822259+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa83c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 55164928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.822480+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.822649+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.822813+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.823030+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.823193+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.823409+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.823584+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.823758+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffbbb000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.802856445s of 113.915954590s, submitted: 17
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.824099+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 298 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fe0254a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.824278+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.824449+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004649 data_alloc: 218103808 data_used: 5570560
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.824607+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edb62000/0x0/0x4ffc00000, data 0x9e5c13/0xb7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.824720+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 299 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffa01a40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.824849+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.824979+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.825106+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.825234+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.825439+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.825630+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.825828+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.825979+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.533 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.826196+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.826331+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.826506+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.293596268s of 16.493015289s, submitted: 58
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.826652+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee35a000/0x0/0x4ffc00000, data 0x1eacc6/0x383000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdf8ba40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.826830+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.826984+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.827199+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.827343+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.827522+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.827683+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.828098+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.829112+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.829701+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.830040+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.830747+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.831839+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.832360+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.832693+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.832866+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.833099+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.833299+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.833669+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.834015+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.834205+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.834455+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.834826+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.835058+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.835414+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.835609+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.836096+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.836232+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.836615+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.836808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.837038+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.837282+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.837457+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.837660+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.837830+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.838004+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.838163+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.838397+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.838708+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.839102+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.839330+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.839563+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.840047+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.840261+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.840719+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.840996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.841308+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.841597+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.841797+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.841997+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.842210+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.842366+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.842602+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.842762+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.843014+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.843165+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.843292+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.843435+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.843588+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.843757+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.843877+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.844041+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288022528 unmapped: 63635456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.844226+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.844461+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.844628+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.844837+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.845089+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.845329+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.845501+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.845734+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.846016+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.846222+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.846414+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.846793+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.847025+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.847334+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.847722+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.848049+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.848288+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.848502+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.848683+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.848991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.849213+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.849491+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.849687+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.850072+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.850317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.850535+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.850725+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.851014+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.851130+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.851402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.851580+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.851765+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.852025+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.852234+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.852402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.852623+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.852912+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.853286+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.853542+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.853704+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.854011+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.854210+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.854371+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.854640+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 105.418647766s of 105.483337402s, submitted: 31
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.854852+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 303 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa8b40
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288104448 unmapped: 63553536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.855025+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005561 data_alloc: 218103808 data_used: 147456
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.855188+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 ms_handle_reset con 0x55f5049ac800 session 0x55f4ff996000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.855380+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.855643+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.855808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.855990+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.856142+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.856317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.856587+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.856795+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.856995+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.857186+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.857349+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.857560+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.858551+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.858814+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.858977+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.859226+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.859439+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.859592+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.859771+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.859946+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.860097+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.860260+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.860426+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.860610+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.860820+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.861013+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.861251+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5039db400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.854793549s of 29.929613113s, submitted: 9
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.861476+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 305 ms_handle_reset con 0x55f5039db400 session 0x55f4fcf4a3c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b41/0xb91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.861662+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.861828+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.862019+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.862234+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.862397+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.862536+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.862679+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.862898+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.863162+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.863313+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.863483+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.863653+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288210944 unmapped: 63447040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.864467+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.864676+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.864851+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.864975+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.865141+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.865329+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.865528+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.865696+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.865955+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.866201+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.866371+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.866569+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.866773+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.867005+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.867194+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.867428+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.867615+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.867813+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.868044+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.868203+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.868387+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.868562+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.868778+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.868991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288260096 unmapped: 63397888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.869142+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.869295+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.869535+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.869741+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.869894+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.870038+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.870180+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.870354+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.870543+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.870815+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.871032+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.871298+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.871636+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.871889+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.872119+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.872332+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.872493+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 63356928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.872702+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.872848+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.873006+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.873141+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.873275+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.873497+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.873637+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 63340544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.873839+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.873991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.874231+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.874408+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.875012+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.875207+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.875565+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.875842+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.876104+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.876287+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.876574+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.876805+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.877097+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.877285+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.877672+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.878075+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288350208 unmapped: 63307776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.878257+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.878492+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.878755+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.878994+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.879263+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.879585+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.879821+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.879969+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.880153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.880330+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.880485+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.880656+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.880914+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.881222+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.881413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.881649+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.881862+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.882116+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288407552 unmapped: 63250432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.882317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.882510+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.882693+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.882910+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.883201+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.883383+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.883534+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288423936 unmapped: 63234048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.883729+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.884019+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.884216+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.884379+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.884589+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.884831+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.885134+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.885375+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.885509+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.885659+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.885818+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.885974+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.886130+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.886335+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.886486+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.886678+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288456704 unmapped: 63201280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.886898+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.889703+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.890607+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.890857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.891061+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.891191+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.891359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.891876+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.892080+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.892284+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.892496+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.892681+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.892846+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.893035+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.893170+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.893322+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288497664 unmapped: 63160320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.893468+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.893659+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.893882+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.894027+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.894173+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.894377+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.894519+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.894686+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.894868+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.895018+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.895187+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.895349+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.895524+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.895685+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.895902+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.896212+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.896389+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288530432 unmapped: 63127552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.896577+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.896739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.896972+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.897207+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.897387+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.897622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.897862+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.898074+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.898344+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.898560+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.898735+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.899262+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.900226+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.900669+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.900880+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.901335+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.902077+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.902310+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.902667+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.902847+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.903020+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.903317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.903622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.903774+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.904024+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.904198+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.904450+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.904661+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.904917+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.905223+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.905516+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.905713+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.905951+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.906184+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.906469+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.906742+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.906946+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.907176+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.907480+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.907669+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.907888+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.908061+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.908288+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.908401+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.908578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.908714+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.908893+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.909018+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.909184+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.909318+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.909442+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.909645+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.909871+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.909991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.910123+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.910291+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288661504 unmapped: 62996480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.910446+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288669696 unmapped: 62988288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.910605+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.910803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.911004+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.911184+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.911351+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.911516+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288686080 unmapped: 62971904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.911706+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.911872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.912041+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.912226+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.912403+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.912589+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.912724+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.913041+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288702464 unmapped: 62955520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.913206+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.913384+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.913564+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.913725+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.913880+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.914013+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.914228+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.914524+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.914828+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.915051+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.915348+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.915556+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.915738+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.915967+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.916232+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.916457+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.916685+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.916974+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.917238+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.917434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.917622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.917808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.918026+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.918184+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.918402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.918578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.918808+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.919030+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.919194+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.919374+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.919561+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.919695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.919860+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.920084+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.920259+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.920406+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.920584+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.920774+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.920904+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.921000+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.921149+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.921286+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.921399+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.921555+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.921691+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.921792+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.922055+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.922247+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.922381+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.922592+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.922744+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.922922+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.923160+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.923363+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.923523+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.923852+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.924031+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.924206+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.924351+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.924501+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.924680+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.924829+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.925007+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288833536 unmapped: 62824448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.925153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.925326+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.925502+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.925640+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.925799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.926032+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.926204+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.926344+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.926497+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 62791680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.926613+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.926768+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.926994+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.927153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.927336+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.927713+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.928017+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.928385+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.928517+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.928709+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.928868+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.929100+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.929257+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.929434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.929597+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.929797+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.930041+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.930236+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.930392+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.930584+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.930761+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.930919+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.931136+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288915456 unmapped: 62742528 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.931291+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.931441+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.931613+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.931734+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 320.509613037s of 321.359313965s, submitted: 63
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.931901+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288980992 unmapped: 62676992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.932123+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,1])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 65208320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.932431+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.932597+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.933120+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.933394+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.933579+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.933836+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.934070+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.934242+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.934419+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.934532+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.934684+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.934805+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.934949+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.935089+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.935228+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.935423+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.935643+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.935828+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.936031+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.936196+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.936365+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.936628+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.936787+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.936974+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.937121+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.937258+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.937448+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.937760+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.937887+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.938021+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.938195+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.938419+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.938572+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.938730+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.938905+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.939165+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.939412+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.939565+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.939706+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.939857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.940012+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.940167+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.940321+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.940518+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.940721+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.940957+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.941182+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.941363+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.941546+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.941699+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.941838+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.942010+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.942151+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.942299+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.942443+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.942623+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.942817+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.943015+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.943214+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.943373+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.943622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.943842+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.944080+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.944249+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.944434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.944570+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.944739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.944863+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.944996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.945146+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.945251+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.945359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.945473+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.945626+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.945772+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 65093632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.945890+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.946072+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.946205+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.946340+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.946491+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.946626+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.946810+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.946984+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.947135+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.947279+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.947416+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.947634+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.947773+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.947910+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.948091+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.948227+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.948379+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.948517+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.948651+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.948760+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.948897+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fd23fc00
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 97.341293335s of 97.832099915s, submitted: 90
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 307 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fcf210e0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.949365+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.949558+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4fdf94000
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.949730+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.949895+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 308 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed552c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965964 data_alloc: 218103808 data_used: 172032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.950131+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.950374+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee346000/0x0/0x4ffc00000, data 0x1f6cf0/0x397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.950555+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.950713+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f4ffa3a400
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.951019+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 66502656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3055603 data_alloc: 218103808 data_used: 172032
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.951172+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.332621574s of 10.034673691s, submitted: 57
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d4000/0x0/0x4ffc00000, data 0xe6876f/0x100a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,2])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 310 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fed545a0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.951383+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.951653+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.951799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.952069+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059713 data_alloc: 218103808 data_used: 184320
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.952333+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d0000/0x0/0x4ffc00000, data 0xe6a308/0x100d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.952521+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.952708+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.952899+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.953075+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.953259+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.953460+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.953651+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.953910+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.954136+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.954282+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.954406+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.954523+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.954692+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.954860+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285212672 unmapped: 66445312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.954989+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.955168+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.955310+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.955561+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.956478+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.956729+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.957558+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.958261+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.959902+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.961995+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.962152+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.962588+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.963022+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.963243+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.964586+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.964751+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.964917+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.965121+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.965268+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.965413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.965546+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.965880+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.966177+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.966317+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.966458+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.966613+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.966751+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.966971+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.967261+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.967418+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.967573+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.967733+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: handle_auth_request added challenge on 0x55f5049ac800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.395416260s of 50.752197266s, submitted: 15
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.967875+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _renew_subs
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.968038+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed6ca000/0x0/0x4ffc00000, data 0xe6d93c/0x1013000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.968170+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 312 ms_handle_reset con 0x55f5049ac800 session 0x55f4fd3212c0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.968359+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.968487+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.968639+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.968832+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.969067+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.969211+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.969407+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.969553+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898818970s of 11.465106010s, submitted: 39
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.969642+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.969747+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.969898+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.970108+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.970251+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.970394+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.970531+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.970688+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.970882+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.971082+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 66330624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.971248+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.971414+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.971578+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.971755+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.971895+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.972057+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.972307+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.972492+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.972674+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.972819+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.972983+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.973142+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.973338+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.973564+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.973834+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.974037+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.974262+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285368320 unmapped: 66289664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.974481+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.974738+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.974991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.975165+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.975303+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.975498+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.975677+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.975861+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.976020+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.976155+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.976287+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.976488+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.976652+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.976776+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.976893+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.977100+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.977230+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.977386+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.977583+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.977716+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.977865+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.978506+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.978656+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.978799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.978965+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.979424+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.979845+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.979996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.980156+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.980282+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.980462+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285425664 unmapped: 66232320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.980650+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.980810+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.981022+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.981174+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.981307+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.981423+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.981571+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.981690+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.981803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285515776 unmapped: 66142208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.981947+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284499968 unmapped: 67158016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.982099+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.982224+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf dump' '{prefix=perf dump}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:59.982349+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf schema' '{prefix=perf schema}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:00.982496+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:01.982622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:02.982749+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:03.982878+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:04.982992+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:05.983129+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:06.983252+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:07.983463+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:08.983590+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:09.983739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:10.983923+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:11.984127+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:12.984288+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:13.984435+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:14.984642+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:15.984838+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:16.984994+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:17.985409+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:18.985535+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:19.985663+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:20.985859+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:21.986064+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:22.986227+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:23.986402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:24.986603+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:25.986980+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284049408 unmapped: 67608576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:26.987233+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284049408 unmapped: 67608576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:27.987472+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:28.987625+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:29.987794+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:30.987986+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:31.988104+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:32.988238+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:33.988346+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:34.988488+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:35.988639+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:36.988746+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:37.988913+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:38.989092+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:39.989309+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:40.989442+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:41.989591+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:42.989773+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:43.990063+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284090368 unmapped: 67567616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:44.990201+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:45.990340+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:46.990470+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:47.990651+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:48.990794+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:49.990900+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:50.991016+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284106752 unmapped: 67551232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:51.991164+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:52.991289+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:53.991406+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:54.991530+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:55.991675+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:56.991796+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:57.991985+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:58.992159+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:59.992341+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:00.992521+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284131328 unmapped: 67526656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:01.992722+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284131328 unmapped: 67526656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:02.992922+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284139520 unmapped: 67518464 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:03.993128+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:04.993266+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:05.993419+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:06.993657+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:07.993877+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:08.994067+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:09.994268+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:10.994413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:11.994553+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:12.994701+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:13.994837+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:14.995030+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:15.995214+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284164096 unmapped: 67493888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:16.995412+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:17.995623+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:18.995803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:19.995974+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:20.996139+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:21.996460+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:22.996652+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:23.996870+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:24.997053+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:25.997229+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:26.997518+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:27.997787+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:28.997991+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:29.998295+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:30.998576+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:31.998749+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:32.998951+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:33.999145+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:34.999332+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:35.999980+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:37.000257+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:38.000533+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:39.000740+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:40.000978+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:41.001184+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:42.001429+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:43.001726+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:44.001996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:45.002211+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:46.002464+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:47.002743+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:48.003021+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:49.003211+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:50.003425+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:51.003655+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:52.003918+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:53.004239+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:54.004501+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:55.004737+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:56.004985+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:57.005252+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:58.006180+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:59.006414+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:00.006657+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:01.007028+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:02.007265+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:03.007470+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:04.007660+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:05.007879+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:06.008118+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:07.008324+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:08.008562+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:09.008792+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:10.009057+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:11.009257+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:12.009474+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:13.009659+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:14.009885+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:15.010130+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:16.010399+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:17.010656+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:18.010864+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:19.011009+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284303360 unmapped: 67354624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:20.011174+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:21.011341+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:22.011470+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:23.011736+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:24.011979+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:25.012194+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:26.012460+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:27.012684+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:28.012917+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:29.013080+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:30.013223+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:31.013431+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:32.013645+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:33.013832+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:34.013978+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:35.014118+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284336128 unmapped: 67321856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:36.014421+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284336128 unmapped: 67321856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:37.014594+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:38.014807+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:39.014959+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:40.015127+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:41.015281+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:42.015485+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:43.015673+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:44.015792+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:45.015918+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:46.016143+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:47.016283+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:48.016437+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:49.016622+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:50.016884+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284368896 unmapped: 67289088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:51.017081+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284368896 unmapped: 67289088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:52.017283+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284377088 unmapped: 67280896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:53.017442+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:54.017609+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:55.017797+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:56.018027+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:57.018222+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:58.018400+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:59.018566+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:00.018740+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:01.018901+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:02.019093+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:03.019270+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:04.019427+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:05.019591+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:06.019767+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:07.020043+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284418048 unmapped: 67239936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:08.020253+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:09.020380+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:10.020570+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:11.020757+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:12.020912+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:13.021158+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:14.021358+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:15.021553+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:16.021692+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:17.021858+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:18.022050+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:19.022200+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:20.022504+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:21.022648+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:22.022832+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:23.022994+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:24.023128+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:25.023308+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:26.023473+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:27.023620+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:28.023898+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:29.024128+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:30.024273+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:31.024418+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:32.024580+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:33.024739+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:34.024878+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:35.025033+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:36.025177+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:37.025346+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:38.025511+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:39.025666+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:40.025831+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284483584 unmapped: 67174400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:41.026022+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284483584 unmapped: 67174400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:42.026201+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:43.026458+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:44.026737+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:45.027051+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:46.027305+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:47.027500+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:48.027750+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:49.027919+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:50.028151+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:51.028291+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:52.028493+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:53.028692+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:54.028872+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:55.029091+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:56.029297+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:57.029535+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:58.029818+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:59.030017+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:00.030218+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:01.030413+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:02.030639+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:03.030818+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:04.030998+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:05.031186+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:06.031424+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:07.031643+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:08.031845+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:09.032074+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:10.032279+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:11.032423+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284549120 unmapped: 67108864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:12.032656+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:13.032810+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:14.032999+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:15.033179+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:16.033348+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:17.033583+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:18.033846+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:19.034068+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:20.034328+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:21.034542+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:22.034688+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:23.034895+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:24.035095+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:25.035366+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:26.035577+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 67067904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:27.035799+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 67067904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:28.036036+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:29.036247+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:30.036423+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:31.036632+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:32.036832+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:33.037004+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:34.037173+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:35.037362+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:36.037524+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:37.037699+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:38.037881+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:39.038129+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:40.038305+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:41.038563+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:42.038755+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:43.039056+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:44.039345+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:45.039508+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:46.039725+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:47.039853+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:48.040062+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:49.040248+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:50.040402+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284647424 unmapped: 67010560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:51.040530+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284647424 unmapped: 67010560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:52.040711+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.78 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 385 writes, 982 keys, 385 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 385 writes, 171 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:53.040857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:54.041014+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:55.041161+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:56.041976+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:57.042134+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:58.042375+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:59.042530+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:00.042663+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:01.042843+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:02.042996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:03.043143+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:04.043321+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:05.043482+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:06.043672+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:07.043882+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 66977792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:08.044100+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:09.044328+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:10.044493+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:11.044652+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:12.044833+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:13.044979+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:14.045150+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:15.045336+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284696576 unmapped: 66961408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:16.045490+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:17.045742+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:18.045980+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:19.046110+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:20.046361+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:21.046539+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:22.046779+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:23.046988+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:24.047167+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:25.047306+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:26.047466+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:27.047674+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:28.047867+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:29.048048+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:30.048245+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:31.048409+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:32.048586+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:33.048735+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:34.048877+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:35.049062+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:36.049207+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:37.049410+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:38.049589+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:39.049759+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284737536 unmapped: 66920448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:40.049893+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284737536 unmapped: 66920448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:41.050037+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284745728 unmapped: 66912256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:42.050209+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284753920 unmapped: 66904064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:43.050405+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284753920 unmapped: 66904064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:44.050547+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:45.050721+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:46.050850+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:47.050983+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:48.051187+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:49.051321+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:50.051473+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:51.051600+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:52.051821+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:53.051974+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:54.052120+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:55.052285+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:56.052476+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:57.052606+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:58.052775+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:59.052904+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 429.513305664s of 430.105316162s, submitted: 13
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:00.053040+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284811264 unmapped: 66846720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:01.053206+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284811264 unmapped: 66846720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:02.053324+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284827648 unmapped: 66830336 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:03.053462+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:04.053568+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:05.053697+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:06.053823+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:07.053984+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:08.054167+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:09.054272+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:10.054444+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:11.054614+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:12.054838+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:13.055031+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:14.055243+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:15.055404+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:16.055644+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:17.055915+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:18.056202+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:19.056384+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:20.056540+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:21.056695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:22.056843+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:23.057023+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:24.057174+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:25.057302+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:26.057448+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:27.057569+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:28.057737+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:29.057876+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:30.058004+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:31.058139+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:32.058292+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:33.058433+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:34.058574+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:35.058719+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:36.058843+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:37.058996+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284884992 unmapped: 66772992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:38.059208+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:39.059360+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:40.059542+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:41.059710+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:42.059857+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:43.060153+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:44.060331+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:45.060911+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:46.061121+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:47.061992+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:48.062521+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:49.062754+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:50.063400+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:51.063720+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:52.063990+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:53.064545+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:54.065018+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:55.065354+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:56.065541+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:57.065982+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:58.066350+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:59.066514+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284934144 unmapped: 66723840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:00.066724+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284934144 unmapped: 66723840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:01.066965+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:02.067142+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:03.067291+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:04.067564+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:05.067830+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:06.068126+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:07.068379+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:08.068695+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 66707456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:09.069005+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:10.069289+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:11.069559+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:12.069810+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:13.070038+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:14.070251+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:15.070484+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:16.070729+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:17.070924+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:18.071881+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:19.072904+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:20.073866+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:21.074383+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:22.074737+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:23.075743+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:24.076434+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:25.077008+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:26.077349+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:27.077784+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:28.078245+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:29.078458+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:30.078670+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:31.078908+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:32.079414+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:33.079738+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:34.080180+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:35.080571+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:36.081062+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:37.081266+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:38.081486+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:39.081680+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:40.081979+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:41.082138+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:42.082300+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:43.082440+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:44.082594+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:45.082809+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:46.083068+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:47.083239+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:48.083389+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:49.083617+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:50.083772+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:51.084051+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:52.084252+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:53.084429+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:54.084569+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:55.084803+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 66617344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:56.084983+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:57.085111+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:58.085298+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:59.085473+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:00.085681+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:01.085784+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:02.085908+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:03.086064+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285065216 unmapped: 66592768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:04.086191+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:05.086350+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:06.086489+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:07.086620+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:08.086863+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:09.087026+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:10.087158+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:11.087312+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:12.087459+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:13.087628+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:14.087766+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:15.088027+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285089792 unmapped: 66568192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:16.088426+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:17.088652+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:18.088868+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:19.089030+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:20.089148+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:21.089268+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:22.089393+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:23.089527+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285360128 unmapped: 66297856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:55 compute-0 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:55 compute-0 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:24.089703+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: tick
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_tickets
Oct 07 15:25:55 compute-0 ceph-osd[90092]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:25.089846+0000)
Oct 07 15:25:55 compute-0 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:25:55 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:25:55 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23447 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.862 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 07 15:25:55 compute-0 nova_compute[259550]: 2025-10-07 15:25:55.952 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 07 15:25:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 07 15:25:55 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518768742' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:25:55 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.23431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.23433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1249979617' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/718127271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/676956947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2518768742' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23451 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 07 15:25:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667570897' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 07 15:25:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137672803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:25:56 compute-0 nova_compute[259550]: 2025-10-07 15:25:56.425 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 07 15:25:56 compute-0 nova_compute[259550]: 2025-10-07 15:25:56.429 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 07 15:25:56 compute-0 nova_compute[259550]: 2025-10-07 15:25:56.456 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 07 15:25:56 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:56 compute-0 nova_compute[259550]: 2025-10-07 15:25:56.458 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 07 15:25:56 compute-0 nova_compute[259550]: 2025-10-07 15:25:56.458 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:25:56 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 07 15:25:56 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/504522829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:25:56 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23461 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: from='client.23441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: pgmap v3739: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:57 compute-0 ceph-mon[74295]: from='client.23447 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/667570897' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4137672803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/504522829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23465 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 07 15:25:57 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003225363' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:57 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23471 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:57 compute-0 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:25:57 compute-0 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:57.949+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 07 15:25:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 07 15:25:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446855042' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.23451 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.23457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.23461 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.23465 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2003225363' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: pgmap v3740: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.23471 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1446855042' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 07 15:25:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534847316' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 07 15:25:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059605516' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 07 15:25:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660064485' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 07 15:25:58 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 07 15:25:58 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438499408' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 07 15:25:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656692931' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1534847316' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3059605516' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1660064485' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1438499408' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 07 15:25:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977948967' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 07 15:25:59 compute-0 crontab[468486]: (root) LIST (root)
Oct 07 15:25:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 07 15:25:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953768568' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 07 15:25:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266719636' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 07 15:25:59 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:25:59 compute-0 nova_compute[259550]: 2025-10-07 15:25:59.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:03.360591+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:04.360753+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:05.360886+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:06.361026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:07.361154+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666280 data_alloc: 234881024 data_used: 21540864
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:08.361303+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 56147968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.728628635s of 10.006806374s, submitted: 121
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.361423+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 56336384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.361655+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 55255040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7493000/0x0/0x4ffc00000, data 0x2e4b813/0x2fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.361792+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355958784 unmapped: 55074816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.362037+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768784 data_alloc: 234881024 data_used: 23756800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.362316+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.362452+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.362623+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.362812+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.363001+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769104 data_alloc: 234881024 data_used: 23764992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.363173+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.363271+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.363422+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.363640+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462882042s of 13.117553711s, submitted: 114
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.363787+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833585 data_alloc: 234881024 data_used: 23764992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569354fdc20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x55693489f4a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569344cdc20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.363973+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569354fc780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556933585680
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.364145+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.364286+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.364522+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.364718+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821841 data_alloc: 234881024 data_used: 23764992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.364874+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354f52c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.365019+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569334545a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.365210+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.365484+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933583a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334543c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.365638+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.283301353s of 10.441823006s, submitted: 21
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826034 data_alloc: 234881024 data_used: 23764992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569354aa000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.365838+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.365978+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 51830784 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.366242+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.366376+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.366744+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875606 data_alloc: 234881024 data_used: 29491200
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.366962+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.367105+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.367248+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.367391+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.367549+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3876746 data_alloc: 234881024 data_used: 29483008
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.367752+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b0000/0x0/0x4ffc00000, data 0x351e813/0x36ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.367906+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.968862534s of 12.149744034s, submitted: 5
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.368083+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 51118080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.368265+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 50880512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.368413+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958876 data_alloc: 234881024 data_used: 30003200
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.368565+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.368758+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.369023+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.369217+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.369380+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954340 data_alloc: 234881024 data_used: 30003200
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.369576+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.369763+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693489e960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569355652c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361299968 unmapped: 49733632 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.944513321s of 10.002432823s, submitted: 77
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60f6000/0x0/0x4ffc00000, data 0x3dd9813/0x3f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.369893+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361308160 unmapped: 49725440 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556935565c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.370038+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.370173+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783798 data_alloc: 234881024 data_used: 22560768
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.370317+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.370464+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.370629+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706b000/0x0/0x4ffc00000, data 0x2e64813/0x2ff3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.370825+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.371034+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x55693345f860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781038 data_alloc: 234881024 data_used: 22560768
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.371213+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 48627712 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.371378+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 55648256 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576909542s of 10.082017899s, submitted: 108
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934894780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.371543+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.371726+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.371913+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.372154+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.372410+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.372603+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.372791+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.373004+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.373173+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.373336+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.373548+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.373684+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.373851+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.373995+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.374117+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.374282+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.374455+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.374621+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.374820+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.375020+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.375154+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.375277+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.375402+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.375676+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.375842+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.376078+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.376259+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.376398+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.376658+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933c71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693416c3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488da40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569333e12c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933461c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.773714066s of 28.863956451s, submitted: 9
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,0,1,7,6])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.376892+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933454f00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b5ea400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569341243c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932b5ed20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934898780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569333e1c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.377105+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.377288+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.377497+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932ede960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464974 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693bacec00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bacec00 session 0x55693345e5a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.377727+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a74000/0x0/0x4ffc00000, data 0x145d7e0/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.377856+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.378033+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934894d20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c2960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.378193+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.378331+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.378501+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.378699+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.378924+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.379251+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.379642+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.379857+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 55566336 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.380044+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.380428+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.380775+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.381054+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.381340+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.313346863s of 19.983018875s, submitted: 34
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.381532+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 53174272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.381750+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e805b000/0x0/0x4ffc00000, data 0x1e6f7f0/0x1ffd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 53084160 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.381959+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.382081+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565560 data_alloc: 218103808 data_used: 6414336
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.382222+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.382360+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.382719+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.383023+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.383183+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.383332+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.383608+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.383759+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.383969+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.384129+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445440292s of 14.635492325s, submitted: 114
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.384284+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.384409+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.384568+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.384715+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.384881+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.385164+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.385303+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.385515+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.385668+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.385820+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.385989+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.386149+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.386416+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.386687+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.386839+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.461366653s of 14.488227844s, submitted: 1
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932eff2c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348985a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558336 data_alloc: 218103808 data_used: 6418432
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c53c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.387024+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c53c00 session 0x556935cb34a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.387175+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.387327+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.387508+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.387773+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.388005+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.388177+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.388314+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.388518+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.388658+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.389067+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.389282+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.389424+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.389608+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.389823+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.390089+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.390308+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.390519+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.390718+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.390863+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.391090+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.391223+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.391393+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.391525+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.391636+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.391771+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.447141647s of 25.612014771s, submitted: 32
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 40411136 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.391986+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934125a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344d05a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933455c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933c4f0e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935e52000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935e52000 session 0x5569333e0d20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.392418+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.392628+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932ea92c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.392804+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556935cb34a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533416 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.392997+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569348985a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.393263+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932eff2c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.393443+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b762800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.393598+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.393741+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616138 data_alloc: 234881024 data_used: 16084992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.393903+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.394052+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.394302+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.394454+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.394671+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.395017+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.395172+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.395356+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.395496+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.395652+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.395904+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.948329926s of 20.243396759s, submitted: 34
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 46022656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.396158+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 42541056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.396359+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.396581+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7583000/0x0/0x4ffc00000, data 0x294f791/0x2adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.396739+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721858 data_alloc: 234881024 data_used: 18522112
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.396994+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.397138+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e757d000/0x0/0x4ffc00000, data 0x2955791/0x2ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.397303+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.397431+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.397642+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.397838+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.397994+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.398145+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.398321+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.398417+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.398582+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.398738+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 41885696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.398887+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.108821869s of 16.882175446s, submitted: 108
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933c4e000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.399033+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.399171+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.399325+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.399454+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.399569+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.399738+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569341c30e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693616a1e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.399875+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932b5fa40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.400101+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341e4000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.400262+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.400403+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369942528 unmapped: 41091072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.400529+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.400660+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.401107+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.401428+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.401621+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.401761+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.401960+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.402125+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.402273+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.362392426s of 19.756223679s, submitted: 9
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.402412+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.402606+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 40165376 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.402737+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372105216 unmapped: 38928384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.402857+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.403022+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.403188+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.403336+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.403497+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.403699+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.403885+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693345ef00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68be000/0x0/0x4ffc00000, data 0x3614791/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371220480 unmapped: 39813120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.404041+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.185196877s of 10.101745605s, submitted: 92
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.404373+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.404756+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693616be00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.404899+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737914 data_alloc: 234881024 data_used: 18817024
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.405023+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.405148+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569333e1c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556934808960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.405401+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.405542+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.406000+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.406215+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.406491+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.406696+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.406856+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.407154+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.407307+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.407528+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.407773+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.407992+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.408197+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.408338+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.408618+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.408788+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.409005+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.409209+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.409383+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.409532+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.409800+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.459395+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.459575+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.459724+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569334550e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efd4a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569348990e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693488d0e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.459863+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.007173538s of 30.416368484s, submitted: 66
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.459985+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.460146+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f5a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693b762800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556933454f00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d3860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.460274+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510209 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb25a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.461018+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.461146+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.461278+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.461434+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693488cb40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.461650+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509809 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361adc20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.461801+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488c3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.461956+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.333779335s of 10.194737434s, submitted: 17
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569356eb4a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.462147+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935b01400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.462282+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.462465+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511570 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.462613+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.462713+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.462797+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.462949+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.463109+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.463267+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.463401+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.463528+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.463642+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.463777+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.512318611s of 13.137044907s, submitted: 4
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.463911+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.464085+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.464241+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.464416+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.464623+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572334 data_alloc: 218103808 data_used: 10981376
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.464738+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.464981+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8525000/0x0/0x4ffc00000, data 0x19ae76e/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.465102+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.465265+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.465403+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.465591+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.466013+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.466278+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.466826+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.467122+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.467399+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693345eb40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569335854a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x556933583680
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.467563+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569344cc3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.535245895s of 16.586713791s, submitted: 25
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1,0,6])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.467795+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb32c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932ea9860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 58228736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x556934895860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f4780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.467918+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58220544 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.468158+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685997 data_alloc: 218103808 data_used: 11112448
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.468331+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.468458+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.468593+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efe000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.469092+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933583e00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.469293+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569353d23c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341c25a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686790 data_alloc: 218103808 data_used: 11112448
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.469446+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.469576+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 58204160 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.469747+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.469878+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.470035+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.470416+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.470617+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.470836+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.471029+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.471209+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 07 15:25:59 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980792972' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.471424+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.471591+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.612783432s of 20.685228348s, submitted: 31
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.471724+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 52690944 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.471863+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 53075968 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,2,0,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.472016+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3846534 data_alloc: 234881024 data_used: 25468928
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.472172+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.472319+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70d5000/0x0/0x4ffc00000, data 0x2df676e/0x2f81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.472519+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.472665+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.472881+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841770 data_alloc: 234881024 data_used: 25468928
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.473035+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556934125860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489f680
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x55693442ed20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.473219+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.473457+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.473617+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.473737+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591774 data_alloc: 218103808 data_used: 11173888
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.757778168s of 12.639021873s, submitted: 116
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932876f00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556934898f00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.473870+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 59170816 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489fa40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.474025+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.474186+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.474369+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.474553+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.474745+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.474949+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.475171+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.475399+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.475618+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.475778+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.476010+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.476135+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.476316+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.476524+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.476690+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.476845+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.476982+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.477202+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.477471+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.477728+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.478190+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.478446+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.478607+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.478818+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.479044+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.479272+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.479479+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.479693+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.480046+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.480241+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.480408+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.480701+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.481043+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.481376+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.481596+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.288772583s of 35.765110016s, submitted: 48
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569354fc1e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.481716+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934125a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556935cb34a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdb000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569348990e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.482003+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.482190+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.482342+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521521 data_alloc: 218103808 data_used: 5021696
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.482460+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.483048+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.483227+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.484099+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.484228+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.484366+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.484513+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.484651+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.484826+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.484999+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.485161+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.485311+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.485744+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.485886+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.486107+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.138957977s of 18.666332245s, submitted: 15
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557359 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.486253+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.486545+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16be76e/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.487221+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.487443+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.487608+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.488029+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.488249+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.488764+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.489070+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.489284+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.330938339s of 10.883395195s, submitted: 16
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.489468+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.489706+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.489886+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.490105+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.490310+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.490668+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.491025+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.491198+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.491401+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.491593+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.491787+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.491915+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.492080+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.153526306s of 12.898469925s, submitted: 1
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.492210+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361062400 unmapped: 57843712 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.492342+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16c576e/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559113 data_alloc: 218103808 data_used: 8982528
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.492507+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.492686+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x5569343fc800 session 0x556932efeb40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c56000 session 0x5569361acd20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569335852c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 57819136 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.492826+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569344ce3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 65880064 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.493030+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 286 ms_handle_reset con 0x556933526400 session 0x5569362ecf00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 65855488 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.493249+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569341c3a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.493594+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x55693489ef00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c56000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c56000 session 0x556932edf680
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x55693345ef00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.493754+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.494023+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.494186+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.494330+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.494419+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.494603+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569333574a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556934894d20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x556932edf0e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x55693f800800 session 0x55693489e5a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815621376s of 13.853458405s, submitted: 48
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x556932b5f860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x556933583860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556935564d20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x5569344d0960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71400 session 0x5569341c3c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.494756+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.494914+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710d000/0x0/0x4ffc00000, data 0x2dbdb28/0x2f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.495413+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755695 data_alloc: 218103808 data_used: 9789440
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.495626+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf58b/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.496199+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.496375+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.496515+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.496665+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556933526400 session 0x5569354f5a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 69656576 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758141 data_alloc: 218103808 data_used: 9789440
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.496811+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 69640192 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.497041+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 67837952 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.497168+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.497306+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.497461+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.497703+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.497839+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.498012+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.498172+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.498300+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.241428375s of 17.537368774s, submitted: 23
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.498443+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.498592+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.498789+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.498919+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.499130+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866312 data_alloc: 234881024 data_used: 26378240
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.499378+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.499584+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.499762+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.500002+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.500188+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.500376+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.500621+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.500859+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.501026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.501239+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.501404+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.501596+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.501795+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.501956+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.502119+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.502385+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.502593+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.502761+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.503000+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.503245+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.504070+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.504203+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.504326+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.504493+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.504639+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.504784+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872264 data_alloc: 234881024 data_used: 27168768
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556935c4f000 session 0x5569353d25a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.504988+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.446823120s of 32.526866913s, submitted: 8
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x5569348945a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693f800000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367566848 unmapped: 62881792 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556937127400 session 0x556932876780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x55693f800000 session 0x55693416c3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.505158+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556933526400 session 0x5569353d3a40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x556933c4fa40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374775808 unmapped: 59875328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.505288+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556935c4f000 session 0x556932efd2c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374390784 unmapped: 60260352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.505446+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e45c9000/0x0/0x4ffc00000, data 0x58fb8f7/0x5a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556937127400 session 0x556935cb34a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374398976 unmapped: 60252160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526800 session 0x556934125860
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.505573+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223401 data_alloc: 251658240 data_used: 33873920
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526400 session 0x55693489f0e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.505697+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556934891400 session 0x5569354f4780
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e70fe000/0x0/0x4ffc00000, data 0x2dc6482/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.505909+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556933b71c00 session 0x556934899c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x5569343fc800 session 0x556933583c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c4f000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374448128 unmapped: 60203008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.506053+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556935c4f000 session 0x55693501bc20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.506210+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.506386+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3922026 data_alloc: 251658240 data_used: 33869824
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.506537+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127110481s of 10.177739143s, submitted: 163
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.506686+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e70ff000/0x0/0x4ffc00000, data 0x2dc639b/0x2f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 293 ms_handle_reset con 0x556933526400 session 0x5569362ec5a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 65748992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.506854+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b6f400 session 0x5569354f4960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 65732608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.506990+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b71c00 session 0x5569333e0d20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.507178+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.507325+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.507476+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.507614+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.507771+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.507906+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.508001+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.508159+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429761887s of 10.748415947s, submitted: 82
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.508298+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.508463+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.508595+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.508768+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.508924+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.509211+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.509387+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.509547+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.509700+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.509872+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.510024+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.510239+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.510418+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.510550+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.510684+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.510860+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.511003+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.511215+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.511358+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.511523+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.511652+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.511864+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.512035+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.512166+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.512317+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.512446+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s
                                           Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.512595+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.512891+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _finish_auth 0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.514759+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.513023+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.513148+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.513352+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933bdbc00 session 0x5569361ad2c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556932eba800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x556935c56c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:25:59 compute-0 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: get_auth_request con 0x556933526800 auth_method 0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.513522+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935c59000 session 0x5569341e52c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693560b400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x55693360f000 session 0x55693442e3c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935c59000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.513655+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.513891+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.514141+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.514388+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.514985+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.515224+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.841426849s of 37.856082916s, submitted: 15
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.515401+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e12c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556937127400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556937127400 session 0x5569362ed2c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514bc/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.515538+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514f5/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b6f400 session 0x556933585e00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.515653+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b71c00 session 0x556933460f00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e1c20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.515796+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362356736 unmapped: 72294400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935605800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.515917+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629019 data_alloc: 218103808 data_used: 9269248
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.516059+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935605800 session 0x5569334550e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x5569335b2000 session 0x556932ea85a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.516198+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e884e000/0x0/0x4ffc00000, data 0x1675504/0x1810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569335830e0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.516344+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.516492+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.516625+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.516807+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.517019+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.517175+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.517364+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.517527+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.517660+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.517814+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.518325+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.518769+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.519037+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.519381+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.519614+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.519983+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.520263+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.520532+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.520737+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.521024+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.521294+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.521472+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.521664+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.522007+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.522264+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.522511+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.522756+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.523078+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.523320+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.523559+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.523697+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.523909+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.524166+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.524365+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.524588+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.524800+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.525026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.525191+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.525321+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.525448+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.525619+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.525767+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.525915+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.526070+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.526223+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.526382+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.526627+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.526764+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.527026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.527196+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.527338+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.527533+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362414080 unmapped: 72237056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.524005890s of 59.169479370s, submitted: 66
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.527696+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 72228864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567816 data_alloc: 218103808 data_used: 5070848
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.527826+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 72220672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.527954+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.528107+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.528270+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362463232 unmapped: 72187904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 296 ms_handle_reset con 0x556933b6f400 session 0x5569354fcb40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.528430+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.528588+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.528754+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.529008+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.529158+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.529304+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.529440+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.529586+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.529772+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 72171520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.529993+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.192964554s of 14.603412628s, submitted: 105
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.530164+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.530282+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.530466+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.530592+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.530719+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.530987+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.531165+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.531305+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.531472+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.531736+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.531962+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.532173+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.532368+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.532511+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.532690+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.532867+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.533125+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.533321+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.533611+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.534248+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.534507+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.534760+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.535027+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.535205+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.536074+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.536284+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.536446+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.536591+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.536767+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.536993+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.537258+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.537467+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.537657+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.537805+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.538076+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.538237+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.538362+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.538468+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.538679+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.538911+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.539080+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.539221+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.539386+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.539587+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.539744+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.539946+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.540064+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.540254+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.540450+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.540655+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.540857+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.541028+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.541228+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362561536 unmapped: 72089600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.541385+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362569728 unmapped: 72081408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.541641+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.541843+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.542043+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.542221+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.542419+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.542631+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.542964+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.543159+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.543426+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.543590+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.543784+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.544039+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.544234+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.544455+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.544622+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.545121+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.545281+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.545439+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.545996+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.546222+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.546785+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.547156+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.547555+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.547735+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.547914+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.548343+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.548602+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.548795+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.549136+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.549342+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.549557+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.549765+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.549981+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.550148+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.550276+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.550592+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.550755+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.551030+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.551330+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.551598+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 72015872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.551803+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.552022+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.552196+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.552381+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.552577+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.552795+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.552991+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.553129+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.553334+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 71991296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.553575+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 104.046302795s of 104.136131287s, submitted: 16
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556933b71c00 session 0x556933c4f4a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556934891400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556934891400 session 0x55693416cb40
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.553784+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.554036+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.554243+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.554414+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.554542+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.554792+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.554978+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 67788800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.555226+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 67772416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.555404+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,1])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 298 ms_handle_reset con 0x556933526400 session 0x556932b5e000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.555556+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.555713+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.555861+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.556733131s of 11.714488029s, submitted: 44
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508745 data_alloc: 218103808 data_used: 4096000
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.556007+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 299 ms_handle_reset con 0x5569335b2000 session 0x556932efc960
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e905b000/0x0/0x4ffc00000, data 0xa58102/0xbf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.556132+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.556303+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.556485+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.556642+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.556875+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.557011+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.557255+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.557391+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.557608+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.557790+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.557990+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.558138+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.843618393s of 12.988986969s, submitted: 53
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 ms_handle_reset con 0x556933b6f400 session 0x556935cb34a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.558387+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.558524+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.558760+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.559025+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.559201+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.559450+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.559674+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.561167+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.561427+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.561708+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.562361+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.562704+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.563107+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.563899+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.564272+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.564553+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.564771+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.564994+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.565269+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.565512+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.565737+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.565882+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.565997+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.566226+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.566438+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.566588+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.566778+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.567004+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.567198+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.567342+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.567520+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.567672+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.567878+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.568086+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.568239+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.568388+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.568651+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.568806+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.568991+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.569127+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.569356+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.569590+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.569733+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.570176+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.570611+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.570812+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.571084+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.571381+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.571521+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.571661+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.571872+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.572190+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.572339+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.572483+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.572713+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.572852+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.573145+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.573316+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.573501+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.573666+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.573878+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.574021+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.574288+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.574504+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.574670+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.574859+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.575026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.575220+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.575452+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.575629+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.575838+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.576190+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.576541+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.576903+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.577166+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.577648+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.577981+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.578186+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.578555+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.578871+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.579189+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.579464+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.579638+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.579881+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.580165+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.580431+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.580741+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.580917+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.581111+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.581271+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.581492+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.581665+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.581852+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.581994+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.582154+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.582405+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.582676+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.583026+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.583213+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.583458+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.583713+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.583919+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.584133+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.584396+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.584587+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.469673157s of 105.490242004s, submitted: 20
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.584746+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 303 ms_handle_reset con 0x556933b71c00 session 0x556935cb32c0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556935605800
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.584915+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.585106+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 ms_handle_reset con 0x556935605800 session 0x5569344ce5a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.585269+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.585538+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.585774+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.586057+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.586243+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.586398+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.586542+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.586691+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.587067+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.587225+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.587511+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.587725+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.588018+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.588196+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.588393+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.588601+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.588735+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.588869+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.589113+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.589264+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.589466+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.589611+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.589786+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.590010+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.590201+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.590384+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.590549+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933526400
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.815532684s of 29.996992111s, submitted: 26
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.590746+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 ms_handle_reset con 0x556933526400 session 0x5569355654a0
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.590982+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.591251+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.591513+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.591685+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.592011+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.592194+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.592357+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.592597+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.592809+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.593025+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.439725876s of 10.520721436s, submitted: 15
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.593230+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.593382+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.593542+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.593742+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.594020+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.594170+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.594328+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.594504+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.594655+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.594847+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.595018+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.595245+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.595401+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.595567+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.595763+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.595956+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.596141+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.596267+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.596425+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 76906496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.596628+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.596811+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.597001+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.597171+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.597303+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.597487+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.597643+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.597833+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.598007+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.598273+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.598475+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 76881920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.598659+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.598853+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.599017+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.599212+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.599455+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.599699+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.599897+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.600075+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.600265+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.600448+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.600658+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.600911+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.601207+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.601385+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.601602+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.601762+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.601970+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.602115+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.602274+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.602441+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.602670+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 76824576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.602829+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:25:59 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:25:59 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.603059+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:25:59 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.603335+0000)
Oct 07 15:25:59 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.603602+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.603804+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.604070+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.604446+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.605075+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.605425+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.605694+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.605999+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.606166+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.606365+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.606615+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.606808+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.606988+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.607231+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 76791808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.607469+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.607656+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.607839+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.608092+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.608348+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.608623+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.609021+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 76767232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.609256+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.609518+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.610149+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.610375+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.610603+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.610761+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.610970+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.611116+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.611331+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.611591+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.611768+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.612029+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.612115+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.612249+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.612415+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.612616+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.612802+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.612973+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.613071+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.613217+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.613399+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.613560+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.613718+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.613870+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.614049+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.614219+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.614344+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.614612+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.614912+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.615266+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.615439+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.615618+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.615811+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.616006+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.616185+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.616348+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.616510+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.616660+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.616804+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.617017+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.617220+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.617391+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.617777+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.617991+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.618193+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.618387+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.618589+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.618746+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.619020+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.619220+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.619395+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.619595+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.619787+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.619993+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.620142+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.620344+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.620542+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.620716+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.620845+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.621070+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.621281+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.621420+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.621627+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.621815+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.621972+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.622232+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.622408+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.622572+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.622776+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.623082+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.623260+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.623367+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.623541+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 76570624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.623809+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.624000+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.624164+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.627095+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.632187+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.634315+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.634560+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.635513+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.637320+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.638290+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.640091+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.640294+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.640580+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.640886+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.642072+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.643005+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.643309+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.643832+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.644122+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.644414+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.644623+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.644786+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.645002+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.645222+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.645414+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.645606+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.645834+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.646052+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.646277+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.646513+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.646656+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.646790+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.647029+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.647198+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.647380+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.647595+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.648786+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.648953+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.649100+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.649233+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.649427+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.649567+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.649723+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.649846+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.650002+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.650167+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.650350+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 76464128 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.650502+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.650680+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.650914+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.651179+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.651357+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.651549+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.651700+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.651876+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.652029+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.652226+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.652399+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.652574+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.652755+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.652969+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.653155+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.653300+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.653482+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.653693+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.653915+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.654174+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.654521+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.654854+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.655144+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.655452+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.655640+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.656011+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.656148+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.656299+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.656488+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.656709+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.657010+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.657257+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 76374016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.657584+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.657802+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.657977+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.658218+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.658472+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.658662+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.658832+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.658996+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.659174+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.659351+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.659594+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.659819+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 645 writes, 1636 keys, 645 commit groups, 1.0 writes per commit group, ingest: 0.83 MB, 0.00 MB/s
                                           Interval WAL: 645 writes, 286 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.660027+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.663699+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.664043+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.664318+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.664557+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.664739+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.665014+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.665290+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.665484+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.665654+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.665821+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.666004+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.666151+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.666310+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.666516+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.666744+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.667009+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.667241+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.667406+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.667573+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.667777+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.667904+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.668073+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.668241+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.668406+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.668608+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.668781+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.668983+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 76267520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.669176+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.669353+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.669574+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.669805+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.670035+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.670208+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.670388+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.670590+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.670825+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.671026+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.671249+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.671435+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.671633+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.671819+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.672032+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.672249+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 76226560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.672440+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.672669+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.672829+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.673035+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.673373+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.673550+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.673729+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.673905+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.674234+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.674465+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.674685+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.674903+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.675105+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.675339+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.675501+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.675723+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.675873+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.676087+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.676356+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.676570+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.676747+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.676989+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.677225+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.677426+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.677659+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.677882+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.678052+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.758331299s of 310.772949219s, submitted: 14
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.678193+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 76152832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.678342+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 75071488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.678468+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.679155+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.679626+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.679896+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.681111+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.682040+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.682186+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.682324+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.682460+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.682610+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.682910+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.683075+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.683240+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.683381+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.683525+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.683667+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.684031+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.684229+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.684525+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.684819+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.685105+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.685313+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.685531+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.685817+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.686008+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.686194+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.686374+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.686585+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.686753+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.686916+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.687147+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.687388+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.687613+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.687816+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.688038+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.688248+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.688419+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.688575+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.688726+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.688990+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.689175+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.689339+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.689506+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165553733' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.689718+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.689985+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.690133+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.690336+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.690504+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.690639+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.690792+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.691082+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.691283+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.691477+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.691645+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.691791+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.691950+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.692100+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.692302+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.692478+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.692670+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.692852+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.693014+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.693201+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.693382+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.693526+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.693686+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.693882+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.694038+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.694222+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.694408+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.694587+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.694739+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.694967+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.695145+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.695291+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.695424+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.695556+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.695683+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.695829+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.695984+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.696129+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.696275+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.696468+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.696827+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.697006+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.697165+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.697266+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.697446+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.697605+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.697773+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.698039+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.698230+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.698441+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.698601+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.698801+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.699005+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569335b2000
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.285827637s of 97.792411804s, submitted: 90
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.699331+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558328 data_alloc: 218103808 data_used: 1105920
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 307 ms_handle_reset con 0x5569335b2000 session 0x556933454f00
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.699482+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.699653+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b6f400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.699797+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 308 ms_handle_reset con 0x556933b6f400 session 0x55693489e000
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.700017+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.700155+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476702 data_alloc: 218103808 data_used: 1114112
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.700367+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 73744384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.700559+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933b71c00
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.700821+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.700998+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e983c000/0x0/0x4ffc00000, data 0x2690e3/0x412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.069248199s of 10.059258461s, submitted: 86
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 310 ms_handle_reset con 0x556933b71c00 session 0x55693489e960
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.701142+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 73719808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.701297+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.701461+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.701631+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.701804+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.701961+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360947712 unmapped: 73703424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.702120+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360955904 unmapped: 73695232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.702285+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.702482+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.702679+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.702883+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.703088+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.703318+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.703571+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.703770+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.704031+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.704253+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.704392+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.704570+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.704776+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.704975+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.705235+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.705434+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.705651+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.706261+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.706520+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.707434+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.708057+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.708712+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.709424+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.709795+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.710405+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.710567+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.710756+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.710895+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.711279+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.711582+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.711741+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.711872+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.712211+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.712428+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.712674+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.712800+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.712986+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.713188+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.713391+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.713567+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.713760+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.714056+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.714330+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.714540+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693288bc00
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491163 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.736141205s of 51.071052551s, submitted: 14
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.714783+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.715031+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _renew_subs
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26e2b0/0x41b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.715211+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.715392+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 ms_handle_reset con 0x55693288bc00 session 0x556933455680
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.715775+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.715984+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.716201+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.716429+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.716599+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.716778+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.716903+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.388833046s of 11.278204918s, submitted: 27
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.717083+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.717259+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.717398+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.717526+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.717748+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.717951+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.718108+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.718270+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.718443+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.718626+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.718766+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.718984+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.719161+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.719347+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.719539+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.719728+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.719895+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.720046+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.720196+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.720372+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.720487+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.720710+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.720995+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.721171+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.721354+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361111552 unmapped: 73539584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.721513+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.721624+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.721747+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.722017+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.722141+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.722285+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.722394+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.722527+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.722709+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.722893+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.722988+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.723180+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.723387+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.723541+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.723687+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.723845+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.724054+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361144320 unmapped: 73506816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.724222+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361152512 unmapped: 73498624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.724358+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.724556+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.724732+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.724897+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.725000+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.725286+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.725498+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.725624+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.725760+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.725972+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.726247+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.727268+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.727466+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.727626+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.727798+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.727974+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.728104+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.728295+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.728426+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.728595+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556932eba800 session 0x55693501a3c0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x5569343fc800
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.728730+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361209856 unmapped: 73441280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.728875+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x55693560b400 session 0x556935cb3c20
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x556933bdbc00
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556935c59000 session 0x556933583a40
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: handle_auth_request added challenge on 0x55693560b400
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.728998+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.729116+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.729255+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.729398+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.729530+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:59.729669+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:00.729789+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361250816 unmapped: 73400320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:01.729920+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 74186752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:02.730087+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:03.730239+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 74424320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf dump' '{prefix=perf dump}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:04.730380+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf schema' '{prefix=perf schema}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:05.730548+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:06.730682+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:07.730832+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:08.732973+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:09.733096+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:10.733235+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:11.733361+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:12.733489+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:13.733645+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:14.733772+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:15.733877+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:16.733983+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:17.734144+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:18.734262+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.127 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 07 15:26:00 compute-0 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.127 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:19.734385+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:20.734530+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:21.734668+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:22.734815+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:23.734958+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:24.735080+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:25.735256+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:26.735392+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:27.735564+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:28.735711+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:29.735860+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:30.735994+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:31.736131+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:32.736290+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:33.736451+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:34.736581+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:35.736757+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:36.736892+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:37.737054+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:38.737206+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:39.737401+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:40.737606+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:41.737808+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:42.738030+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:43.738240+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:44.738396+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:45.738550+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:46.738759+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:47.738893+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:48.739015+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:49.739161+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:50.739279+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:51.739402+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:52.739547+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:53.739683+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:54.739831+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:55.739980+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:56.740159+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:57.740296+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:58.740472+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:59.740584+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:00.740767+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:01.740963+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:02.741091+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:03.741250+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:04.741432+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:05.741578+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:06.741725+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:07.741893+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:08.742042+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 74858496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:09.742218+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 74858496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:10.742382+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 74858496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:11.742528+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 74858496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:12.742737+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 74858496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:13.742914+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:14.743177+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:15.743438+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:16.743614+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:17.743797+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:18.743957+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:19.744104+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:20.744239+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 74833920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:21.744387+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 74833920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:22.744536+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 74833920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:23.744707+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 74833920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:24.744867+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:25.745030+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:26.745205+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:27.745343+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:28.745473+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:29.745631+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:30.745765+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:31.745888+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 74825728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:32.746015+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 74809344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:33.746172+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 74809344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:34.746329+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 74809344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:35.746467+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 74809344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:36.746612+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 74801152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:37.746760+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 74801152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:38.746885+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 74801152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:39.747048+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 74801152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:40.747261+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:41.747452+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:42.747620+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:43.747802+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:44.747988+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:45.748141+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 74792960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:46.748274+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 74784768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:47.748434+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 74784768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:48.748610+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:49.748752+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:50.748885+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:51.749026+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:52.749251+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:53.749452+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 74776576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:54.749596+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 74768384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:55.749749+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 74768384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:56.749886+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 74768384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:57.750045+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 74760192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:58.750209+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 74752000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:59.750408+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 74752000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:00.750585+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 74752000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:01.750821+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 74752000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:02.751029+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 74743808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:03.751242+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 74743808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:04.751452+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:05.751674+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:06.751866+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:07.752034+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:08.752178+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:09.752380+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:10.752519+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:11.752773+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 74727424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:12.753007+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:13.753192+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:14.753401+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:15.753591+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:16.753781+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:17.754042+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:18.754241+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 74719232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:19.754379+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 74711040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:20.754519+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 74694656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:21.754659+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 74694656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:22.754791+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 74694656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:23.754957+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 74694656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:24.755075+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 74686464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:25.755249+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 74686464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:26.755411+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 74678272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:27.755592+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 74678272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:28.755770+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 74678272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:29.755970+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 74678272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:30.756213+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 74678272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:31.756453+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 74670080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:32.756619+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 74670080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:33.756803+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 74670080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:34.757016+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 74670080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:35.757198+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 74670080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:36.757357+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 74653696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:37.757572+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 74653696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:38.757709+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 74653696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:39.757881+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 74653696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:40.758094+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 74645504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:41.758250+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 74645504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:42.758430+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 74645504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:43.758582+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 74645504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:44.758729+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:45.758864+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:46.758998+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:47.759170+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:48.759325+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:49.759515+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:50.759721+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:51.759887+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 74637312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:52.760071+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 74629120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:53.760531+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 74629120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:54.760708+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 74620928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:55.760865+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 74620928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:56.761039+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 74620928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:57.761219+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 74620928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:58.761398+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360038400 unmapped: 74612736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:59.761582+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360038400 unmapped: 74612736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:00.761828+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:01.761982+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:02.762114+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:03.762284+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:04.762434+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:05.762594+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:06.762756+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:07.762965+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 74596352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:08.763153+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 74588160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:09.763298+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 74588160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:10.763446+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 74588160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:11.763697+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 74579968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:12.763920+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 74579968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:13.764192+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 74579968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:14.764360+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 74579968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:15.764544+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 74571776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:16.764692+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:17.764921+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:18.765103+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:19.765305+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:20.765440+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:21.765607+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:22.765788+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 74563584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:23.765972+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 74555392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:24.766138+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 74555392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:25.766272+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 74555392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:26.766430+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 74555392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:27.766671+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 74547200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:28.766866+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 74547200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:29.767021+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 74547200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:30.767173+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 74547200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:31.767333+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 74547200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:32.767587+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:33.767752+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:34.767914+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:35.768146+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:36.768347+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:37.768587+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:38.768711+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 74530816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:39.769006+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 74522624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:40.769228+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 74522624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:41.769383+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 74522624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:42.769548+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 74522624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:43.769715+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 74514432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:44.769901+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 74514432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:45.770186+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 74514432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:46.770368+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 74514432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:47.770571+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 74514432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:48.770711+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:49.770884+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:50.771101+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:51.771314+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:52.771460+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:53.771610+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:54.771793+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:55.772035+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 74498048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:56.772201+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 74473472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:57.772462+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 74473472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:58.772604+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 74473472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:59.772756+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 74473472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:00.773033+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 74473472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:01.773349+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 74465280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:02.773663+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 74465280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:03.773839+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 74465280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:04.773994+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:05.774240+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:06.774464+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:07.774625+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:08.774768+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:09.775058+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:10.775212+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 74448896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:11.775392+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 74440704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:12.775549+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:13.775716+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:14.775857+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:15.775993+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:16.776135+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:17.776319+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 74424320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:18.776510+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 74424320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:19.776823+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360235008 unmapped: 74416128 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:20.777076+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360235008 unmapped: 74416128 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:21.777295+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 74407936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:22.777487+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 74407936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:23.777650+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 74399744 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:24.777870+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 74399744 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:25.778080+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 74399744 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:26.778231+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 74399744 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:27.778391+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:28.778518+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:29.778650+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:30.778823+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:31.779055+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:32.779212+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:33.779382+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:34.779530+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 74391552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:35.779736+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:36.780078+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:37.780278+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:38.780390+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:39.780571+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:40.780813+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:41.781015+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:42.781211+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 74375168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:43.781393+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 74366976 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:44.781534+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 74366976 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:45.781670+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 74366976 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:46.781809+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 74366976 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:47.781948+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 50K writes, 196K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 413 writes, 956 keys, 413 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                           Interval WAL: 413 writes, 182 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 74350592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:48.782072+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 74350592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:49.782285+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 74350592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:50.782474+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 74342400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:51.782683+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:52.782870+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:53.783274+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:54.783446+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:55.783651+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:56.783902+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:57.784171+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 74334208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:58.784394+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 74326016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:59.784521+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 74317824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:00.784661+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 74317824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:01.784807+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 74317824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:02.784955+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 74317824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:03.785176+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 74309632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:04.785353+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 74309632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:05.785564+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 74301440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:06.785706+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 74293248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:07.785915+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 74293248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:08.786176+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 74293248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:09.786328+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 74293248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:10.786501+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 74293248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:11.786721+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 74285056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:12.786911+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 74285056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:13.787186+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 74285056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:14.787401+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 74285056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:15.787538+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:16.787666+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:17.787856+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:18.788055+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:19.788199+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:20.788373+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 74276864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:21.788528+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 74268672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:22.788681+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 74268672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:23.788846+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 74260480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:24.789151+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 74260480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:25.789306+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 74260480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:26.789435+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 74260480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:27.789554+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 74252288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:28.789713+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 74252288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:29.789902+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 74252288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:30.790138+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 74252288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:31.790284+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 74244096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:32.790426+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 74244096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:33.790643+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 74235904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:34.790762+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 74235904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:35.791015+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 74235904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:36.791273+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 74235904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:37.791433+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 74227712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:38.791563+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 74227712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:39.791734+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 74227712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:40.791902+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 74219520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:41.792043+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 74219520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:42.792188+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 74219520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:43.792289+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:44.792437+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 74211328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:45.792598+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 74211328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:46.792804+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 74211328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:47.793179+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 74211328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:48.793294+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:49.793482+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:50.793660+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:51.793839+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:52.794022+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:53.797156+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 74203136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:54.797327+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 74194944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:55.797455+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 74194944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:56.797648+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 74178560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:57.797801+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 74178560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:58.797911+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 74178560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 429.339019775s of 429.981933594s, submitted: 14
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:59.798053+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 74170368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:00.798178+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 74162176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:01.798310+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 74162176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:02.798483+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:03.798656+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:04.798825+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:05.799039+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:06.799243+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:07.799420+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:08.799750+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:09.800002+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:10.800155+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:11.800332+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 74113024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:12.800548+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:13.800719+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:14.800846+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:15.801072+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:16.801259+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:17.801432+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:18.801613+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:19.801739+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360546304 unmapped: 74104832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:20.801896+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:21.802069+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:22.802190+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:23.802408+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:24.802515+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:25.802653+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:26.802779+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:27.802981+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 74096640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:28.803119+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360562688 unmapped: 74088448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:29.803277+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360562688 unmapped: 74088448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:30.803428+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:31.803668+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:32.803894+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:33.804182+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:34.804360+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:35.804552+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360570880 unmapped: 74080256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:36.804837+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:37.805106+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:38.805272+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:39.805476+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:40.805639+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:41.805795+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 74072064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:42.806003+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:43.806183+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:44.806375+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:45.806776+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:46.807141+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:47.807484+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 74063872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:48.807831+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360595456 unmapped: 74055680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:49.808048+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360595456 unmapped: 74055680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:50.808202+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360595456 unmapped: 74055680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:51.808396+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360595456 unmapped: 74055680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:52.808574+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:53.808761+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:54.808990+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:55.809181+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:56.809339+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:57.809480+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 74047488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:58.809595+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 74039296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:59.809807+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 74039296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:00.809975+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 74031104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:01.810230+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 74031104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:02.810363+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 74031104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:03.810597+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 74031104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:04.810804+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:05.811045+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:06.811236+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:07.811414+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:08.811749+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:09.811879+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:10.812100+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:11.812226+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:12.812373+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:13.812613+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:14.812820+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 74022912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:15.813048+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360636416 unmapped: 74014720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:16.813550+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 74006528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:17.814031+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 74006528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:18.814269+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 74006528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:19.814402+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 74006528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:20.814684+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 73998336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:21.815114+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 73998336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:22.815419+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 73990144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:23.815730+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 73990144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:24.815968+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 73981952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:25.816130+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 73981952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:26.816255+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 73973760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:27.816491+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 73973760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:28.816722+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 73973760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:29.816879+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 73973760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:30.817162+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 73973760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:31.817451+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 73965568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:32.817640+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:33.817841+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:34.818113+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:35.818297+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:36.818485+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:37.818762+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 73957376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:38.818911+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 73949184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:39.819136+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 73949184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:40.819407+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 73949184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:41.819608+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 73949184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:42.819769+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 73940992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:43.820012+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 73940992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:44.820202+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 73932800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:45.820411+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 73932800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:46.820581+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 73932800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:47.820764+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 73932800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:48.820904+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 73924608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:49.821048+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 73924608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:50.821189+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 73924608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:51.821387+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 73916416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:52.821538+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 73916416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:53.821698+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:54.821840+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:55.822043+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:56.822181+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:57.822320+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:58.822494+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 73908224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:59.822631+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 73900032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:00.822842+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 73900032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:01.823007+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 73900032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:02.823184+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 73900032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:03.823328+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 73891840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:04.823454+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 73883648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:05.823632+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 73883648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:06.823865+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 73875456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:07.823989+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 73875456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:08.824115+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 73875456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:09.824244+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 73867264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:10.824369+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 73867264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:11.824556+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 73867264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:12.824707+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 73867264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:13.824880+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 73867264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:14.825045+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 73859072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:15.825246+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 73859072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:16.825391+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 73859072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:17.825634+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 73859072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:18.825769+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 73859072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:19.825891+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:20.826018+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:21.826130+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:22.826249+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:23.826401+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:24.826537+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:00 compute-0 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:00 compute-0 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494252 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 73842688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:25.826665+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 73834496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:26.826807+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:27.826941+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:28.827036+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 73826304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: tick
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_tickets
Oct 07 15:26:00 compute-0 ceph-osd[89062]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:29.827167+0000)
Oct 07 15:26:00 compute-0 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:26:00 compute-0 nova_compute[259550]: 2025-10-07 15:26:00.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2656692931' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3977948967' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2953768568' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1266719636' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: pgmap v3741: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1980792972' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2165553733' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116493492' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 07 15:26:00 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 07 15:26:00 compute-0 rsyslogd[1004]: imjournal from <np0005473739:ceph-osd>: begin to drop messages due to rate-limiting
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748421409' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287888639' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633804690' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 07 15:26:00 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2359879626' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 07 15:26:00 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:26:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1116493492' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2748421409' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/287888639' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1633804690' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2359879626' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23505 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23507 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:01 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23509 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:01 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23511 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23513 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: from='client.23505 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: from='client.23507 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: pgmap v3742: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:02 compute-0 ceph-mon[74295]: from='client.23509 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: from='client.23511 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: from='client.23513 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23517 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 07 15:26:02 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/560328971' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 07 15:26:02 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23521 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:02 compute-0 podman[468951]: 2025-10-07 15:26:02.888905426 +0000 UTC m=+0.092037109 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 07 15:26:02 compute-0 podman[468954]: 2025-10-07 15:26:02.892206692 +0000 UTC m=+0.092661474 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 07 15:26:03 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23525 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 07 15:26:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816481822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: from='client.23517 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/560328971' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: from='client.23521 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2816481822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 07 15:26:03 compute-0 nova_compute[259550]: 2025-10-07 15:26:03.459 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:26:03 compute-0 nova_compute[259550]: 2025-10-07 15:26:03.459 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 07 15:26:03 compute-0 nova_compute[259550]: 2025-10-07 15:26:03.459 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 07 15:26:03 compute-0 nova_compute[259550]: 2025-10-07 15:26:03.498 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 07 15:26:03 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23527 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 07 15:26:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132113437' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:26:03 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:03 compute-0 nova_compute[259550]: 2025-10-07 15:26:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 07 15:26:03 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 07 15:26:03 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126725969' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='client.23525 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='client.23527 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4132113437' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: pgmap v3743: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2126725969' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 07 15:26:04 compute-0 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:09.070018+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.049137115s of 10.020059586s, submitted: 95
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 54714368 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:10.070176+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 54706176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:11.070303+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329859072 unmapped: 54665216 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea2ca000/0x0/0x4ffc00000, data 0x1728189/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,5,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:12.070496+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329867264 unmapped: 54657024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:13.070646+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413802 data_alloc: 218103808 data_used: 7663616
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:14.070821+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:15.070991+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:16.071207+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea287000/0x0/0x4ffc00000, data 0x176b189/0x18f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:17.071326+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329875456 unmapped: 54648832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:18.071485+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412530 data_alloc: 218103808 data_used: 7663616
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 54509568 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:19.071702+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 54509568 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:20.071907+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 54501376 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:21.072166+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea265000/0x0/0x4ffc00000, data 0x178d189/0x1919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 54501376 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf7e043c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7e050e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a49a40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf779f4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:22.072314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.480791092s of 12.996686935s, submitted: 74
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 52592640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:23.072480+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55dafa7bc000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa710c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa710c00 session 0x55daf7e043c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7cbba40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476697 data_alloc: 218103808 data_used: 7663616
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990ad20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf96b52c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:24.072668+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:25.072825+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:26.073026+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0f000/0x0/0x4ffc00000, data 0x1de11fb/0x1f6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 53116928 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:27.073211+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:28.073385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476785 data_alloc: 218103808 data_used: 7667712
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:29.073590+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:30.073875+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 53108736 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:31.074055+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf8b16f00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:32.074226+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c7d000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:33.074370+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476813 data_alloc: 218103808 data_used: 7671808
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 53100544 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:34.074550+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:35.074686+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:36.074821+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:37.074979+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:38.075117+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513933 data_alloc: 234881024 data_used: 12787712
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:39.075305+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:40.075492+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:41.075648+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.024848938s of 19.916954041s, submitted: 48
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:42.075776+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:43.075977+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9c0c000/0x0/0x4ffc00000, data 0x1de41fb/0x1f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513933 data_alloc: 234881024 data_used: 12787712
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 53075968 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:44.076224+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332677120 unmapped: 51847168 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:45.076383+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335388672 unmapped: 49135616 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9762000/0x0/0x4ffc00000, data 0x228e1fb/0x241c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:46.076533+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958a000/0x0/0x4ffc00000, data 0x245e1fb/0x25ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 49070080 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:47.076673+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 49520640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:48.076825+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3586847 data_alloc: 234881024 data_used: 14303232
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x24f51fb/0x2683000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:49.077010+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:50.077148+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:51.077333+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:52.077498+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.396517754s of 10.509934425s, submitted: 128
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:53.077906+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584023 data_alloc: 234881024 data_used: 14307328
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335011840 unmapped: 49512448 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:54.078077+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c7d000 session 0x55dafb3c43c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e94d9000/0x0/0x4ffc00000, data 0x25171fb/0x26a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 49496064 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:55.078267+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:56.078433+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafa76a3c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:57.078593+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:58.078727+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427669 data_alloc: 218103808 data_used: 7663616
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:53:59.078980+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:00.079148+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f19000/0x0/0x4ffc00000, data 0x179c189/0x1928000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:01.079360+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:02.079552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafea70000 session 0x55daf96b4780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.475329399s of 10.017469406s, submitted: 31
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafa80c960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330842112 unmapped: 53682176 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:03.079758+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427009 data_alloc: 218103808 data_used: 7663616
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330850304 unmapped: 53673984 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:04.079908+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330866688 unmapped: 53657600 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:05.080096+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990a5a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:06.080245+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:07.080416+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:08.080697+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:09.081017+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:10.081192+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:11.081377+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:12.081551+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:13.081717+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:14.081883+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:15.082049+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 53649408 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:16.082236+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 53641216 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:17.082421+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:18.082585+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:19.082752+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:20.082864+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:21.083041+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:22.083171+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:23.083325+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:24.083492+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:25.083635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:26.083794+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:27.084003+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:28.084174+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 53633024 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:29.084362+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:30.084528+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:31.084701+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf44000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:32.084879+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330899456 unmapped: 53624832 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:33.085046+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330907648 unmapped: 53616640 heap: 384524288 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf96b5680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98ef2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf778e3c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8adeb40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298025 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.468921661s of 31.110679626s, submitted: 27
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:34.085203+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344358912 unmapped: 48037888 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea245000/0x0/0x4ffc00000, data 0x17ad189/0x1939000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf7c823c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:35.085374+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:36.085552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:37.085803+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:38.085993+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403217 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:39.086206+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea184000/0x0/0x4ffc00000, data 0x186e189/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:40.086385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330915840 unmapped: 61480960 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafea70000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafea70000 session 0x55daf7c7f2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:41.086525+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 62865408 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:42.086671+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 62865408 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:43.086810+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329555968 unmapped: 62840832 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504982 data_alloc: 234881024 data_used: 17461248
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:44.087578+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:45.087833+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:46.087999+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:47.088145+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:48.088411+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504982 data_alloc: 234881024 data_used: 17461248
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:49.088628+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:50.089086+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:51.089401+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 61726720 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:52.090147+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 61718528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea183000/0x0/0x4ffc00000, data 0x186e1ac/0x19fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:53.090542+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 61718528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505142 data_alloc: 234881024 data_used: 17465344
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.601863861s of 20.106889725s, submitted: 18
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:54.090694+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333529088 unmapped: 58867712 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9d3e000/0x0/0x4ffc00000, data 0x1cb31ac/0x1e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,2])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:55.090850+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 332652544 unmapped: 59744256 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:56.091014+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:57.091311+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:58.091647+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567376 data_alloc: 234881024 data_used: 17616896
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:54:59.091831+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ae4000/0x0/0x4ffc00000, data 0x1f051ac/0x2092000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:00.092198+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:01.092526+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 58195968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:02.092831+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:03.093042+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562076 data_alloc: 234881024 data_used: 17616896
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:04.093190+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9acd000/0x0/0x4ffc00000, data 0x1f241ac/0x20b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:05.093420+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:06.093552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:07.093682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:08.093850+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9acd000/0x0/0x4ffc00000, data 0x1f241ac/0x20b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333537280 unmapped: 58859520 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.403264999s of 14.527244568s, submitted: 69
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562148 data_alloc: 234881024 data_used: 17616896
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:09.094087+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:10.094241+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:11.094413+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac3000/0x0/0x4ffc00000, data 0x1f2e1ac/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:12.094575+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:13.094756+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562148 data_alloc: 234881024 data_used: 17616896
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:14.095076+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:15.095256+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac0000/0x0/0x4ffc00000, data 0x1f311ac/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:16.095413+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:17.095579+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 58851328 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:18.095717+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562224 data_alloc: 234881024 data_used: 17616896
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:19.096015+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:20.096168+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:21.096356+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333553664 unmapped: 58843136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ac0000/0x0/0x4ffc00000, data 0x1f311ac/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:22.096566+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333561856 unmapped: 58834944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c7e960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:23.096753+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.645236015s of 14.925461769s, submitted: 4
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 65200128 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafb59da40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae1ac/0xc3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:24.096990+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:25.097112+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:26.097278+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:27.097461+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:28.097671+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:29.097881+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:30.098054+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:31.098246+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:32.098467+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:33.098626+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:34.098973+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:35.099185+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:36.099354+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:37.099545+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:38.099703+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:39.099909+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:40.100080+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:41.100299+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:42.100528+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:43.100694+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311046 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:44.100888+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:45.101084+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:46.101319+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:47.101502+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 65183744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:48.101633+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf43000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.980880737s of 25.078647614s, submitted: 23
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323682304 unmapped: 68714496 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391535 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:49.101791+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55dafad4d0e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:50.101923+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf96b5e00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:51.102079+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:52.102241+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccb000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccb000 session 0x55daf96dcd20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c832c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:53.102383+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990af00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3372917 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:54.102593+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7d7000/0x0/0x4ffc00000, data 0x121a1eb/0x13a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 67444736 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:55.102784+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf9696f00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:56.103047+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323903488 unmapped: 68493312 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:57.103369+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 323911680 unmapped: 68485120 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:58.103508+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429431 data_alloc: 218103808 data_used: 10829824
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:55:59.103743+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:00.103909+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:01.104045+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:02.104208+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:03.104369+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429431 data_alloc: 218103808 data_used: 10829824
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:04.104585+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:05.104796+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:06.105130+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea7b2000/0x0/0x4ffc00000, data 0x123e1fa/0x13cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:07.105273+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:08.105419+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 324878336 unmapped: 67518464 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.919275284s of 20.437402725s, submitted: 33
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433179 data_alloc: 218103808 data_used: 10850304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:09.105599+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327008256 unmapped: 65388544 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:10.105756+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328482816 unmapped: 63913984 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:11.105887+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x1a3c1fa/0x1bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328491008 unmapped: 63905792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:12.106031+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 64561152 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:13.106914+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 64561152 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500709 data_alloc: 218103808 data_used: 11161600
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:14.107196+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328916992 unmapped: 63479808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f47000/0x0/0x4ffc00000, data 0x1aa71fa/0x1c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,10])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:15.107322+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:16.107516+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:17.107697+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:18.107866+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:19.108078+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507283 data_alloc: 218103808 data_used: 10985472
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:20.108254+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 328925184 unmapped: 63471616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9f00000/0x0/0x4ffc00000, data 0x1aea1fa/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:21.108475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.159471512s of 12.475979805s, submitted: 113
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:22.108631+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9ee1000/0x0/0x4ffc00000, data 0x1b0f1fa/0x1c9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:23.108721+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:24.108854+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505995 data_alloc: 218103808 data_used: 10989568
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 63332352 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42dc00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42dc00 session 0x55dafad4d4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafa76a1e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf990be00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa80c780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:25.109019+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 55296000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8aded20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55daf86a32c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:26.109177+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42dc00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42dc00 session 0x55daf8a01680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf86a32c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa80c780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:27.109309+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ef000/0x0/0x4ffc00000, data 0x26ff26c/0x288f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:28.109513+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:29.109717+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3603778 data_alloc: 218103808 data_used: 10989568
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:30.109858+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ed000/0x0/0x4ffc00000, data 0x270126c/0x2891000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:31.109999+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ed000/0x0/0x4ffc00000, data 0x270126c/0x2891000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf990be00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329261056 unmapped: 63135744 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:32.110141+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafad4d4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ec000/0x0/0x4ffc00000, data 0x270226c/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf9696f00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.512236595s of 11.812539101s, submitted: 40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:33.110314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c832c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:34.110480+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605969 data_alloc: 218103808 data_used: 10989568
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:35.110626+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329269248 unmapped: 63127552 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:36.110763+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:37.110955+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ea000/0x0/0x4ffc00000, data 0x270229f/0x2894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:38.111114+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:39.111320+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694929 data_alloc: 234881024 data_used: 23420928
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92ea000/0x0/0x4ffc00000, data 0x270229f/0x2894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:40.111475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:41.111605+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:42.111741+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:43.111892+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:44.112045+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695357 data_alloc: 234881024 data_used: 23425024
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 57622528 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:45.112194+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.835003853s of 12.251962662s, submitted: 9
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 57614336 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e92e1000/0x0/0x4ffc00000, data 0x270a29f/0x289c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:46.112343+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 57614336 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:47.112476+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 53452800 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:48.112629+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748f000/0x0/0x4ffc00000, data 0x33b529f/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:49.112848+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797179 data_alloc: 234881024 data_used: 23822336
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:50.113009+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748c000/0x0/0x4ffc00000, data 0x33b829f/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:51.113212+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:52.113370+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:53.113514+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:54.113796+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797515 data_alloc: 234881024 data_used: 23822336
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf96b5e00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf990ab40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 54018048 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:55.114079+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e748b000/0x0/0x4ffc00000, data 0x33b829f/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.133233070s of 10.094684601s, submitted: 138
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:56.114343+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8980000/0x0/0x4ffc00000, data 0x1b2021d/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8980000/0x0/0x4ffc00000, data 0x1b2021d/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:57.114543+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55dafa7bc3c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:58.114735+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:56:59.114969+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520924 data_alloc: 218103808 data_used: 10989568
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:00.115188+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 330407936 unmapped: 61988864 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d2f000/0x0/0x4ffc00000, data 0x1b201fa/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7bef4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf993de00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:01.115334+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326754304 unmapped: 65642496 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:02.115466+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9d7e000/0x0/0x4ffc00000, data 0xad2198/0xc5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7e2b2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:03.115598+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:04.115752+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:05.115926+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:06.116233+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:07.116606+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:08.116786+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:09.117053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:10.117301+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:11.117588+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:12.117757+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:13.118008+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:14.118246+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:15.118403+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:16.118556+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:17.118751+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:18.119033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:19.119288+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:20.119476+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:21.119639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:22.119868+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:23.120113+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:24.120286+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338512 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:25.120520+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7caed20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf9670f00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c84c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c84c00 session 0x55daf8959860
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c7eb40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.895999908s of 30.124755859s, submitted: 36
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:26.120674+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0xaae199/0xc3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 57589760 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:27.120834+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:28.121002+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8b16960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf7bef680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf779e780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf891ba40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:29.121211+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf967f2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415822 data_alloc: 218103808 data_used: 3338240
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:30.121358+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:31.121450+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:32.121613+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:33.121804+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:34.122000+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415822 data_alloc: 218103808 data_used: 3338240
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e958e000/0x0/0x4ffc00000, data 0x12c3199/0x1450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326877184 unmapped: 65519616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:35.122153+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.895669937s of 10.373094559s, submitted: 24
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326762496 unmapped: 65634304 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:36.122280+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf98ba1e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:37.122442+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 326770688 unmapped: 65626112 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:38.122639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:39.122798+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467698 data_alloc: 218103808 data_used: 10387456
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:40.122985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:41.123383+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:42.123526+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:43.123725+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:44.123883+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467698 data_alloc: 218103808 data_used: 10387456
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:45.124083+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:46.124251+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e956a000/0x0/0x4ffc00000, data 0x12e7199/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:47.124381+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:48.124511+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 329539584 unmapped: 62857216 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:49.124670+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.315408707s of 13.315409660s, submitted: 0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513150 data_alloc: 218103808 data_used: 10457088
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 333021184 unmapped: 59375616 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:50.124814+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57262080 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:51.125007+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 57933824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:52.125151+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1d17199/0x1ea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 57925632 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:53.125337+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 57917440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:54.125510+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554614 data_alloc: 218103808 data_used: 11251712
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 57917440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:55.125714+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b19000/0x0/0x4ffc00000, data 0x1d38199/0x1ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 57786368 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:56.126032+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a98000/0x0/0x4ffc00000, data 0x1db7199/0x1f44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:57.126173+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:58.126841+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a85000/0x0/0x4ffc00000, data 0x1dc4199/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:57:59.127592+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568152 data_alloc: 218103808 data_used: 11124736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:00.128672+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a85000/0x0/0x4ffc00000, data 0x1dc4199/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:01.129689+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.314251900s of 12.733714104s, submitted: 129
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:02.129827+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8a68000/0x0/0x4ffc00000, data 0x1de9199/0x1f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:03.130331+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:04.130634+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563040 data_alloc: 218103808 data_used: 11124736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:05.130818+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 334839808 unmapped: 57556992 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:06.131100+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 340967424 unmapped: 51429376 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55dafa80c5a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:07.131227+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 57180160 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8480000/0x0/0x4ffc00000, data 0x23d01c2/0x255e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,1])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55dafa80cd20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:08.131707+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:09.132082+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615673 data_alloc: 218103808 data_used: 11124736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:10.132333+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 57171968 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8480000/0x0/0x4ffc00000, data 0x23d01fb/0x255e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:11.132693+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:12.132839+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9ccbc00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9ccbc00 session 0x55daf96b4960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55dafad4c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:13.133003+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 57106432 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf891b0e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.596834183s of 11.591576576s, submitted: 39
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf7c825a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:14.133141+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 55009280 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafb42d400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3618680 data_alloc: 218103808 data_used: 11124736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:15.133300+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 55009280 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa838c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:16.133420+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 54050816 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:17.133555+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:18.133734+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:19.133987+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662680 data_alloc: 234881024 data_used: 17227776
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:20.134174+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72db000/0x0/0x4ffc00000, data 0x23d322e/0x2563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:21.134311+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:22.134418+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 53780480 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:23.134553+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72d4000/0x0/0x4ffc00000, data 0x23da22e/0x256a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:24.134731+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663124 data_alloc: 234881024 data_used: 17231872
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:25.135022+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:26.135222+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 53772288 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.848647118s of 12.892425537s, submitted: 9
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:27.135348+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 47800320 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:28.135479+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69ff000/0x0/0x4ffc00000, data 0x2ca822e/0x2e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 47718400 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:29.135686+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755878 data_alloc: 234881024 data_used: 19189760
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:30.135825+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:31.135994+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:32.136120+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 47693824 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69bf000/0x0/0x4ffc00000, data 0x2ce722e/0x2e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e69c6000/0x0/0x4ffc00000, data 0x2ce722e/0x2e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:33.136270+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 47677440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:34.136477+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 47677440 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa838c00 session 0x55dafa76b4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafb42d400 session 0x55daf778f2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749690 data_alloc: 234881024 data_used: 19193856
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:35.136595+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7caf4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:36.136731+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7684000/0x0/0x4ffc00000, data 0x1dfa199/0x1f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:37.136855+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:38.137055+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55dafb59c780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:39.137208+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf8a00d20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 51200000 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.477106094s of 13.116346359s, submitted: 170
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370065 data_alloc: 218103808 data_used: 3444736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:40.137366+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf990ad20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:41.137500+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:42.137633+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:43.137777+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:44.137911+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:45.138099+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:46.138314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:47.138440+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:48.138587+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:49.138781+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:50.139035+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:51.139167+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:52.139280+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:53.139437+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:54.139648+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:55.139834+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:56.139984+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:57.140167+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:58.140361+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:58:59.140532+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:00.140705+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:01.140875+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:02.141016+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:03.141172+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:04.141326+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:05.141486+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:06.141712+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:07.141906+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 55263232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:08.142097+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:09.142299+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363749 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:10.142475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:11.142630+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:12.142788+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 55255040 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:13.142966+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 55246848 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:14.144841+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 55246848 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa82e400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa82e400 session 0x55daf98d65a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778fa40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a48780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf98bbe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.142990112s of 35.221427917s, submitted: 16
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365573 data_alloc: 218103808 data_used: 3334144
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c04000/0x0/0x4ffc00000, data 0xaae189/0xc3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:15.145499+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 50593792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55dafad4de00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa838c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa838c00 session 0x55dafb59d680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98f92c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55daf7cbbe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9d27c00 session 0x55daf96710e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:16.145709+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:17.145869+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:18.146046+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:19.146276+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa712000 session 0x55daf8adf4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444444 data_alloc: 218103808 data_used: 3338240
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7139c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7139c00 session 0x55daf8a48960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:20.146420+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 54263808 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e830e000/0x0/0x4ffc00000, data 0x13a21fb/0x1530000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7e2b680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55daf9c78400 session 0x55dafa7bd2c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:21.146556+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 54247424 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:22.146681+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 54247424 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:23.146999+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:24.147231+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509059 data_alloc: 218103808 data_used: 11505664
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:25.147424+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82e8000/0x0/0x4ffc00000, data 0x13c622e/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:26.147554+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:27.147680+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:28.147798+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:29.147993+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509059 data_alloc: 218103808 data_used: 11505664
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:30.148181+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:31.148348+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82e8000/0x0/0x4ffc00000, data 0x13c622e/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:32.148504+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:33.148669+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 54239232 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.093605042s of 18.734363556s, submitted: 50
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:34.148798+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 49872896 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610453 data_alloc: 234881024 data_used: 13078528
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:35.149052+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 49856512 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:36.149181+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 49717248 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7783000/0x0/0x4ffc00000, data 0x1f2422e/0x20b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:37.149330+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 49627136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:38.149514+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 49627136 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:39.149674+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615591 data_alloc: 234881024 data_used: 12984320
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:40.149977+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:41.150281+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:42.150669+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:43.150974+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7768000/0x0/0x4ffc00000, data 0x1f3e22e/0x20ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:44.151285+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615591 data_alloc: 234881024 data_used: 12984320
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.009923935s of 11.624964714s, submitted: 145
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:45.151472+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:46.151625+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 49618944 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:47.151884+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:48.152041+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:49.152265+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609847 data_alloc: 234881024 data_used: 12988416
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:50.152638+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:51.152871+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:52.153073+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:53.153299+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:54.153511+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609847 data_alloc: 234881024 data_used: 12988416
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:55.153754+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:56.154237+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 49610752 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e776d000/0x0/0x4ffc00000, data 0x1f4122e/0x20d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:57.154420+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.786358833s of 12.150555611s, submitted: 2
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:58.154566+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T14:59:59.154725+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa841400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 ms_handle_reset con 0x55dafa841400 session 0x55dafa76a1e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa724400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613387 data_alloc: 234881024 data_used: 12976128
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:00.154912+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 49569792 heap: 392396800 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa724400 session 0x55daf7e050e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa83c000 session 0x55daf778e1e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55daf9cff400 session 0x55dafa7bd4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:01.155072+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55dafa83c000 session 0x55dafb3c43c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 38264832 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e7722000/0x0/0x4ffc00000, data 0x1f89dab/0x211b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:02.155218+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 38264832 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778ef00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:03.155382+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 38223872 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:04.155559+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55daf86a2000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 44589056 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa724400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa724400 session 0x55daf98efe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713958 data_alloc: 234881024 data_used: 24891392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:05.155705+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55dafad4c960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55dafb59da40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 44572672 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:06.155983+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 44572672 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:07.156213+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e7009000/0x0/0x4ffc00000, data 0x26a0515/0x2834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 44564480 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:08.156409+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:09.156616+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e7009000/0x0/0x4ffc00000, data 0x26a0515/0x2834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713958 data_alloc: 234881024 data_used: 24891392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:10.156765+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 44556288 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9cff400 session 0x55daf779ef00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa83c000 session 0x55daf96d52c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa841400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa841400 session 0x55dafa7bdc20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55daf8ade5a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.859335899s of 13.802700996s, submitted: 67
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:11.156901+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9cff400 session 0x55dafa76a960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf9c78400 session 0x55dafb3c5c20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55dafa83c000 session 0x55dafa7bc000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c9a000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c9a000 session 0x55daf8b17c20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 ms_handle_reset con 0x55daf7c21c00 session 0x55daf96b5860
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:12.157147+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:13.157366+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 44531712 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:14.157608+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 44523520 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3719076 data_alloc: 234881024 data_used: 24899584
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:15.157778+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 44523520 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:16.158241+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9c78400 session 0x55daf9671a40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:17.158370+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4cd20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:18.158515+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55dafa83c000 session 0x55daf779e5a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c7cc00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9c7cc00 session 0x55daf89585a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e7005000/0x0/0x4ffc00000, data 0x26a1f87/0x2838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:19.158830+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721416 data_alloc: 234881024 data_used: 24911872
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:20.159005+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 44515328 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:21.159136+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 43425792 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:22.159304+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 43417600 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:23.159441+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 43417600 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:24.159587+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750536 data_alloc: 234881024 data_used: 28971008
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:25.159721+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:26.159860+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:27.160016+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6fe2000/0x0/0x4ffc00000, data 0x26c5f87/0x285c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:28.160157+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.528598785s of 17.588058472s, submitted: 18
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:29.160347+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3751224 data_alloc: 234881024 data_used: 28958720
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:30.160544+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 43409408 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:31.160834+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 40017920 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:32.160985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:33.161108+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:34.161990+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:35.162168+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:36.162319+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:37.162504+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:38.162675+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:39.163030+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:40.163152+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:41.163282+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:42.163438+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:43.163635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:44.163791+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:45.164021+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:46.164167+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:47.164310+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 39968768 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:48.164465+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:49.164598+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:50.164785+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:51.164956+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:52.165106+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:53.165248+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:54.165374+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:55.165682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811342 data_alloc: 234881024 data_used: 29794304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 39960576 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:56.165826+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355876864 unmapped: 39952384 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:57.165989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:58.166132+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:00:59.166269+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55daf9cff400 session 0x55daf98d6d20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:00.166396+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3811982 data_alloc: 234881024 data_used: 29810688
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e6aca000/0x0/0x4ffc00000, data 0x2bddf87/0x2d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 39944192 heap: 395829248 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:01.166565+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 ms_handle_reset con 0x55dafa83c000 session 0x55daf86a3860
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.498996735s of 32.592540741s, submitted: 9
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55dafa839c00 session 0x55dafb59da40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55daf9d05800 session 0x55daf7c82960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa831c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 368164864 unmapped: 31866880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 289 ms_handle_reset con 0x55dafa831c00 session 0x55dafa7bc3c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:02.166713+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 290 ms_handle_reset con 0x55daf9cff400 session 0x55daf779ef00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362274816 unmapped: 37756928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:03.167109+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 290 ms_handle_reset con 0x55daf9d05800 session 0x55daf96b5e00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362291200 unmapped: 37740544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:04.167270+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55dafa839c00 session 0x55daf96b4780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55dafa83c000 session 0x55daf96b5a40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55db01f0f000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 ms_handle_reset con 0x55db01f0f000 session 0x55dafa76be00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 362201088 unmapped: 37830656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:05.167437+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3986275 data_alloc: 251658240 data_used: 38064128
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e5890000/0x0/0x4ffc00000, data 0x3e0f27e/0x3faa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4da40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 40017920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:06.167573+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 40017920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7cba000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9c78400 session 0x55dafb3c5c20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:07.167690+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 ms_handle_reset con 0x55daf9d05800 session 0x55dafa76a960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6891000/0x0/0x4ffc00000, data 0x2e10e5b/0x2fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:08.167831+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffb000/0x0/0x4ffc00000, data 0x26a8e4c/0x2843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:09.167996+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffb000/0x0/0x4ffc00000, data 0x26a8e4c/0x2843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:10.168126+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782951 data_alloc: 251658240 data_used: 32612352
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa839c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 40001536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:11.168316+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.342339516s of 10.014770508s, submitted: 57
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55dafa839c00 session 0x55daf96b43c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 39944192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:12.168439+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 39944192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8adfa40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 293 ms_handle_reset con 0x55dafa712000 session 0x55daf891bc20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:13.168649+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e7750000/0x0/0x4ffc00000, data 0x1f51a39/0x20ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 294 ms_handle_reset con 0x55daf7c21c00 session 0x55daf778ed20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:14.168852+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:15.169087+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429469 data_alloc: 218103808 data_used: 3379200
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:16.169245+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:17.169412+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8ac4000/0x0/0x4ffc00000, data 0xabf44b/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:18.169585+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8ac4000/0x0/0x4ffc00000, data 0xabf44b/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:19.169779+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:20.170053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429789 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:21.170244+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431297302s of 10.757673264s, submitted: 93
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:22.170403+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:23.170545+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:24.170744+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:25.171009+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:26.171159+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:27.171318+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:28.171469+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:29.171704+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:30.171878+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:31.172024+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:32.172156+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:33.172338+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:34.172496+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:35.173044+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:36.173197+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:37.173376+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:38.173502+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:39.173769+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:40.173893+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:41.173995+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:42.174157+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:43.174438+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2832 writes, 12K keys, 2832 commit groups, 1.0 writes per commit group, ingest: 13.48 MB, 0.02 MB/s
                                           Interval WAL: 2832 writes, 1026 syncs, 2.76 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:44.174608+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets getting new tickets!
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.174877+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _finish_auth 0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:45.175697+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:46.175090+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 51396608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:47.175314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:48.175502+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:49.175683+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:50.175858+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: mgrc ms_handle_reset ms_handle_reset con 0x55daf9d24400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct 07 15:26:04 compute-0 ceph-osd[88039]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: get_auth_request con 0x55dafa839c00 auth_method 0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: mgrc handle_mgr_configure stats_period=5
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:51.176129+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:52.176891+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:53.177033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa831800 session 0x55daf98d61e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9c78400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:54.177249+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:55.177418+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432411 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:56.177585+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:57.177785+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:58.178092+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:01:59.178388+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf8adf4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf7cbbe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf98f92c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55dafb59d680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be2000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.825901031s of 37.835342407s, submitted: 11
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 51388416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:00.178537+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533537 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf98bbe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8a00d20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa712000 session 0x55daf98bbe00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafa712000 session 0x55dafb59d680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98f92c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:01.178676+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf8adf4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d05800 session 0x55daf98d61e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:02.179034+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf8b172c0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9d27c00 session 0x55daf778ed20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 50847744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:03.179280+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 50839552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:04.179419+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:05.179628+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582763 data_alloc: 218103808 data_used: 12410880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e81ca000/0x0/0x4ffc00000, data 0x14d8ebe/0x1674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:06.179792+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf7c21c00 session 0x55daf891bc20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55daf9cff400 session 0x55daf7cbab40
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:07.179992+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:08.180137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:09.180491+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:10.180667+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:11.180833+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:12.181121+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:13.181383+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:14.181735+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:15.182161+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:16.184296+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:17.184504+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:18.185415+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:19.185653+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:20.186161+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:21.186641+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:22.187048+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:23.187407+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:24.187552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:25.187702+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:26.187878+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:27.188053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:28.188418+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:29.188709+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:30.189042+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:31.189266+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:32.189436+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:33.189581+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:34.189823+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:35.190084+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:36.190320+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:37.190568+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:38.190850+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:39.191093+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:40.191302+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343285760 unmapped: 56745984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:41.191486+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:42.191679+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:43.191874+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:44.192009+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:45.192260+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:46.192410+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:47.192647+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:48.192784+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343293952 unmapped: 56737792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:49.193296+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343302144 unmapped: 56729600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 ms_handle_reset con 0x55dafd972800 session 0x55daf8adf0e0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d05800
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:50.193457+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343302144 unmapped: 56729600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:51.193654+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:52.193851+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:53.194048+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:54.194212+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:55.194399+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:56.194572+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343310336 unmapped: 56721408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:57.194768+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:58.195358+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:02:59.195606+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.796035767s of 59.319786072s, submitted: 35
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:00.195809+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343318528 unmapped: 56713216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8be3000/0x0/0x4ffc00000, data 0xac0eae/0xc5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438371 data_alloc: 218103808 data_used: 3387392
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:01.196016+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343367680 unmapped: 56664064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:02.196240+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:03.196382+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 296 ms_handle_reset con 0x55dafa712000 session 0x55daf96dd680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:04.196539+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:05.196673+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441832 data_alloc: 218103808 data_used: 3395584
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:06.196881+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:07.197059+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 56639488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:08.197246+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:09.197451+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:10.197582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441832 data_alloc: 218103808 data_used: 3395584
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:11.197758+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:12.198072+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 56631296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:13.198382+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:14.198510+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:15.198680+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343408640 unmapped: 56623104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.962801933s of 16.419515610s, submitted: 120
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87d0000/0x0/0x4ffc00000, data 0xac2a6f/0xc5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444806 data_alloc: 218103808 data_used: 3395584
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:16.198822+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:17.198989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:18.199129+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:19.199391+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 56614912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:20.199538+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:21.199731+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:22.199887+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:23.200094+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:24.200254+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:25.200460+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:26.200663+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:27.200882+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 56606720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:28.201047+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:29.201244+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:30.201409+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:31.201606+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 56598528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:32.201740+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:33.201983+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:34.202357+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:35.202536+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:36.202734+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:37.203544+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:38.203701+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343441408 unmapped: 56590336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:39.204096+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:40.204288+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:41.204479+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:42.204686+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:43.204896+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343449600 unmapped: 56582144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:44.205183+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:45.205444+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:46.205682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:47.205921+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:48.206166+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:49.206415+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:50.206548+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:51.206747+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 56573952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:52.206999+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:53.207178+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:54.207352+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:55.207507+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 56565760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:56.207668+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:57.207818+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:58.207966+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:03:59.208134+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:00.208273+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:01.208402+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:02.208506+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:03.208637+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:04.208801+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:05.208941+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:06.209078+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 56557568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:07.209201+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 56549376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:08.209350+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:09.209558+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:10.209711+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:11.209835+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:12.210008+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:13.210159+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:14.210389+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:15.210650+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 56541184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:16.210858+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:17.211045+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:18.211222+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:19.211475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:20.211664+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:21.212063+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:22.212231+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:23.212441+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 56532992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:24.212573+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:25.212793+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:26.212986+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:27.213251+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:28.213538+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:29.213723+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:30.213995+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:31.214196+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 56524800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:32.214343+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:33.214641+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:34.214801+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 56516608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:35.215214+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:36.215694+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:37.216038+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:38.216272+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:39.216552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:40.216923+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343523328 unmapped: 56508416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:41.217294+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:42.217464+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:43.217764+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:44.218002+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:45.218219+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:46.218391+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:47.218621+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 56500224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:48.218815+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:49.219087+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:50.219214+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:51.219359+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:52.219550+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:53.219774+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:54.220038+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 56492032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:55.220234+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:56.220400+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444966 data_alloc: 218103808 data_used: 3399680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:57.220622+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 56483840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 ms_handle_reset con 0x55dafa83c000 session 0x55daf8b16960
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:58.220778+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:04:59.220985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:00.221161+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:01.221390+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456326 data_alloc: 218103808 data_used: 8052736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:02.221572+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:03.221765+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 55230464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:04.222049+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 55222272 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:05.222237+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 55222272 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:06.222398+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456326 data_alloc: 218103808 data_used: 8052736
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.946731567s of 110.961837769s, submitted: 13
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e87cd000/0x0/0x4ffc00000, data 0xac44d2/0xc60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 298 ms_handle_reset con 0x55dafa83c000 session 0x55daf8a01680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:07.222661+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:08.222885+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c3a000/0x0/0x4ffc00000, data 0x6560a3/0x7f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:09.223108+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 55189504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:10.223265+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344850432 unmapped: 55181312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 299 ms_handle_reset con 0x55daf7c21c00 session 0x55daf98c4f00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:11.223398+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377681 data_alloc: 218103808 data_used: 1114112
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344850432 unmapped: 55181312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:12.223587+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:13.223720+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:14.223856+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:15.224020+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:16.224168+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381855 data_alloc: 218103808 data_used: 1122304
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:17.224401+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:18.224582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:19.224779+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 55164928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:20.225007+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e90a6000/0x0/0x4ffc00000, data 0x1e96c0/0x387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 55156736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:21.225177+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382175 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 55156736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.401482582s of 15.571696281s, submitted: 51
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:22.225370+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 55140352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:23.225513+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 301 ms_handle_reset con 0x55daf9cff400 session 0x55dafad4d680
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 55140352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:24.225712+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e90a2000/0x0/0x4ffc00000, data 0x1eb234/0x38b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 301 handle_osd_map epochs [302,302], i have 302, src has [1,302]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:25.225884+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:26.226057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:27.226294+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 55115776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:28.226483+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:29.226723+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:30.227207+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:31.228032+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:32.228794+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:33.229257+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:34.229808+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:35.230004+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344924160 unmapped: 55107584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:36.230435+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:37.230717+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:38.231024+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:39.231534+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 55099392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:40.231708+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:41.232163+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:42.232338+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:43.232562+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 55091200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:44.232790+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 55083008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:45.233011+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 55083008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:46.233204+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389751 data_alloc: 218103808 data_used: 1130496
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 55074816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:47.233545+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 55074816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:48.233734+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:49.234021+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:50.234219+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:51.234471+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:52.234737+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 55066624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:53.235072+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:54.235336+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:55.235506+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:56.235781+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:57.236028+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:58.236206+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:05:59.236475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344989696 unmapped: 55042048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:00.236677+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 344997888 unmapped: 55033856 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:01.236908+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:02.237154+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:03.237314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:04.237516+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:05.237782+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:06.238015+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:07.238156+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:08.238402+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 55025664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:09.239245+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:10.239387+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:11.239536+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:12.239796+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 55017472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:13.240025+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:14.240167+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:15.240375+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:16.240501+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 55009280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:17.240639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:18.240761+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:19.240984+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:20.241114+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:21.241255+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:22.241423+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:23.241557+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:24.241676+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345047040 unmapped: 54984704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:25.241836+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:26.242044+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:27.242249+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 54976512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:28.242428+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:29.242660+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:30.242821+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:31.243022+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:32.243191+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 54968320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:33.243349+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345071616 unmapped: 54960128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:34.243589+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:35.243811+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:36.244192+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:37.244427+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:38.244661+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:39.244885+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:40.245110+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 54951936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:41.245307+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345088000 unmapped: 54943744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:42.245553+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 54935552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:43.245804+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 54935552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:44.246020+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:45.246184+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:46.246351+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:47.246544+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:48.246835+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 54927360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:49.247172+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:50.247491+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:51.247712+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:52.247905+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:53.248188+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:54.248413+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 54919168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:55.248633+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:56.248857+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:57.249026+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:58.249185+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:06:59.249367+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:00.249570+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:01.249713+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:02.249917+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 54910976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:03.250244+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 54902784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:04.250435+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345137152 unmapped: 54894592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:05.250579+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345137152 unmapped: 54894592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:06.250804+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 54886400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389911 data_alloc: 218103808 data_used: 1134592
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e909f000/0x0/0x4ffc00000, data 0x1ecdb1/0x38e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:07.251028+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 54878208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 105.908264160s of 105.934684753s, submitted: 15
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:08.251178+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 46489600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 303 ms_handle_reset con 0x55daf9d27c00 session 0x55dafa76b4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:09.251359+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 54886400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:10.251546+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 54870016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 ms_handle_reset con 0x55dafa712000 session 0x55daf7cae000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c2a000/0x0/0x4ffc00000, data 0x165e961/0x1803000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:11.251735+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345169920 unmapped: 54861824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:12.251914+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:13.252383+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:14.252599+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:15.252746+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:16.253017+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:17.253173+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:18.253331+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 54853632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:19.253547+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:20.253751+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:21.253989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:22.254191+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:23.254368+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:24.254598+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:25.254767+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:26.254923+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:27.255167+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 54845440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:28.255350+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:29.255551+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:30.255722+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 54837248 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:31.255894+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:32.256057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:33.256252+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:34.256430+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:35.256619+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 54829056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7c25000/0x0/0x4ffc00000, data 0x1660501/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:36.256802+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 54820864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544847 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:37.257057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 54812672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa712000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:38.257211+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 54812672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.637914658s of 30.856452942s, submitted: 28
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 ms_handle_reset con 0x55dafa712000 session 0x55daf8ade780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:39.257410+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:40.257621+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:41.257795+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545753 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:42.258005+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:43.258152+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 54763520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:44.258328+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:45.258592+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:46.258785+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545753 data_alloc: 218103808 data_used: 1142784
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:47.258979+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:48.259201+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 54755328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e7c24000/0x0/0x4ffc00000, data 0x1661fc1/0x1809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 305 handle_osd_map epochs [306,306], i have 306, src has [1,306]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:49.259428+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:50.259624+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:51.259783+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 54730752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:52.260006+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:53.260179+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:54.260344+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:55.260534+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:56.260772+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:57.260991+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:58.261177+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 54722560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:07:59.261436+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 54714368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:00.261642+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:01.261798+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:02.262056+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:03.262174+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:04.262352+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:05.262513+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:06.262648+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:07.262810+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 54706176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:08.262995+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 54697984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:09.263216+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 54697984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:10.263397+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 54689792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:11.263600+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 54689792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:12.263833+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:13.264039+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:14.264290+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:15.264507+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:16.264759+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 54681600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:17.265033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:18.265279+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:19.265625+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:20.265843+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:21.266613+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:22.266875+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:23.267099+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 54673408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:24.267332+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:25.267569+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:26.267750+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 54665216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:27.268047+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:28.268352+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:29.268582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:30.268808+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:31.269032+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345374720 unmapped: 54657024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:32.269188+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:33.269360+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:34.269483+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 54648832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:35.269636+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:36.269805+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:37.269975+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:38.270097+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:39.270292+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 54640640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:40.270446+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:41.270646+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:42.270883+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:43.271173+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 54632448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:44.271662+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:45.271981+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:46.272596+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 54624256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:47.272805+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 54616064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:48.273037+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:49.273286+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:50.273483+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:51.273794+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:52.274010+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:53.274170+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:54.274490+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:55.274787+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 54607872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:56.275090+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:57.275297+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:58.275556+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:08:59.276145+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:00.276296+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:01.276520+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:02.276724+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 54599680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:03.277035+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:04.277325+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:05.277617+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 54591488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:06.277817+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:07.278078+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:08.278264+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:09.278549+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:10.278750+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 54583296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:11.279058+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 54575104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:12.279274+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 54566912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:13.279518+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345473024 unmapped: 54558720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:14.279782+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:15.280048+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:16.280268+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:17.280429+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:18.280596+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:19.280825+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 54550528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:20.281027+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:21.281199+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:22.281371+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:23.281570+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 54542336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:24.281721+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:25.281859+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:26.282038+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:27.282229+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 54534144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:28.282378+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:29.282592+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:30.282770+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:31.282887+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:32.283002+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:33.283101+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:34.283263+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:35.283393+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 54525952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:36.283541+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 54517760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:37.283737+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:38.283962+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:39.284195+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:40.284362+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:41.284517+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:42.284728+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:43.285087+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 54493184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:44.285312+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 54484992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:45.285626+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:46.285866+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:47.286041+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:48.286173+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:49.286380+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:50.286571+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:51.286766+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 54476800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:52.287006+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:53.287169+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:54.287345+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:55.287559+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:56.287776+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:57.288033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:58.288209+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:09:59.288412+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 54468608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:00.288634+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:01.288874+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:02.289010+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:03.289217+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:04.289364+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:05.289520+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:06.289690+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:07.289842+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 54452224 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:08.289996+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 54444032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:09.290171+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 54444032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:10.290369+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:11.290532+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:12.290681+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:13.290818+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:14.291026+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:15.291180+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 54427648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:16.291362+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:17.291516+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:18.291639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:19.291831+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:20.292810+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:21.294241+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:22.294461+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:23.295375+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:24.296097+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54411264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:25.296755+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:26.297437+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:27.297851+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:28.298570+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:29.298816+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:30.299056+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:31.299462+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:32.299709+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 54403072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:33.300024+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:34.300215+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:35.300393+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 54394880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:36.300725+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:37.300976+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:38.301221+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:39.301461+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:40.301641+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 54378496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:41.301770+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:42.302048+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:43.302363+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:44.302521+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:45.302781+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:46.303011+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 54362112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:47.303240+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 54353920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:48.303539+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54345728 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:49.303738+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54345728 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:50.303985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:51.304135+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:52.304515+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:53.304686+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:54.305054+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:55.305225+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:56.305371+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54337536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:57.305515+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:58.305655+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:10:59.305986+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:00.306114+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:01.306300+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:02.306450+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:03.306594+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 54321152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:04.306729+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:05.306869+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:06.307021+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:07.307186+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:08.307370+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:09.307609+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:10.307768+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:11.307920+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 54304768 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:12.308062+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:13.308186+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:14.308392+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:15.308567+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:16.308811+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:17.309109+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:18.309366+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:19.309635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 54296576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:20.309832+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:21.309968+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:22.310155+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:23.310385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:24.310560+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 54280192 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:25.310745+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:26.310984+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:27.311351+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 54272000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:28.311635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:29.311978+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:30.312179+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:31.312386+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:32.312579+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:33.312853+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:34.313103+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:35.313318+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 54263808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:36.313542+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:37.313776+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:38.314068+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:39.314328+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:40.314527+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:41.314712+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:42.314865+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:43.315020+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 54239232 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 515 writes, 1459 keys, 515 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s
                                           Interval WAL: 515 writes, 229 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:44.315209+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:45.315424+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:46.315632+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:47.315800+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:48.316043+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:49.316221+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:50.316422+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:51.316635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 54190080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:52.316911+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:53.317179+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:54.317382+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 54173696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:55.317549+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 54165504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:56.317760+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 54165504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:57.317978+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:58.318100+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:11:59.318250+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 54157312 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:00.318413+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:01.318551+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:02.318692+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:03.318871+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:04.318988+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:05.319096+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:06.319265+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:07.319447+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 54149120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:08.319635+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:09.319831+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:10.320027+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:11.320196+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 54132736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:12.320353+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:13.320478+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:14.320682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:15.320959+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 54116352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:16.321159+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:17.321324+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:18.321466+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:19.321646+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:20.321841+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:21.322000+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:22.322197+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:23.322359+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54108160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:24.322582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:25.322734+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:26.322970+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:27.323154+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:28.323356+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:29.323586+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:30.323785+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:31.323978+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 54091776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:32.324144+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:33.324306+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:34.324463+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 54083584 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:35.324597+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:36.324810+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:37.324961+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:38.325152+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:39.325335+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 54075392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:40.325524+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:41.325692+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:42.325884+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:43.326053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 54059008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:44.326266+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:45.326791+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:46.326926+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:47.327145+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:48.327287+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:49.327562+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 54050816 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:50.327717+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:51.328001+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:52.328238+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:53.328405+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:54.328576+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:55.328770+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 54042624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:56.328909+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:57.329099+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548887 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:58.329317+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 54018048 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:12:59.329518+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 320.363372803s of 320.457611084s, submitted: 24
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 53985280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:00.329651+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 53977088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:01.329804+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 53960704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:02.331364+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:03.331825+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:04.332275+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:05.332753+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:06.333679+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:07.333888+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:08.334581+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:09.334771+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:10.334991+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:11.335395+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:12.335592+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:13.335754+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 53944320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:14.335962+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:15.336112+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:16.336501+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:17.336755+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:18.337029+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:19.337563+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 53936128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:20.337876+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:21.338259+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:22.338582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:23.338793+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:24.339151+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:25.339444+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:26.339714+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:27.340009+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 53927936 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:28.340170+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:29.340446+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:30.340669+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:31.340882+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:32.341084+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:33.341308+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:34.341478+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 53919744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:35.341719+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 53911552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:36.341918+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 53911552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:37.342165+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:38.342333+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:39.342500+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:40.342681+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:41.342863+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:42.343037+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:43.343166+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 53903360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:44.343347+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:45.343571+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:46.343745+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:47.343920+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:48.344128+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:49.344402+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:50.344614+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:51.344755+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 53895168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:52.345029+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 53886976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:53.345274+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 53886976 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:54.345469+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:55.345612+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:56.345870+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:57.346059+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:58.346197+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:13:59.346360+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346152960 unmapped: 53878784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:00.346488+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346161152 unmapped: 53870592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:01.346642+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:02.346873+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:03.347055+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:04.347204+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:05.347374+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 53862400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:06.347552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 53854208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:07.347727+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 53854208 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:08.347920+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:09.348290+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:10.348472+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:11.348639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:12.348801+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:13.349010+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:14.349160+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:15.349305+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:16.349532+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346185728 unmapped: 53846016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:17.349679+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:18.349891+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:19.350153+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:20.350395+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 53837824 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:21.350883+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:22.351001+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:23.351219+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346202112 unmapped: 53829632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:24.351468+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346210304 unmapped: 53821440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:25.351665+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:26.351907+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:27.352121+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:28.352293+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:29.352488+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:30.352718+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 53805056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:31.352908+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:32.353111+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:33.353273+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 53796864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x1663a24/0x180c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548007 data_alloc: 218103808 data_used: 1146880
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:34.353440+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:35.353602+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:36.353744+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 97.175842285s of 97.740692139s, submitted: 106
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:37.353910+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 53788672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 307 ms_handle_reset con 0x55daf7c21c00 session 0x55daf7c83c20
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e9c5e000/0x0/0x4ffc00000, data 0x6655f5/0x80f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:38.354105+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444437 data_alloc: 218103808 data_used: 1155072
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:39.354304+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9cff400
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:40.354529+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 53755904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 308 ms_handle_reset con 0x55daf9cff400 session 0x55daf7cae5a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:41.354735+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 53731328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:42.355182+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 53731328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:43.355380+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 309 heartbeat osd_stat(store_statfs(0x4ea0c9000/0x0/0x4ffc00000, data 0x1f8bef/0x3a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 53714944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419230 data_alloc: 218103808 data_used: 1155072
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:44.355561+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 53714944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf9d27c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:45.355756+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 355753984 unmapped: 44277760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 309 heartbeat osd_stat(store_statfs(0x4ea0c9000/0x0/0x4ffc00000, data 0x1f8bef/0x3a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:46.356004+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 52666368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.452994347s of 10.191070557s, submitted: 75
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:47.356156+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 310 ms_handle_reset con 0x55daf9d27c00 session 0x55dafb59d4a0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 52658176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:48.356324+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476524 data_alloc: 218103808 data_used: 1163264
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:49.356572+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e98c8000/0x0/0x4ffc00000, data 0x9fa788/0xba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:50.356772+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:51.356905+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:52.357053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347381760 unmapped: 52649984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _renew_subs
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:53.357217+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 52641792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:54.357366+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 52633600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:55.357514+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 52633600 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:56.357695+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:57.357806+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:58.358057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:14:59.358312+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347406336 unmapped: 52625408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:00.358528+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:01.358726+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:02.358873+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:03.359039+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347414528 unmapped: 52617216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:04.359202+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:05.359418+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:06.359557+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:07.359686+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:08.359907+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:09.360148+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:10.360457+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:11.360778+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 52609024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:12.361186+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:13.361365+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:14.361762+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:15.362130+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:16.362460+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:17.362744+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:18.363004+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:19.363315+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347439104 unmapped: 52592640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:20.363563+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 52576256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:21.363725+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 52576256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:22.364200+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:23.364381+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:24.364517+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:25.364664+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:26.364877+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:27.365010+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 52568064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:28.365170+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:29.365514+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:30.365756+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:31.365945+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:32.366144+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:33.366455+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:34.366673+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:35.367006+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 52559872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x9fc1eb/0xba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:36.367256+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:37.367415+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:38.367548+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 52543488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479658 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.455036163s of 51.899494171s, submitted: 11
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:39.367799+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 51478528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:40.367989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 49381376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:41.368177+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 49381376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 312 ms_handle_reset con 0x55dafa83c000 session 0x55dafb59c780
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8f23000/0x0/0x4ffc00000, data 0x1fddbc/0x3ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:42.368379+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 50634752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:43.368539+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 50634752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428176 data_alloc: 218103808 data_used: 1167360
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:44.368684+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:45.368888+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8f23000/0x0/0x4ffc00000, data 0x1fddbc/0x3ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:46.369054+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:47.369224+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:48.369379+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432350 data_alloc: 218103808 data_used: 1175552
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:49.369596+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:50.369774+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:51.369921+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:52.370088+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:53.370271+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:54.370477+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:55.370680+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:56.370857+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:57.371006+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:58.371136+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:15:59.371341+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:00.371541+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:01.371690+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:02.371872+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:03.372012+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:04.372183+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:05.372376+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:06.372577+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:07.372795+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:08.373041+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 50552832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:09.373242+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 50552832 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:10.373439+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:11.373632+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:12.373877+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:13.374079+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:14.374288+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:15.374427+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:16.374550+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:17.374683+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:18.374821+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:19.374975+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:20.375114+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:21.375409+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:22.375537+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:23.375643+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:24.375813+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:25.376022+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:26.376201+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:27.376372+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:28.376550+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:29.376766+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:30.376888+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:31.377018+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:32.377191+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:33.377364+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:34.377500+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:35.377644+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:36.377768+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:37.377904+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:38.377985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:39.378175+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:40.378546+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:41.378723+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 50470912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:42.378996+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 50470912 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:43.379170+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:44.379574+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:45.379801+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:46.380098+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:47.380236+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:48.380583+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:49.380757+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:50.380986+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:51.381137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:52.381265+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:53.381423+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 ms_handle_reset con 0x55daf9c78400 session 0x55daf8a01e00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55dafa83c000
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:54.381558+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:55.381693+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:56.381829+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:57.381983+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:58.382107+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:16:59.382253+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:00.382373+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:01.382510+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:02.382617+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:03.382799+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:04.382953+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 50413568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:05.383105+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 50372608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:06.383256+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:07.383364+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:08.383493+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 50503680 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf dump' '{prefix=perf dump}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:09.383645+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf schema' '{prefix=perf schema}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 50733056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:10.383795+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 50733056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:11.383918+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 50733056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:12.384073+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:13.384183+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:14.384358+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:15.384498+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:16.384610+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:17.384724+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 50716672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:18.384878+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 50708480 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:19.385324+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 50708480 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:20.385476+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 50708480 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:21.385605+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 50708480 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:22.385722+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:23.385851+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:24.386002+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:25.386143+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:26.386263+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:27.386385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 50700288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:28.386531+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 50692096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:29.386685+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 50692096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:30.386819+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 50683904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:31.387073+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 50675712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:32.387216+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 50675712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:33.387342+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 50675712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:34.387468+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 50675712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:35.387607+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 50675712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:36.387748+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:37.387966+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:38.388151+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:39.388325+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:40.388487+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:41.388683+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:42.388866+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:43.389053+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 50667520 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:44.389203+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 50659328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:45.389360+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 50651136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:46.389500+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 50651136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:47.389647+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 50651136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:48.389782+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 50651136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:49.390201+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 ms_handle_reset con 0x55daf9d05800 session 0x55daf993cf00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: handle_auth_request added challenge on 0x55daf7c21c00
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 50651136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:50.390344+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 50642944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:51.390478+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 50634752 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:52.390612+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:53.390736+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:54.390884+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:55.391049+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:56.391261+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:57.391450+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:58.391567+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:17:59.391756+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 50626560 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:00.391989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:01.392176+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:02.392357+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:03.392493+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:04.392692+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:05.392910+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:06.393118+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 50618368 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:07.393285+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 50610176 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:08.393528+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:09.393801+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:10.394057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:11.394289+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349429760 unmapped: 50601984 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:12.394493+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 50593792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:13.394694+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 50593792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:14.395056+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 50593792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:15.395214+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 50593792 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:16.395361+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:17.395609+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:18.395805+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 50577408 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:19.396009+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 50569216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:20.396141+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 50569216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:21.396308+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 50569216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:22.396437+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 50569216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:23.396545+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:24.396711+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 50569216 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:25.396969+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:26.397134+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:27.397275+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:28.397433+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:29.397720+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:30.397881+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:31.398059+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:32.398226+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 50561024 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:33.398405+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:34.398533+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:35.398682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:36.398823+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:37.398997+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:38.399141+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:39.399319+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:40.399445+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 50544640 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:41.399636+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 50536448 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:42.399811+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:43.399996+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:44.400185+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 50528256 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:45.400385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 50520064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:46.400601+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 50520064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:47.400785+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 50520064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:48.400955+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 50520064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:49.401154+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 50520064 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:50.401319+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:51.401463+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:52.401620+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:53.401888+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:54.402038+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:55.402200+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 50511872 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:56.402360+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:57.402574+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:58.402721+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:18:59.402905+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:00.403087+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:01.403274+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:02.403482+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:03.403664+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 50495488 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:04.403871+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 07 15:26:04 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121166231' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:05.404086+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:06.404302+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:07.404459+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 50487296 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:08.404613+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:09.404836+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:10.405035+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:11.405225+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 50479104 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:12.405399+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:13.405529+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:14.405764+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 50462720 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:15.406060+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:16.406215+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:17.406398+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:18.406584+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:19.406793+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 50454528 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:20.406965+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 50446336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:21.407142+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 50446336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:22.407272+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 50446336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:23.407415+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 50446336 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:24.407667+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:25.407853+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:26.408104+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:27.408307+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:28.408469+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:29.408672+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 50438144 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:30.409021+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:31.409223+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:32.409419+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:33.409610+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 50429952 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:34.409736+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349609984 unmapped: 50421760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:35.409896+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349609984 unmapped: 50421760 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:36.410232+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:37.410388+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:38.410620+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:39.410877+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:40.411106+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:41.411312+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:42.411491+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:43.411698+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 50405376 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:44.411820+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 50397184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:45.411993+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 50397184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:46.412153+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 50397184 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:47.412331+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 50388992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:48.412727+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 50388992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:49.413069+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 50388992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:50.413262+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 50388992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:51.413428+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 50388992 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:52.413582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 50380800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:53.413812+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 50380800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:54.414033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 50380800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:55.414228+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 50380800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:56.414425+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 50380800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:57.414572+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 50372608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:58.414741+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 50372608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:19:59.414985+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 50372608 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:00.415210+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:01.415763+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:02.415994+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:03.416163+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:04.416311+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:05.416470+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:06.416683+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 50364416 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:07.416857+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349683712 unmapped: 50348032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:08.417050+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349683712 unmapped: 50348032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:09.417352+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349683712 unmapped: 50348032 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:10.417511+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349691904 unmapped: 50339840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:11.417666+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349691904 unmapped: 50339840 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:12.417811+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349700096 unmapped: 50331648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:13.418061+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349700096 unmapped: 50331648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:14.418217+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349700096 unmapped: 50331648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:15.418433+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349700096 unmapped: 50331648 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:16.418639+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:17.418905+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:18.419137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:19.419322+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:20.419489+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:21.419653+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 50323456 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:22.419922+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 50315264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:23.420175+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 50315264 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:24.420314+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 50307072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:25.420498+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 50307072 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:26.420652+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349732864 unmapped: 50298880 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:27.420770+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 50290688 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:28.420978+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 50290688 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:29.421142+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 50290688 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:30.421304+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 50290688 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:31.421444+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 50290688 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:32.421600+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:33.421766+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:34.422019+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:35.422179+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:36.422406+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:37.422597+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:38.422754+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:39.422984+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 50282496 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:40.423184+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 50274304 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:41.423355+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 50274304 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:42.423511+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 50266112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:43.423802+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 50266112 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:44.424023+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 50257920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:45.424144+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 50257920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:46.424477+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 50257920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:47.424708+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 50257920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:48.425001+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 50257920 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:49.425187+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 50249728 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:50.425386+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:51.425582+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:52.426037+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:53.426210+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:54.426410+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:55.426569+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 50241536 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:56.426729+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:57.426964+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:58.427137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:20:59.427325+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:00.427475+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:01.427704+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:02.427996+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:03.428199+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 50225152 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:04.428437+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 50216960 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:05.428623+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 50216960 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:06.428823+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 50216960 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:07.428978+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 50200576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:08.429160+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 50200576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:09.429459+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 50200576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:10.429677+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 50200576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:11.429840+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 50200576 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:12.430030+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:13.430188+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:14.430358+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:15.430529+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:16.430735+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:17.431047+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:18.431292+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:19.431530+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 50192384 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:20.431757+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 50176000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:21.431961+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 50176000 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:22.432144+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 50167808 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:23.432324+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:24.432533+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:25.432726+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:26.433001+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:27.433215+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:28.433385+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:29.433597+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 50159616 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:30.433778+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:31.433986+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:32.434213+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:33.434364+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:34.434556+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:35.434767+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 50151424 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:36.435031+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:37.435180+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:38.435334+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:39.435552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:40.435726+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:41.435867+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 50135040 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:42.436027+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 50126848 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:43.436195+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 400 writes, 997 keys, 400 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 400 writes, 182 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 50126848 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:44.436411+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:45.436619+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:46.436812+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:47.437016+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:48.437157+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:49.437355+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:50.437577+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:51.437746+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 50118656 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:52.437894+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 50110464 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:53.438057+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 50102272 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:54.438214+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 50094080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:55.438354+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 50094080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:56.438587+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 50094080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:57.438758+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 50094080 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:58.439011+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 50077696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:21:59.439195+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 50077696 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:00.439377+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:01.439540+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:02.439682+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:03.439838+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:04.440015+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:05.440189+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:06.440416+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:07.440608+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 50069504 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:08.440790+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:09.441015+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:10.441213+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:11.441461+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:12.441651+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:13.441815+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 50053120 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:14.442033+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 50044928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:15.442173+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 50044928 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:16.442327+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 50036736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:17.442484+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 50036736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:18.442699+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 50036736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:19.442919+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 50036736 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:20.443105+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 50028544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:21.443279+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 50028544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:22.443416+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 50028544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:23.443602+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 50028544 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:24.443736+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:25.443990+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:26.444158+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:27.444358+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:28.444533+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:29.444751+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 50020352 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:30.445067+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 50012160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:31.445272+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 50012160 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:32.445516+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:33.445699+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:34.445877+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:35.446065+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:36.446297+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:37.446499+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:38.446670+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:39.446897+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 49995776 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:40.447028+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:41.447169+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:42.447300+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:43.447441+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:44.447576+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:45.447663+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:46.447811+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:47.448034+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 49979392 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:48.448203+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:49.448356+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:50.448477+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:51.448633+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:52.448771+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:53.448906+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:54.449023+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:55.449148+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432510 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:56.449310+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 49971200 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f1f000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:57.449420+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 49963008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:58.449580+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 49963008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:22:59.449775+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 49963008 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 439.499328613s of 440.344421387s, submitted: 26
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:00.449899+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 49946624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:01.450132+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 49946624 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:02.450316+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 49905664 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:03.450471+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:04.450659+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:05.450810+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:06.450974+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:07.451137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:08.451313+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:09.451528+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:10.451696+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:11.451968+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:12.452076+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:13.452239+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 49897472 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:14.452403+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:15.452577+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:16.452741+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:17.452913+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:18.453123+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:19.453355+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:20.453506+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:21.453676+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:22.453852+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 49889280 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:23.454043+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 49881088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:24.454196+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 49881088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:25.454337+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 49881088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:26.454441+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 49881088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:27.454602+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 49881088 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:28.454757+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 49872896 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:29.455045+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 49872896 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:30.455251+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 49872896 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:31.455400+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 49872896 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:32.455587+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 49864704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:33.455718+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 49864704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:34.455910+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 49864704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:35.456160+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 49864704 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:36.456284+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:37.456451+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:38.456642+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:39.456889+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:40.457148+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:41.457302+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 49856512 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:42.457486+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 49848320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:43.457679+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 49848320 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:44.458082+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:45.459303+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:46.459604+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:47.459768+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:48.460772+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:49.461193+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:50.461367+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:51.461489+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 49840128 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:52.461672+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 49823744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:53.461883+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 49823744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:54.462034+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 49823744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:55.462198+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 49823744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:56.462482+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 49823744 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:57.462705+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 49815552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:58.462852+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 49815552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:23:59.463020+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 49815552 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:00.463200+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:01.463509+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:02.463792+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:03.463938+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:04.464082+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:05.464240+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:06.464377+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 49807360 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:07.464573+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 49799168 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:08.464718+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:09.464871+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:10.465029+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:11.465182+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:12.465377+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:13.465559+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:14.465708+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:15.465916+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 49782784 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:16.466198+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:17.466665+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:18.466989+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:19.468371+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:20.468552+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:21.468791+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:22.469172+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 49774592 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:23.469536+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 49766400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:24.469695+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 49766400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:25.470225+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 49766400 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:26.470586+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:27.470837+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:28.471209+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:29.471649+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:30.471916+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:31.472330+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 49750016 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:32.472493+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:33.472750+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:34.473044+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:35.473397+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:36.473621+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:37.473898+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:38.474184+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 49733632 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:39.474451+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 49725440 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:40.474698+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:41.474889+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:42.475120+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:43.475434+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:44.475709+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:45.475998+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:46.476204+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:47.476445+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 49709056 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:48.476608+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:49.476797+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:50.476994+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:51.477110+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:52.477293+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:53.477477+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:54.477617+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:55.477743+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 49700864 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:56.477857+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 49692672 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:57.478018+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 49684480 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:58.478191+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:24:59.478399+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:00.478517+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:01.478700+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:02.478868+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:03.479002+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 49676288 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:04.479137+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:05.479266+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:06.479459+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:07.479610+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:08.479757+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:09.479981+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:10.480131+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:11.480253+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 49668096 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:12.480392+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 49659904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:13.480532+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 49659904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:14.480702+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 49659904 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:15.480871+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 49651712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:16.481003+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 49651712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:17.481215+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 49651712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:18.481407+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 49651712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:19.481628+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 49651712 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:20.481789+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 49635328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:21.481982+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 49635328 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:22.482151+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:23.482299+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:24.482417+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:25.482636+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:26.482964+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:27.483122+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 49627136 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:28.483291+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 49618944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:29.483498+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 49618944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:30.483663+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 49618944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 07 15:26:04 compute-0 ceph-osd[88039]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 07 15:26:04 compute-0 ceph-osd[88039]: bluestore.MempoolThread(0x55daf63d1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431630 data_alloc: 218103808 data_used: 1179648
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:31.483792+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350642176 unmapped: 49389568 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}'
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:32.483966+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350674944 unmapped: 49356800 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: tick
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_tickets
Oct 07 15:26:04 compute-0 ceph-osd[88039]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-07T15:25:33.484182+0000)
Oct 07 15:26:04 compute-0 ceph-osd[88039]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x1ff81f/0x3ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 07 15:26:04 compute-0 ceph-osd[88039]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 49618944 heap: 400031744 old mem: 2845415832 new mem: 2845415832
Oct 07 15:26:04 compute-0 ceph-osd[88039]: do_command 'log dump' '{prefix=log dump}'
Oct 07 15:26:04 compute-0 nova_compute[259550]: 2025-10-07 15:26:04.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:04 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23539 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:05 compute-0 nova_compute[259550]: 2025-10-07 15:26:05.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:05 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3121166231' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 07 15:26:05 compute-0 ceph-mon[74295]: from='client.23539 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 07 15:26:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418896864' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 07 15:26:05 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:05 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 07 15:26:05 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901616632' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:26:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 07 15:26:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541072716' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2418896864' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mon[74295]: pgmap v3744: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3901616632' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2541072716' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 07 15:26:06 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240561684' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 07 15:26:06 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23549 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:07 compute-0 systemd[1]: Starting Hostname Service...
Oct 07 15:26:07 compute-0 systemd[1]: Started Hostname Service.
Oct 07 15:26:07 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/4240561684' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 07 15:26:07 compute-0 ceph-mon[74295]: from='client.23549 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 07 15:26:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341969430' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 07 15:26:07 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:07 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 07 15:26:07 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754575224' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 07 15:26:08 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23555 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1341969430' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 07 15:26:08 compute-0 ceph-mon[74295]: pgmap v3745: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:08 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1754575224' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 07 15:26:08 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 07 15:26:08 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235390122' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 07 15:26:08 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23559 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:09 compute-0 ceph-mon[74295]: from='client.23555 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:09 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/235390122' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 07 15:26:09 compute-0 ceph-mon[74295]: from='client.23559 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:09 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23561 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:09 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:09 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 07 15:26:09 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653424174' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 07 15:26:09 compute-0 nova_compute[259550]: 2025-10-07 15:26:09.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:10 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 07 15:26:10 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395958087' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 07 15:26:10 compute-0 nova_compute[259550]: 2025-10-07 15:26:10.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23567 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:10 compute-0 ceph-mon[74295]: from='client.23561 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:10 compute-0 ceph-mon[74295]: pgmap v3746: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3653424174' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 07 15:26:10 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2395958087' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23569 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 07 15:26:10 compute-0 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 07 15:26:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:26:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 07 15:26:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359585318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 07 15:26:11 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645045647' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:11 compute-0 ceph-mon[74295]: from='client.23567 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mon[74295]: from='client.23569 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/359585318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2645045647' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 07 15:26:11 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23575 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:12 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23577 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:12 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 07 15:26:12 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123965329' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:26:12 compute-0 ceph-mon[74295]: pgmap v3747: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:12 compute-0 ceph-mon[74295]: from='client.23575 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:12 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2123965329' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 07 15:26:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144088258' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 07 15:26:13 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292794537' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:13 compute-0 ceph-mon[74295]: from='client.23577 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1144088258' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2292794537' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:13 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23585 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 07 15:26:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312920469' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:26:14 compute-0 ovs-appctl[470825]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:26:14 compute-0 ovs-appctl[470831]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:26:14 compute-0 ovs-appctl[470835]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 07 15:26:14 compute-0 nova_compute[259550]: 2025-10-07 15:26:14.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:14 compute-0 ceph-mon[74295]: pgmap v3748: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:14 compute-0 ceph-mon[74295]: from='client.23585 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:14 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/312920469' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 07 15:26:14 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 07 15:26:14 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3298842630' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 07 15:26:15 compute-0 nova_compute[259550]: 2025-10-07 15:26:15.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 07 15:26:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 07 15:26:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237046558' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:15 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:15 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 07 15:26:15 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502557418' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 07 15:26:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3298842630' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 07 15:26:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3237046558' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:15 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1502557418' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 07 15:26:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 07 15:26:16 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23595 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:16 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 07 15:26:16 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890337291' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 07 15:26:16 compute-0 ceph-mon[74295]: pgmap v3749: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:16 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/1890337291' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 07 15:26:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451280315' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23601 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct 07 15:26:17 compute-0 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 07 15:26:17 compute-0 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239718958' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mon[74295]: from='client.23595 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/2451280315' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 07 15:26:17 compute-0 ceph-mon[74295]: from='client.? 192.168.122.100:0/3239718958' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 07 15:26:18 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23605 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 07 15:26:18 compute-0 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23607 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
